
One Line Answers

Stashed in: Best PandaWhale Posts, DuckDuckGo


There needs to be a search engine that provides one-line answers to questions. I just finished two tasks: helping my kids with their homework and debugging the intricacies of a LAMP program as I ported it from Ubuntu to CentOS. If you Google a question, you get paid slots, a correct page heading but no relevant info, signups to see answers, etc. Any answer marked [Solved] should be extracted, voted on, and returned as a one-line answer. How do you tell the version of your MySQL? Where's the repo file for Chromium? Does pecl-xxx run on 64-bit? How do you turn on 64-bit Flash in Chrome? Or even simple stuff: where did the Chumash Indians live? What crafts did they do? Right now Google results take some of the excerpts out, but not exactly what's needed. Maybe we need a 140-character best answer, similar to how they used to run the code obfuscation/optimization contests: who can write the best 140-character answer? Then include it inline in the search results.
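The "extract, vote, return a 140-character answer" idea above can be sketched in a few lines. This is a toy illustration, not any real engine's logic: the candidate answers are hard-coded stand-ins for text scraped from [Solved] threads, and all names are my own.

```python
# Toy sketch of the distill-and-vote idea: from a pool of candidate
# answers (text, votes), return the highest-voted one that fits in
# 140 characters. Candidates here are hard-coded placeholders.

def best_answer(candidates, limit=140):
    """candidates: list of (answer_text, votes) pairs."""
    fitting = [(text, votes) for text, votes in candidates if len(text) <= limit]
    if not fitting:
        return None  # nothing short enough to show inline
    return max(fitting, key=lambda pair: pair[1])[0]

candidates = [
    ("Run `mysql --version` (or `SELECT VERSION();` inside the client).", 42),
    ("Check the reference manual on dev.mysql.com and work through it.", 7),
]
print(best_answer(candidates))
# -> Run `mysql --version` (or `SELECT VERSION();` inside the client).
```

The hard part, of course, is not the selection step but the archaeology: finding and cleaning the candidate answers in the first place.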

Do you think the best such engine is powered by humans or powered by machines?

Is it curated or is it aggregated?

Will asking the #lazyweb on Twitter suffice, or is lazyweb insufficient?

The lazyweb takes too long, but distilling other people's lazyweb answers might work. I think it's a combination of archeology and presentation that's missing. The distilling piece might be doable by machines.

One way to approach this problem predates the internet and the web: basically, how do we extract "useful" information from long texts like the books and papers that were the output of scholarly work? This is what info-science pioneer Paul Otlet tried, although with manual labor and index cards.

A foundational assumption there is that discrete facts can be separated out of their context.

Google (and search engines generally), with their presentation of "relevant" snippets, are kind of a continuation of that: extracting from semi-structured text, but using automated techniques.

DuckDuckGo's Zero-click Info (and Google's ill-fated Coop and Subscribed Links) were something a little different: given more structured info, they try to "answer" specific queries with facts. I guess Wolfram Alpha is like that too in some ways.

The broad alternative strategy is, rather than pre-coordinating a way to refine and repurpose existing information sources, to create new ones focused on question-answering as needed -- I see , , and the newly released to be in that category.

Given the decreasing quality of search results, the automated crap content farms used to game them for financial reasons, and the fact that monopolies tend not to innovate quickly in the absence of competition, I expect much of the interesting work in improving the experience for these types of fact-queries will come from the latter category.

The big problem I see for those sites is scale and authority. It's hard to scale a community to answer lots of questions, it's hard to incentivize it as communities grow, and often the knowledge and authority to answer questions effectively is inversely correlated with the amount of time you have to mess around on the internet.

Adam this was a great answer. I'm giving you Props!!

I'm working on something that (among other things) should address exactly this: crafting perfect snippets, by human cooperation, for any query that could reasonably be answered by a perfect snippet. Ping me for more details!

Hey Gordon! But of course you *always* do interesting stuff! Will drop you an email offlist.

Where can we learn more?

I presented the rationale and demo'd an early prototype in a recorded presentation last year, which you can view at . You can also follow @thunkpedia for tweets-on-the-theme and project updates.

Such updates have been rare because other things this year put Thunkpedia on the back burner for a while, but I've recently picked it back up and will have something open for real users "real soon".

Sweet. How big is a thunk?

Initial thunk format is...

Always displayed:

  • a 1-line (~66 char) context: this appears before the main text, in lower-contrast, to provide a label and disambiguation. However, it need not be unique per thunk, so is not quite a traditional 'title'.
  • an up-to-333-character main body 'text': Just enough for a complete sentence or a few, with simple formatting … but no inline outlinks.

Displayed below if expanded in-place (aka 'the annex'):

  • about 1 line's worth of 'hooks': keywords/synonyms/tags to assist findability (white-hat SEO)
  • 'links': up to 3 outlinks, with title/anchortext, to resources that support the above 'text'
  • other metadata to-be-determined
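The format above translates naturally into a small data structure. Here's a minimal sketch assuming the limits stated in this thread (a ~66-character context line, a 333-character body, up to 3 supporting links); the class and field names are my own invention, not Thunkpedia's:

```python
from dataclasses import dataclass, field

# Sketch of the thunk format described above. Limits (66/333 chars,
# 3 links) come from the post; everything else is illustrative.

@dataclass
class Thunk:
    context: str  # ~1-line label/disambiguation, shown in lower contrast
    text: str     # main body: a complete sentence or a few, no inline links
    hooks: list = field(default_factory=list)  # keywords/synonyms/tags
    links: list = field(default_factory=list)  # up to 3 supporting outlinks

    def __post_init__(self):
        if len(self.context) > 66:
            raise ValueError("context exceeds the ~66-character line")
        if len(self.text) > 333:
            raise ValueError("text exceeds 333 characters")
        if len(self.links) > 3:
            raise ValueError("at most 3 supporting links")

t = Thunk(context="MySQL · version check",
          text="Run `mysql --version` at a shell prompt.")
```

Validating the limits at construction time keeps every thunk scannable by design, which matters for the in-place result lists described below.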

The goal is for result lists of thunks to be easy to scan, read, and reorder in-place. This should minimize the need to navigate away or reformulate queries to get the full story, and better enable instant inline review/feedback.

This is awesome and reminds me a lot of Ted Nelson for some reason.
