Last year I joined the beta program for a new semantic search startup out of Great Britain that was making the rounds in the IT press. True Knowledge claimed to let users perform natural language searches, with the system inferring answers from previously entered data and from the meaning of the words used to form the question. At the time the system needed a great deal of training and was unable to answer simple queries, but over time, as users have trained it using the training facility, it has come to accurately answer questions such as "who is J Lo's husband?", "what time is it in London now?" and, thanks to some knowledge I added myself, "Is venus bigger than mars?"

I was curious to see whether the newly launched and much hyped competitor Wolfram Alpha could match up to True Knowledge out of the gate. To be fair, it launched just today (quite literally) and has a much smaller base of facts to draw from, so I would not hold it against the system if it can't answer the J Lo question or the planet size comparison. Inferring the time in London, though, should be relatively straightforward given a set of inferences for which I would hope the Wolfram team had provided seed data (the countries and capitals of the world, their time zones, and the semantic meaning of questions such as the one I posed). Let's take a look at the results:
Let's start with question 1)
"Who is J Lo's husband?"
True Knowledge returned the following screen (click for large size):
You'll notice it returned "Marc Anthony" without any equivocation. It also accompanies the result with the semantic reasoning that led to the conclusion and the facts used to derive it. Finally, just in case the returned result is wrong (the underlying fact could change before the knowledge base is updated), it returns a traditional page of search results for verification purposes.
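To make the idea concrete, here is a toy sketch (in Python, and very much not True Knowledge's actual engine) of how an answer plus the facts supporting it might fall out of a small store of subject-relation-object facts. The fact store, relation names, and helper functions are my own hypothetical illustration.

```python
# Toy sketch only: answer "who is X's husband?" from a tiny store of
# subject-relation-object facts, returning both the answer and the facts
# used to derive it (roughly what True Knowledge displays on its result page).

FACTS = [
    ("Jennifer Lopez", "is married to", "Marc Anthony"),
    ("J Lo", "is an alias of", "Jennifer Lopez"),
]

def resolve_alias(name):
    """Follow 'is an alias of' facts to a canonical name."""
    for subj, rel, obj in FACTS:
        if rel == "is an alias of" and subj == name:
            return obj
    return name

def who_is_husband_of(name):
    """Return (answer, supporting_facts), or (None, []) if nothing matches."""
    canonical = resolve_alias(name)
    for subj, rel, obj in FACTS:
        if rel == "is married to" and subj == canonical:
            support = [f for f in FACTS if f[0] in (name, canonical)]
            return obj, support
    return None, []

answer, support = who_is_husband_of("J Lo")
print(answer)   # Marc Anthony
print(support)  # the alias fact and the marriage fact that justify the answer
```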
Now let us take a look at what Wolfram Alpha returns: a swing and a miss. "Wolfram|Alpha isn't sure what to do with your input." Not exactly the friendliest response ("I don't know" would have been more personable), but I expected it to fail, since this requires knowledge the site probably doesn't have yet. However, the site doesn't even try to return a list of traditional search results the way True Knowledge did; it just stops in its tracks. The lack of a fail-over to traditional search is a big omission that will hamper the site's ability to grow: people who don't find the answers they are looking for will immediately go elsewhere, whereas True Knowledge at least tries to give you a traditional search of the provided query. Let's see what Wolfram Alpha makes of a question it should have default data for; on to the next one.
"what time is it in London now?"
First up, True Knowledge.
Yet again True Knowledge returns an immediate and definitive answer. It follows up with its reasoning (based on the available time zones and the current time in the asking jurisdiction; I am on EST), and finally it provides traditional search results, which in this case are useless since the question was answered directly above.
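For the curious, the core of the London question reduces to a time-zone conversion. Here is a minimal Python sketch of that step using the standard zoneinfo module; the zone names and the assumption that the asker is on US Eastern time are mine, and this only illustrates the kind of lookup involved, not how either engine actually implements it.

```python
# Illustrative only: "what time is it in London now?" as a time-zone
# conversion, given the asker's zone and the target city's zone.
# Requires Python 3.9+ for zoneinfo.
from datetime import datetime
from zoneinfo import ZoneInfo

def time_in(city_zone, asker_zone="America/New_York"):
    """Return the current time in city_zone and in the asker's zone,
    the two pieces a semantic engine would show as its reasoning."""
    now_utc = datetime.now(ZoneInfo("UTC"))
    return (now_utc.astimezone(ZoneInfo(city_zone)),
            now_utc.astimezone(ZoneInfo(asker_zone)))

london, local = time_in("Europe/London")
print("London:", london.strftime("%H:%M"))
print("Local (EST/EDT):", local.strftime("%H:%M"))
```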
Now let us see how Wolfram Alpha handles this; it should have access to the same data by default and should return an answer. Unfortunately, it doesn't understand the use of the word "now" at the end of the question and asks me to verify what I meant. That is reasonable, but True Knowledge was not confused by "now" and answered without needing confirmation. After clicking on the "did you mean:" link, the following page results:
Which, I must admit, looks impressive: it shows both the current local time and the requested time in London, so we can clearly see it is answering exactly what we asked, but it took a confirmation step to get there. It should be possible to teach the system that a trailing "now" in such a question is semantically inert, possibly using a statistical learning routine that lets it assume answers based on past patterns of asked questions.
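A crude illustration of what such normalization might look like, assuming (my assumption, nothing documented by either site) a fixed list of trailing tokens to ignore; a real system might instead learn that list statistically from users' confirmation clicks.

```python
# Toy normalization step: strip trailing tokens that add no meaning to a
# "current time" query before handing the question to the answering engine.
# The token list and example question are hypothetical.

INERT_TRAILING_TOKENS = {"now", "please"}

def normalize(question):
    words = question.strip().rstrip("?").split()
    while words and words[-1].lower() in INERT_TRAILING_TOKENS:
        words.pop()
    return " ".join(words) + "?"

print(normalize("what time is it in London now?"))
# -> "what time is it in London?"  (the form Wolfram Alpha already handles)
```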
Finally, let us ask a question that requires knowledge that is novel and would have to be entered into the system: how does Wolfram Alpha fare against True Knowledge here? Since I added the relevant knowledge to True Knowledge myself, it is guaranteed to return a result, but showing the screens will illustrate a big problem with Wolfram Alpha as it is currently launched.

Final question:
"is venus bigger than mars?"
True Knowledge returns:
Despite the fact that I added the knowledge necessary for it to answer the question definitively, it runs into a disambiguation problem. I used the word "bigger" (the common way most people would ask the question) rather than the more precise "has more mass" or "has larger area"; however, True Knowledge understands the difference between these and asks me to confirm which I intend. It even has an entry where it does not assume "Mars" is the planet and instead offers the candy corporation "Mars, Incorporated." In this example that would lead to the wrong comparison, but if there were a Venus Corp to compare against, it might go on to infer that "bigger" meant larger market capitalization or more employees, and if that knowledge were not present it would allow us to add it.

Clicking on the last option, "Is Venus, the second planet from the Sun in the Solar System currently of greater surface area than the object Mars, the fourth planet from the sun in the solar system?", which correctly describes the intent of our question, reveals the following page: a definitive "yes", along with the fact source (me, "sent2null") and the associated reference data (Wikipedia articles). It also still provides the traditional search results just in case the data has gone stale since it was added. I was curious whether the system would be able to infer the opposite question:
It is not sure again and asks for disambiguation. I select the last option, which compares the surface area of the two planets, and the next page shows:
It correctly uses the "size" data provided to infer that if Venus is bigger than Mars, then Mars cannot be bigger than Venus, and returns "no".
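For illustration, that inference amounts to evaluating a single asymmetric comparison in both directions. The sketch below uses the approximate surface areas from the Wikipedia-sourced facts I added; the function and data structure are hypothetical, not True Knowledge's internals.

```python
# One stored comparison answers both "is Venus bigger than Mars?" and its
# opposite, because "greater surface area" is an asymmetric relation.
# Surface areas are approximate, per the Wikipedia-sourced facts I added.

SURFACE_AREA_KM2 = {"Venus": 4.60e8, "Mars": 1.44e8}

def is_bigger_by_surface_area(a, b):
    return SURFACE_AREA_KM2[a] > SURFACE_AREA_KM2[b]

print(is_bigger_by_surface_area("Venus", "Mars"))  # True  -> "yes"
print(is_bigger_by_surface_area("Mars", "Venus"))  # False -> "no", inferred
# without needing a separate "Mars is not bigger than Venus" fact
```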
Now to see how Wolfram Alpha does: another swing and a miss, but this time we observe the real problem with the system as currently constructed. There is no obvious way to add knowledge about the query we just posed. In True Knowledge the add-knowledge interface is quite intuitive (the first iteration wasn't as much, but it was recently updated). Wolfram Alpha does have such a tool, but it is not prominent on each results page as it should be; it appears in a list of items only AFTER clicking the "are you an expert in this field?" link that shows up under the incorrect or incomplete answer. In True Knowledge it took me a few minutes to add the Mars and Venus data, which allowed the system to correctly answer those questions, provide references for the answers, and, as a backup, provide traditional search results. It will be interesting to see how Wolfram Alpha improves over time, particularly without an easily accessed knowledge-adding tool.

One thing Wolfram Alpha has that True Knowledge didn't, and that may help it, is the media hype surrounding Stephen Wolfram, the mathematician behind the project. But we learned from Google that you don't need a big media splash to win over users when your product provides a superior technological advantage. In this case True Knowledge had the head start: it has more knowledge stored, makes training easier, and seems better at making sure to disambiguate queries. We'll see if Wolfram Alpha can make up the difference.
Update: May 18, 2009
I thought about my comparisons above and felt I wasn't being fair to Wolfram Alpha's area of expertise, mathematical computation, so I decided to compare a few more questions. I asked "how many cups in 50 liters?" It returned the answer (211) quickly and provided a full sheet of related data. I asked "how many inches in 10 miles?" and it again returned the answer plus additional details. I asked the cups question of True Knowledge and it failed at first, but when I rephrased it as "convert 50 liters to cups" it was able to infer the answer (with better precision, actually, than Wolfram Alpha). It seems the two engines are exploring different types of knowledge, but so far True Knowledge (probably due to its year-long head start) is better at figuring out answers and at being taught new knowledge. It will be interesting to see how quickly each fills in the gaps over time.
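For reference, the arithmetic both engines were asked to do is simple enough to check by hand; the sketch below assumes the US cup (236.5882365 mL), which is consistent with the "211" Wolfram Alpha returned.

```python
# Checking the conversions from the update: 50 liters to US cups, and
# inches in 10 miles. Conversion factors: 1 US cup = 236.5882365 mL,
# 1 mile = 5280 ft = 63,360 inches.

ML_PER_US_CUP = 236.5882365

def liters_to_cups(liters):
    return liters * 1000.0 / ML_PER_US_CUP

print(round(liters_to_cups(50), 2))  # 211.34, which rounds to the 211 shown
print(10 * 63360)                    # 633600 inches in 10 miles
```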