Just like a biological neural network, the 20Q AI gets smarter with experience. Today, at age 18, 20Q is a confident adult, but only because it has been absorbing information since its infancy. Burgener wrote the first version of the 20Q software back in 1988, when he put the program on a floppy disk and passed it around to all his friends. As more people play, 20Q gets better and better at understanding how each object is characterized. In 1995, Burgener wrote a version of the game that could run on the internet, where it still resides today at 20q.net.
After a surprising layoff, Burgener decided not to look for another job as a software programmer: it was time to focus solely on improving and promoting 20Q. Working with Radica Games, he created a miniature version of the neural net for the hand-held toy, which became a holiday season best-seller in 2004 and generated a huge amount of interest in his AI algorithm. Usually, training a neural network is a long and laborious process, requiring a huge investment of time before the AI is at all useful.
20Q gets around that problem by letting its players do the training. Because it is continuously learning from so many different teachers (visitors play the game every day), its knowledge is based on an average of the opinions of all of them. This sometimes leads to unexpected results: every once in a while 20Q hits you with a question that seems completely off the wall. Based on the answers it has received so far, 20Q keeps track of which objects are still likely; it then chooses a question that will cut the number of likely objects in half.
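To make that question-selection step concrete, here is a minimal sketch in Python. It is not Burgener's actual algorithm (20Q's knowledge lives in a trained neural network, not a hand-written table); the objects, questions, and yes/no entries below are invented purely for illustration.

```python
# A made-up knowledge table: which answer each object "expects" for each question.
KNOWLEDGE = {
    #             alive?  bigger than a lunchbox?  found indoors?
    "cat":       (True,   False,                   True),
    "elephant":  (True,   True,                    False),
    "toaster":   (False,  False,                   True),
    "car":       (False,  True,                    False),
}
QUESTIONS = ["Is it alive?", "Is it bigger than a lunchbox?", "Is it found indoors?"]


def best_question(candidates):
    """Pick the question whose yes/no split of the remaining candidates is most even."""
    best_q, best_imbalance = None, float("inf")
    for q in range(len(QUESTIONS)):
        yes_count = sum(1 for obj in candidates if KNOWLEDGE[obj][q])
        imbalance = abs(yes_count - (len(candidates) - yes_count))
        if imbalance < best_imbalance:
            best_q, best_imbalance = q, imbalance
    return best_q


candidates = set(KNOWLEDGE)           # every object is still possible
q = best_question(candidates)
print("Ask:", QUESTIONS[q])           # "Is it alive?" splits the four objects 2 vs 2
answer = True                         # suppose the player answers "yes"
candidates = {obj for obj in candidates if KNOWLEDGE[obj][q] == answer}
print("Still possible:", candidates)  # {'cat', 'elephant'}
```

Halving the pool at every step is what makes twenty questions go so far: 2^20 is roughly a million, so twenty well-chosen yes/no questions can in principle separate about a million objects.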
Because 20Q does not simply follow a binary decision tree, answering a question incorrectly will not throw it completely off. In any situation where someone might misunderstand a question or inadvertently answer incorrectly, the 20Q AI could approximate a human who has been trained to recognize those kinds of errors. Like a triage nurse, 20Q could theoretically learn how to accurately diagnose ailments by asking the right questions.
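One way to see why a single wrong answer is not fatal: instead of committing to a branch, a program can keep a running score for every object and let each answer nudge those scores up or down. This is only a sketch of that idea, not 20Q's real mechanism; the per-question "yes" probabilities below are invented stand-ins for the averaged opinions of past players.

```python
# "Soft" scoring: every answer adjusts every object's score instead of sending
# the program down an irreversible branch. Probabilities are invented.

P_YES = {
    #             alive?  bigger than a lunchbox?  found indoors?
    "cat":       (0.98,   0.05,                    0.80),
    "elephant":  (0.97,   0.99,                    0.03),
    "toaster":   (0.02,   0.04,                    0.95),
}


def update(scores, question, answered_yes):
    """Multiply each object's score by how plausible the player's answer is for it."""
    for obj in scores:
        p = P_YES[obj][question]
        scores[obj] *= p if answered_yes else (1.0 - p)


scores = {obj: 1.0 for obj in P_YES}
update(scores, 0, True)    # "Is it alive?"                  -> yes
update(scores, 1, False)   # "Is it bigger than a lunchbox?" -> no
update(scores, 2, False)   # "Is it found indoors?"          -> no (a mistake, for a cat)
print(max(scores, key=scores.get))   # still 'cat', despite the wrong third answer
```

Because the probabilities are averages over many players, they are almost never exactly 0 or 1, so a single contradictory answer dents a score without zeroing it.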
Though applications like these are years away, Burgener is confident that the 20Q AI will eventually be useful in many different ways.

Once a helioseismology researcher, once a professional singer, once a neuroscience lab tech, and always a cheesehead.
Hi Karen! Great to know how this 20Q works! Having worked for doctors for many years, I know how hard it can be to get people to explain exactly what ails them in a concise way, especially very young or very old patients. This would really be a great assistant in that process, particularly now that there is often a long wait for treatment and a form-filling-out process that often does NOT ask the correct or more specific questions!

Love, Aunt Luz.
Step 1: Tell the other players which category your mystery object fits into.

Step 2: Have one player ask a Yes or No question to try to learn more about the mystery object. Answer the question with a Yes or No.

Step 3: Have the players take turns asking Yes or No questions, up to a total of 20 questions. Encourage players to ask questions that build on answers already given, such as "Is it bigger than a lunchbox?"

This simple program demonstrates what you are talking about rather well. Once you get there you can click on the code link to see it: openbookproject.
Is that kind of AI available as a service? What if I could provide all the questions and answers and let it find them? And what do you call this kind of algorithm? Does it have a name?

That explains some of it. But when you consider incorrect answers and general ambiguity, it still seems not quite so straightforward.
While your answer is correct for 20 Questions, I think that Shaun's answer is more accurate: a simple nearest-neighbor learning algorithm, given enough user input, allows for some very accurate results.
Ah, true, they are similar, but definitely the nearest-neighbor approach makes more sense.

True, although the BASIC program Animal doesn't have a training algorithm to determine which questions to use and how high in the tree to put them. Performance with a trained decision tree should be much better.

I agree with the commenter that the questions Atwood got look very much like they were generated by the original Animal algorithm and not by a neural network. (Cerin)

It is using a learning algorithm. (Shaun Mason)
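The "training algorithm" mentioned above, the thing the BASIC Animal program lacks, could be something like the information-gain criterion that standard decision-tree learners (ID3 and its descendants) use to decide which question belongs at the root and which can wait. A toy sketch with invented data and questions:

```python
# Rank questions by information gain, as an ID3-style decision-tree learner would,
# to decide which questions to use and how high in the tree to put them.
import math

OBJECTS = {
    #             alive?  fur?   bigger than a lunchbox?
    "cat":       (1,      1,     0),
    "elephant":  (1,      0,     1),
    "toaster":   (0,      0,     0),
    "car":       (0,      0,     1),
    "pebble":    (0,      0,     0),
}
QUESTIONS = ["Is it alive?", "Does it have fur?", "Is it bigger than a lunchbox?"]


def entropy(n):
    """Bits needed to identify one of n equally likely objects."""
    return math.log2(n) if n else 0.0


def information_gain(q):
    yes = [o for o in OBJECTS if OBJECTS[o][q]]
    no = [o for o in OBJECTS if not OBJECTS[o][q]]
    total = len(OBJECTS)
    remaining = (len(yes) * entropy(len(yes)) + len(no) * entropy(len(no))) / total
    return entropy(total) - remaining


for q in sorted(range(len(QUESTIONS)), key=information_gain, reverse=True):
    print(f"{QUESTIONS[q]:32s} gain = {information_gain(q):.3f} bits")
# The highest-gain questions belong near the root; "Does it have fur?" tells you
# the least here, so a trained tree would push it further down.
```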
Is a nearest neighbour algorithm a good choice in this case? It would seem that it would be far too forgiving of wrong answers, and could end up with a massive number of dimensions, many of which would have no data. I'm assuming the use of Hamming distance, with one dimension per question. A decision tree seems a more natural fit.

The learning theory is the correct answer; it doesn't matter that it gives less "accurate" answers, because it becomes based on the mistakes everyone tends to make, which actually makes it better at guessing.
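Shaun's nearest-neighbour suggestion, with one dimension per question and Hamming distance, can be sketched like this; the stored vectors are invented for illustration and have nothing to do with the real 20Q database.

```python
# Nearest-neighbour guessing over yes/no answer vectors, one dimension per
# question, using Hamming distance. Toy data only.

DATABASE = {
    #             alive?  bigger?  fur?  indoors?
    "cat":       [1,      0,       1,    1],
    "elephant":  [1,      1,       0,    0],
    "toaster":   [0,      0,       0,    1],
    "car":       [0,      1,       0,    0],
}


def hamming(a, b):
    """Count the questions on which two answer vectors disagree."""
    return sum(x != y for x, y in zip(a, b))


def guesses(player_answers):
    """Objects ranked from nearest to farthest from the player's answers."""
    return sorted(DATABASE, key=lambda obj: hamming(DATABASE[obj], player_answers))


# The player is thinking of a cat but answers the last question "wrongly":
print(guesses([1, 0, 1, 0]))   # 'cat' is still first, at Hamming distance 1
```

A wrong answer just adds one unit of distance instead of eliminating the right object, which is exactly the "forgiveness" debated in the comments above.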