Can Machine Learning Use Knowledge instead of Data? Deep Cloning vs Deep Learning


The field of Machine Learning (ML) is defined by most people as exclusively a branch of data science, which is incorrect in principle. The main goal is to make computers perform cognitive skills similar to those of the human brain, and to imitate how the human brain learns and thinks. Why use data only? Isn’t most of our learning based on knowledge consumption?

The human brain learns mostly from knowledge, not from data!

As a result, we need machine learning methods that use knowledge directly. This area of research has not been explored as much as its data-driven counterpart (deep learning) because of the challenge of Knowledge Representation (KR) and the difficulty of computerized ontology creation.

KR methods such as semantic nets and logico-linguistic modeling have a long history of R&D using static/given knowledge, but not in the context of “learning”. So the question is: how can we extend KR methods into a “learning” method? This brings us to the new idea of deep cloning, where KR is molded into a neural-network-like structure poised for learning by reading.
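As a sketch of the starting point, a semantic net can be held in a few lines of code: labeled nodes connected by typed relations. The class and relation names below are illustrative only, not part of any deep cloning implementation.

```python
from collections import defaultdict

class SemanticNet:
    """A minimal semantic network: labeled nodes linked by typed relations."""
    def __init__(self):
        # adjacency: node -> list of (relation, node) pairs
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def related(self, subject, relation):
        """Return every node reachable from `subject` via `relation`."""
        return [o for r, o in self.edges[subject] if r == relation]

# Encoding "A canary is a bird; a bird can fly."
net = SemanticNet()
net.add("canary", "is_a", "bird")
net.add("bird", "can", "fly")
print(net.related("canary", "is_a"))  # → ['bird']
```

The point of the structure is that relations carry explicit meaning (`is_a`, `can`), unlike the anonymous weights of a deep learning network.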

Can Computers Learn by Reading?


Knowledge-based learning methods make it possible for computers to learn by reading, similar to how we educate ourselves. Once a deep cloning system is set up, a computer can start reading books (text) to learn a subject and answer questions about it. The trade-off is between the difficulty of ontological (knowledge-based) learning and the advantages of independence from training on large data (corpora) and from issues like convergence and generalization.

Advantages of Knowledge-based Learning
There are a number of advantages to this approach in comparison to data-driven methods, as outlined below:

  • One-shot Machine Learning: Since knowledge does not require a supervised reference point, learning becomes one-shot machine learning, devoid of the convergence problems encountered in deep learning.
  • Not Stuck in the Past: Data-driven models require data collected from past experience. This makes them vulnerable when applied to new things (e.g., a new car, plane, drug, house, neighborhood, or disaster). Knowledge-based systems are not biased by the past and can employ new knowledge immediately.
  • Knowledge is Less Limited than Data: Availability and abundance of data do not guarantee its completeness, and data can still be limited in explaining the process it comes from. Weather prediction is a good example. Knowledge, on the other hand, represents the best of the available experience.
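The one-shot point can be illustrated with a toy ingestion routine: a single pass over a sentence yields a stored fact that is usable immediately, with no training loop or convergence criterion. The subject–verb–object split below is a deliberate oversimplification of real ontological parsing.

```python
import re

def ingest(sentence, kb):
    """One-shot ingestion: a naive subject-verb-object split turns a
    sentence into a stored fact in a single pass -- no training loop."""
    m = re.match(r"(\w+) (\w+) (\w+)", sentence.strip().rstrip("."))
    if m:
        kb.append(m.groups())

kb = []
ingest("Aspirin treats headaches.", kb)
ingest("Headaches cause discomfort.", kb)
# Both facts are usable immediately, after one pass over the text.
print(kb)  # → [('Aspirin', 'treats', 'headaches'), ('Headaches', 'cause', 'discomfort')]
```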

Fundamental Differences
In processing natural language and representing knowledge (after reading a text), a deep cloning network (shown on the left) comprises layers with different objectives and different neuron functions. In contrast, deep learning (shown on the right) is a homogeneous architecture of neurons dedicated to minimizing the error at the output in a supervised mode of learning. Despite the many variations of deep learning, no neuron activity is designated for any linguistic role.


Knowledge representation on the left can be a one-shot process using only the text of the knowledge, whereas learning on the right requires long training cycles using a corpus far larger than what is needed on the left.

Answering Questions


Knowledge-based machine learning can answer questions about the content it has learned with high precision, using the ontological connections shown in the network picture above. Shown above is a hypothetical case, where a question presented to the network finds its most relevant answer through those connections. In the case of partial connections, the network puts more emphasis on target, event, and instrument (in that order) and produces answers with an accuracy score. Depending on the type of application, a threshold can be set to declare “no answer” if the best-scoring sentence falls below it. With such a capability, the chatbot becomes self-aware of its performance and can report how well it did in answering questions. This can be further expanded to social learning, where chatbots ask for feedback to learn how to answer particular questions.
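A rough sketch of that scoring scheme follows. The weights and threshold are assumptions for illustration; the text only states the ordering (target over event over instrument) and the existence of a cutoff.

```python
# Hypothetical weights reflecting the stated order: target > event > instrument.
WEIGHTS = {"target": 0.5, "event": 0.3, "instrument": 0.2}
THRESHOLD = 0.6  # assumed application-specific cutoff

def score(question, sentence):
    """Weighted sum over the roles the question specifies and the sentence matches."""
    return sum(w for role, w in WEIGHTS.items()
               if role in question and question[role] == sentence.get(role))

def answer(question, sentences):
    """Return the best-scoring sentence, or 'no answer' below the threshold."""
    best = max(sentences, key=lambda s: score(question, s))
    s = score(question, best)
    return (best["text"], s) if s >= THRESHOLD else ("no answer", s)

sentences = [
    {"text": "The surgeon removed the tumor with a laser.",
     "target": "tumor", "event": "remove", "instrument": "laser"},
    {"text": "The nurse recorded the temperature.",
     "target": "temperature", "event": "record", "instrument": None},
]
q = {"target": "tumor", "event": "remove"}  # "What was removed?"
print(answer(q, sentences))  # → ('The surgeon removed the tumor with a laser.', 0.8)
```

Reporting the score alongside the answer is what lets the chatbot describe its own confidence, as the paragraph above suggests.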

Knowledge Breeding


More impressive than answering questions, deep cloning can breed new knowledge from the content it has learned, as shown on the right. This is logic resolution: using existing knowledge and its ontological connections to produce candidate new knowledge. Breeding new knowledge is one of the most exciting aspects of learning algorithms, and it is not as straightforward with data-driven models such as deep learning. One advantage of knowledge-driven machine learning is that the “new knowledge” is transparent (it can be verified by human inspection), whereas the same cannot be said for data-driven deep learning.
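A minimal sketch of such a resolution step, assuming facts are stored as subject–relation–object triples: chaining two facts that share a middle term proposes a candidate new fact, which a human can then inspect. This is where the transparency claimed above matters, since a candidate can be wrong and still be checkable.

```python
def breed(facts):
    """One resolution pass: chain (a, r1, b) with (b, r2, c) into a
    candidate new fact (a, r2, c). Candidates are only proposals --
    they stay transparent and open to human verification."""
    new = []
    for a, r1, b in facts:
        for b2, r2, c in facts:
            if b == b2 and (a, r2, c) not in facts:
                new.append((a, r2, c))
    return new

facts = [("aspirin", "treats", "headache"),
         ("headache", "causes", "discomfort")]
print(breed(facts))  # → [('aspirin', 'causes', 'discomfort')]
```

Note that this particular candidate is a poor inference, which is exactly why human-verifiable output is an advantage over an opaque model.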


This article is brought to you by exClone, a chatbot technology provider.


Chatbots Obey the Two Principles of the Human Brain: (1) Laziness, (2) Stimulus Junkie

Let’s start with the laziness aspect. If I flash two pictures in front of your eyes in a split second, you will recognize one picture instantly, and you will have no clue about the other. Guess which one is which?

For evolutionary reasons, the human brain’s cognitive capacity is largely reserved for image recognition, to detect dangers instantly. Obviously, a tiger would not send you a text message before attacking; therefore, “reading” is not a biological priority. Since humans have been reading for only a few thousand years, we have not yet evolved to balance the picture above. As a result, “reading” is a painful and tiresome activity; we all know this from our school days. Hence the saying “a picture is worth a thousand words.”

Now the same comparison can be made with these two images. The image that replaced the tiger is still not as easily recognizable, but it is much easier on the eye. Most importantly, it promotes focus: one screen, one place, one single action for interaction. The reason messaging platforms are so widespread and popular is this basic principle of FOCUS, which plays into the hands of a lazy brain. Probably half of all messaging activity includes pictures and videos, satisfying the hunger of a lazy brain through this focused interaction.

The second principle is that the human brain is a STIMULUS junkie. In a boring environment, a human brain will always steer toward something more exciting. Curiosity and learning have strong ties to the evolutionary instincts of survival. It is “in our nature.” Stimuli can now be delivered instantly by mobile devices. Chatting/texting with a friend on a mobile phone while socializing with others has recently become a widely accepted form of social interaction. Everybody silently agrees that we all need to be stimulated even during the short, dull moments of our social gatherings. It may actually improve our social relationships, since we no longer have to endure boredom when we get together.

If people have already made chat/text one of the most precious priorities in their lives, then why not use the same tool (chatbots) to interact with computers, databases, websites, machines, and even books?

That is the point of departure for this new wave of realization across the tech world. The only problem is that chatbots are not as easy to develop as many people assume. Building them requires the culmination and curation of machine learning, natural language processing, and the psychology of human dialogue. These are not easy skills to deploy, and the market will eventually select the fittest. Chatbots are here to stay and will occupy our lives for decades to come.