Interactive AI Podcasting Debut


If you like listening to a podcast while taking a walk, you are one of the millions of people who do so every day. Now you can experience something entirely new: interactive AI podcasting. Instead of being a passive listener, you control the flow and use of the content; you curate your own podcast experience.

What is an Interactive AI Podcast?

Interactive AI podcasts allow you to:

  • Have dialogue like talking with an assistant
  • Skip to the next story or next subject matter
  • Ask questions/get answers about something you heard in the story
  • Repeat sections of importance or pause for thinking
  • Express your opinion and provide feedback simply by talking
  • See images and videos accompanying the storyline
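To illustrate how such voice control might be wired up (a hypothetical sketch, not the actual exClone app code), a dispatcher could map recognized command words to playback actions and send everything else to the dialogue engine as a question:

```python
# A minimal sketch of routing listener voice commands.
# All command words and action names here are invented illustrations.

def route_command(utterance):
    """Map a listener's utterance to a podcast control action."""
    text = utterance.strip().lower()
    commands = {
        "skip": "advance_to_next_story",
        "next": "advance_to_next_story",
        "pause": "pause_playback",
        "repeat": "replay_current_section",
        "stop podcast": "end_session",
    }
    # Anything that is not a known control word is treated as a
    # question about the story and handed to the dialogue engine.
    return commands.get(text, "answer_question")
```

The interesting design point is the fallback: free-form speech is not an error but the main interaction channel.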

Interactive AI podcasts allow the content creator to:

  • Have the AI computer voice the story automatically
  • Learn where listeners skipped and disliked content
  • Understand what questions were posed, and what engaged listeners
  • Receive listener opinions and feedback
  • Overcome the limitations of conventional podcasts as described above

Try Dr. Margo, Interactive AI Podcasting, Subject: Coronavirus


Try the BETA version of interactive podcasting (links below)! The best experience is on mobile, wearing headphones. The link to download the iOS mobile app is here, and the Android app is here. The mobile apps support complete voice interaction: say or click on “Podcast” to start, then say “stop podcast” to end. The apps also provide images and videos accompanying the story.


The link for Dr. Margo is here, where you can test her in a Web browser. Make sure to click the speaker button to hear the podcast (the button is at the bottom, inside the search box). Note that the Web browser version does not accept voice input, only typing. Dr. Margo’s virtual expert will continue to serve after the podcast, answering questions and providing further information about the coronavirus.

A New Approach to Education and Corporate Training

Podcasts can be an effective educational tool, and that effectiveness is now enhanced by interactive AI control. Most importantly, the listener can talk to the podcast just as if talking to a teacher or a tutor. Once dialogue is introduced, the most important element of learning starts to emerge.

The beauty of mobile phone technology is that it connects us to an information source via headphones while we are busy with the mundane tasks of life. Working out, cycling, walking, cleaning, gardening, and more can all be accompanied by a podcast as we multi-task and expand our potential. Interactive AI Podcasts allow the listener to pause, repeat, ask a question, or skip ahead. These are all basic functions of learning, and they are further enhanced by images and videos.

Turn Your Documents into an AI Podcast

This new technology was created by exClone, which now offers its platform to you to try Interactive AI Podcasting. Click here to request a demo. The platform will be accessible by simple subscription soon.

Happy interactive podcasting!


This article is brought to you by exClone.

Request a Demo from exClone

Join the CHATBOTS group on LinkedIn.

You can follow exClone on Facebook and on LinkedIn.

exClone App (iOS) Ecosystem of virtual experts (beta)

exClone App (Android) Ecosystem of virtual experts (beta)

Turn your MS Word, PDF Documents Straight into Chatbots: Virtual Experts


It is finally here. You can now convert your MS Word and PDF documents into chatbots and virtual experts with exClone technology. No coding, no data sets to wrangle, no long training cycles, and no experience in AI required. This is the highest level of automation in the market today: all AI functions are tucked under the hood, invisible to the chatbot builder. As a result, the path between an expert and his/her virtual version involves no intermediary process or developer.

A Chatbot Learning from Documents Becomes a Virtual Expert
Siri, Cortana, Hey Google, and Alexa lack any expertise they can chat about. If a question of any complexity is asked, they point you to search results. exClone’s process yields a virtual expert that answers questions about a particular subject. Here is an example: Frank, a virtual expert on crystallography solutions using the Phenix software system. Frank was built straight from MS Word documents in a single-step process (we call it Instant Learning). The documents were written and curated by a real expert.



On-the-fly Learning by Conversations with Teachers After Deployment
In addition to learning from documents for deployment, exClone offers teaching virtual experts on-the-fly through conversations by designated teachers after deployment. This has a number of advantages one of which is the ability to update the system with new or modified knowledge anytime without the need for re-deployment.

Answering Questions at the Concept Level


The most powerful feature of the exClone system is its ontological answering capability, in which the words of a question and its answer do not match, but the concepts they refer to do. The proprietary machine learning algorithm (Instant Learning) achieves a nearly human level of understanding when answering questions. This means the system can handle hundreds of variant forms of a single question that all point to the same meaning, and thus return the same relevant answer. This is the ultimate goal in making computers understand language and learn knowledge correctly.

What does this Mean for Enterprises?
Documents in the world of enterprise are the main asset for encapsulating and preserving organizational expertise. Being able to create virtual experts out of these documents easily, with no specialized effort, means it is now scalable and inexpensive to launch enhancements to enterprise search, help desk, call center, and training systems.

With virtual experts, the workers and customers of an enterprise can access critical information rapidly, accurately, and efficiently via a conversational (messaging) interface. Such efficiency directly improves the bottom line.


Request a demo from exClone.

Machine/Deep Learning to Include Evolutionary, Experiential, and Instant Learning Components

In our quest to understand and replicate the cognitive capabilities of the human brain, the AI discipline has focused on the subject of learning rather unevenly. Whatever the non-scientific reasons, I felt compelled to raise awareness of the three most important components of learning, distinguished mainly by the “time” factor: evolutionary, experiential, and instant learning. Without taking all three forms of learning into account, we are unlikely to achieve ambitious goals in AI, regardless of how much computing power or data collection is available to us. The diagram below summarizes this concept.



When a wildebeest calf is born, it takes only a few minutes for her to run fast enough to escape from predators. Evolution has clearly hard-wired some learning, in terms of motor skills, into the blueprint of a newborn calf. Evolutionary learning is also evident in the distinct regions of a biological brain, which are almost always utilized in a predetermined manner. We can argue that the human species has developed a unique neuron structure suitable for language and logic in response to survival pressure through evolution. If this is true, then the idea of a “linguistic neuron” could be what separates us from animals.

Has the human species developed language-sensitive neurons in the brain through an evolutionary process, so that some neurons take on linguistic roles?

Evolutionary learning is like a factory setting: the initial condition, or starting assumptions, of any model we want to build for a specific learning task. This initial-condition step is what is missing in today’s deep learning methods.



Once a biological system is born, experiential learning begins along with the growth of the brain. In the case of humans, many activities, such as walking, speaking, learning to ride a bicycle, or playing the piano, fall into this type of learning, where repetition is the key. Today’s deep learning methods focus heavily on this model using artificial neural networks. Unfortunately, the network types and learning algorithms do not start from any biological inspiration, and there are no initial assumptions targeted at a certain type of learning. Consequently, most applications turn into a nonlinear mapping exercise rather than modeling a real learning process.
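The repetition idea can be seen in a toy gradient-descent example, where a single weight is nudged toward its target over many iterations. This sketch illustrates only the role of repetition, not any biological mechanism:

```python
# Experiential learning as repetition: a one-parameter model is
# corrected a little on every pass until it fits the target.

def learn_by_repetition(x, y_true, lr=0.1, epochs=100):
    w = 0.0                               # start with no knowledge
    for _ in range(epochs):               # repetition is the key
        y_pred = w * x
        grad = 2 * (y_pred - y_true) * x  # gradient of squared error
        w -= lr * grad                    # small correction each time
    return w
```

A hundred tiny corrections achieve what no single pass can, which is exactly the property that makes this style of learning slow for tasks we humans pick up instantly.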



One of the most obvious, yet mysteriously ignored, forms of learning is instant learning. In the case of humans, cognitive activities like reading, conversing, deducing, summarizing, abstracting, and conceptualizing require a very small number of iterations to learn. If you ask for directions on the street, hearing them twice is more than enough to learn them. If we are studying a subject, we may have to read it a few times. That is instant learning. You cannot replicate this type of learning using today’s deep learning methods. Assuming that evolutionary learning has yielded a hard-wired design of linguistic neurons, we are experimenting with instant learning at exClone with promising results. In applications involving natural language and human-like dialogue, we believe all three forms of learning are essential to complete the picture. More details are in my previous article about instant learning.

One example I have come across recently is RBF learning, another form of instant learning, although it does not frame itself in the terms described above. Its point of departure is the industrial demand for instant learning systems.
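To make the contrast concrete, here is a minimal sketch of one-shot fitting in an RBF network: the training points themselves become the radial centers, and the output weights come from solving a single linear system rather than from iterative training. This is the generic textbook construction, not exClone's algorithm.

```python
import numpy as np

# One-shot RBF fitting: no epochs, no gradient descent.
# The network interpolates the training data exactly in one step.

def fit_rbf(X, y, gamma=1.0):
    # Gaussian kernel matrix between all pairs of training points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    w = np.linalg.solve(Phi, y)      # weights from one linear solve
    return X, w                      # the data points are the centers

def predict_rbf(centers, w, Xq, gamma=1.0):
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ w
```

The entire "training" is one call to `np.linalg.solve`, which is why this family of methods is attractive wherever instant learning is required.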

If you know a new learning algorithm relevant to the arguments above, please mention it in the comments below.


This article is brought to you by exClone, a Virtual Expert & Chatbot technology provider.

Join the CHATBOTS group on LinkedIn.

You can follow exClone on Facebook and on LinkedIn.

#instantlearning #deeplearning #chatbots #conversationalAI #AI #ArtificialIntelligence #ML #DL #Machinelearning #exclone #virtualexperts #NLP #humandialoguetheory

Machine Learning by Reading, a Path to Paul Allen’s Common Sense AI


A recent article about Paul Allen’s project Alexandria mentions the need for computers to have common sense. This means computers reaching something close to human-level cognition, which is a super ambitious goal. This level of achievement is most likely infeasible in the short run, and funding judgments for this goal are encouraged by the availability of supercomputers and vast amounts of data. However, assessing this goal and the possible routes to success requires us to define a measurable (or perceivable) scale. Hence, let’s start defining such a scale.


The easiest scale for following this argument is the basic definitions of data, information, knowledge, and logic, as shown in the figure below. Detection of difference among data creates information. The same hierarchy applies to knowledge, logic, and common sense reasoning. If no difference is detected, there can be no information, knowledge, logic, or common sense reasoning. This is the very basic premise of processing intelligence. For computers to operate at the “common sense” level, they are required to resolve (1) common sense reasoning from logic, (2) logic from available knowledge, (3) knowledge from available information, and (4) information from available data. The question is: how can we shorten this path toward a feasible solution for common sense reasoning in the foreseeable future?


Methods of data science, such as deep learning, are useful for analyzing data to extract new information. These methods can sometimes go one step further and produce knowledge, with limitations (for example, stock market analysis using data-driven methods can never justify the knowledge produced). There is a natural barrier to converting information into knowledge by sheer data analysis. Knowledge science is an entirely different realm. The difference between data science and knowledge science is as striking as the difference between Newtonian physics and quantum physics.

The challenge of knowledge science is to deploy correct models of knowledge, whereas data science crunches numbers without assuming a model.

The attractiveness of “no model” in deep learning, for example, causes the misconception that it can be applied to higher domains (e.g., CNNs applied to any problem). One particular direction is natural language, where TensorFlow and vectorized words are assumed to cross that barrier. One of my earlier articles, titled “Why Deep Learning and NLP don’t Get Along Well?”, explains why this is nothing but wishful thinking.

As shown in the figure above, knowledge-driven machine learning will undoubtedly be the shorter path to common sense reasoning, because existing knowledge (millions of books, for example) can be processed by a computer much as we read to learn. Here is another article, titled “Can Machine Learning Use Knowledge instead of Data?”, that sheds light on this subject.


The knowledge-driven approach does not treat sentences in natural language as data. Instead, it treats them as part of its initial model. The basic premise is that the initial model assumed for knowledge representation can be corrected iteratively as more sentences are processed. This hypothesis is supported by our own human experience: our understanding improves as we read more books.

The idea of lifting knowledge from a source curated by human experts (authors) and implanting it in a computer is, in one sense, similar to cloning knowledge. Hence, the method is called Deep Cloning, and it is explained in the article titled “Deep Cloning vs Deep Learning“.

The figure below shows one of our experiments with deep cloning for logic resolution. The system resolves the question “Is Mike in good shape?” by following a path through the knowledge representation built from sentences acquired earlier. As more sentences are learned, the logic improves, and it may strengthen or reverse its conclusion. This demo will be open to the public in the coming months.


Implanting common sense reasoning in a computer will clearly be a very long path. Knowledge-driven methods offer a shorter path to the goal, though they demand more challenging and creative solutions.

If we can make computers read and learn like we do, then there is a good chance to expect higher level cognitive functions from them in the near future.


This article is brought to you by exClone, a chatbot technology provider.

Join the CHATBOTS group on LinkedIn.

Join our experiments: chat with Vera about exClone.

Try our Cloning Platform free (no credit card required) via LinkedIn access.

You can follow exClone on Facebook and on LinkedIn.


Consulting with a Virtual Doctor for Women’s Health


One of the biggest impacts chatbots are expected to make on society will be in the medical field. The newly launched DrCHAT (in beta) is a prime example. DrCHAT provides patients with medical consultations prior to an initial doctor’s visit, or a second opinion afterwards. Free usage and the ubiquitous availability of DrCHAT allow patients to continue consultation at every stage of treatment. This empowers women, brings a number of benefits to the entire health ecosystem, and presents unlimited potential for technology such as DrCHAT to improve the nexus between patients and care.

The only obstacle to chatbots becoming virtual doctors is the ability to handle consultation dialogue similar to what occurs in a doctor’s office.

The dialogue obstacle is a major challenge, and solving it will determine who wins the race to claim this value service space.

Knowledge-driven Machine Learning as the Backbone
Most machine learning methods are data-driven, and they all suffer from problems of data availability and reliability. However, volumes of medical knowledge are readily available that can be turned into a dialogue system. Knowledge-based machine learning accomplishes just that, without the rigorous requirements of a data-driven approach. The expertise of a medical doctor, as depicted below, is converted into a conversational system through the knowledge-driven machine learning method (as indicated by the blue arrow). This process is explained in simple terms in two LinkedIn articles: “Deep Cloning Versus Deep Learning” and “Can Machine Learning Use Knowledge …


In the case of DrCHAT, the expertise is derived from certified Ob/Gyn physicians who have laid out over 30 different clinical flows, following American College of Obstetricians & Gynecologists guidelines for evidence-based care. Although the machine learning process continues to grow, some beta-testers have been granted early access to DrCHAT.

Compared to Flat Search Systems
One of the striking differences between flat (single-step) searches using Google, WebMD, or Wikipedia and a medical chatbot such as DrCHAT is the consultation dialogue, in which clinical workflows are utilized to allow a step-by-step conversation that diagnoses illnesses and suggests treatment options. Considering the popular usage of mobile devices and messaging apps, consultation dialogue offers the richest and quickest experience compared to opening documents and sifting through large volumes of text on a narrow screen.

Single-step search engines fall short for health problems that require multi-step interaction with a patient to suggest diagnosis and treatment options.
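A multi-step clinical flow can be pictured as a small decision tree that asks one question per turn. The questions and branches below are invented placeholders, not DrCHAT's actual Ob/Gyn content:

```python
# A toy clinical flow: each node holds one question and the next
# node to visit for each recognized answer. All content is hypothetical.

FLOW = {
    "start": {"question": "Is the pain sudden or gradual?",
              "sudden": "ask_fever",
              "gradual": "suggest_monitoring"},
    "ask_fever": {"question": "Do you also have a fever?",
                  "yes": "suggest_urgent_care",
                  "no": "ask_duration"},
}

def next_step(state, answer=None):
    """Ask the current question, or branch on the patient's reply."""
    node = FLOW[state]
    if answer is None:
        return node["question"]      # ask the current question
    return node.get(answer, "start") # branch, or restart if unrecognized
```

The contrast with flat search is visible in the shape of the data: a search engine maps one query to one result list, while a flow like this carries state from turn to turn.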

Current Health Apps are Not Chatbots
Some current health apps, including ADA, Babylon, and YourMD, offer valuable services such as scheduling visits or video conferencing with doctors. However, their chatbot interactions are imitations of a single-step search with no genuine dialogue capability. The fact that these apps are geared toward “general medicine” to cover everything without specialization makes them less capable of delivering the requested consultation. Medicine is such a vast topic that automated consultation is best handled by specialized expertise.

Professional Version

Another important feature of DrCHAT is that it comes in two versions: one for patients and one for professionals. Although derived from the same expertise (IP), the professional version lays out the clinical flows for decision-making, which makes it a valuable reminder, fact-checker, and quick guide for practitioners. The complexity of the medical terminology used during a dialogue also differs between the two versions.


Anonymity is a Big Plus for Women’s Health Chatbot

Most Ob/Gyn specialists agree that women do not always feel comfortable talking about intimate problems, and sometimes skip critical details during face-to-face consultations. DrCHAT’s anonymous dialogue, with no registration required, will break down some of these barriers and further empower women during these exchanges. In return, conversation logs (stripped of identity information) become a valuable source of information about concerns that may not surface during a regular clinical examination.

The Future of Health Chatbots
Where we go from here will be determined by the engagement and acceptance of health chatbots such as DrCHAT. It is clear, however, that once the concept has been validated, other specialty areas can be replicated quickly by deploying the underlying technology, which focuses on automated knowledge acquisition from experts. Cardiology, Emergency Medicine, Pediatrics, and Urology (men’s health) are some of the specialties to be launched under DrCHAT following Women’s Health. If you want to be a tester, just talk to the chatbot and ask to become one. Stay tuned for more on health chatbots.



Chat with DrCHAT about women’s health.







Can Machine Learning Use Knowledge instead of Data? Deep Cloning vs Deep Learning


The Machine Learning (ML) field is defined by most people as exclusively a field of data science, which is incorrect in principle. The main goal is to make computers perform cognitive skills similar to the human brain’s, and to imitate how the human brain learns and thinks. Why use data only? Isn’t most of our learning based on knowledge consumption?

The human brain learns mostly from knowledge, not from data!

As a result, we need machine learning methods that use knowledge directly. This area of research has not been explored as much as its data-driven counterpart (deep learning) because of the challenge of Knowledge Representation (KR) and the difficulty of computerized ontology creation.

KR methods such as semantic nets and logico-linguistic modeling have a long history of R&D using static/given knowledge, but not in the context of “learning”. So the question is: how can we extend KR methods into a “learning” method? This brings us to the new idea of deep cloning, where KR is molded into a neural-network-like structure poised for learning by reading.

Can Computers Learn by Reading?


Knowledge-based learning methods make it possible for computers to learn by reading, similar to how we educate ourselves. Once a deep cloning system is set up, a computer can start reading books (text) to learn a subject and answer questions about it. The trade-off is between the difficulty of ontological (knowledge-based) learning and the advantages of independence from large training corpora and from issues like convergence and generalization.

Advantages of Knowledge-based Learning
There are a number of advantages of this approach in comparison to data-driven methods as outlined below:

  • One-shot Machine Learning: Since knowledge does not require a supervised reference point, learning becomes one-shot machine learning, devoid of the convergence problems encountered in deep learning.
  • Not Stuck in the Past: Data-driven models require data collected from past experience. This makes them vulnerable when applied to new things (e.g., a new car, plane, drug, house, neighborhood, or disaster). Knowledge-based systems are not biased by the past and can employ new knowledge immediately.
  • Knowledge is Less Limited than Data: The availability and abundance of data do not guarantee its completeness, and data can still be limited in explaining the process it comes from. Weather prediction is a good example. Knowledge, on the other hand, represents the best distillation of available experience.

Fundamental Differences
In processing natural language and representing knowledge (after reading a text), a deep cloning network (shown on the left) comprises layers with different objectives and different neuron functions. In contrast, deep learning (shown on the right) is a homogeneous architecture of neurons dedicated to minimizing error at the output in a supervised mode of learning. Despite variations of deep learning, no neuron activity is designated for any linguistic role.


Knowledge representation on the left can be a one-shot process using only the text of the knowledge, whereas learning on the right requires long training cycles using corpora far larger than what is needed on the left.

Answering Questions


Knowledge-based machine learning can answer questions from the content it has learned with utmost precision, using the ontological connections shown in the network picture above. Shown above is a hypothetical case in which a question presented to the network finds its most relevant answer using those connections. In the case of partial connections, the network puts more emphasis on target, event, and instrument (in that order) and produces answers with an accuracy score. Depending on the type of application, a threshold can be set to declare “no answer” if the best-scoring sentence falls below it. With such a capability, the chatbot becomes self-aware of its performance and can report how well it did in answering questions. This can be further expanded into social learning, where chatbots ask for feedback to learn how to answer particular questions.
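As a toy illustration of the emphasis order (target, then event, then instrument) and the "no answer" threshold, here is a sketch with invented weights and a made-up concept schema; none of these values come from the exClone system:

```python
# Hypothetical role weights reflecting the stated emphasis order:
# target > event > instrument. The threshold below is also invented.
WEIGHTS = {"target": 0.5, "event": 0.3, "instrument": 0.2}

def score_answer(question_concepts, answer_concepts):
    """Sum the weights of concept roles the answer matches."""
    return sum(WEIGHTS[role]
               for role in WEIGHTS
               if question_concepts.get(role) is not None
               and question_concepts.get(role) == answer_concepts.get(role))

def best_answer(question_concepts, candidates, threshold=0.6):
    """Pick the top-scoring candidate, or declare 'no answer'."""
    best = max(candidates,
               key=lambda c: score_answer(question_concepts, c["concepts"]))
    score = score_answer(question_concepts, best["concepts"])
    return best["text"] if score >= threshold else None
```

Returning `None` below the threshold is what lets the chatbot report its own performance instead of bluffing an answer.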

Knowledge Breeding


More impressive than answering questions, deep cloning machine learning can breed new knowledge from the content it has learned, as shown on the right. This is logic resolution: using existing knowledge to produce possible new knowledge via the ontological connections. Breeding new knowledge is one of the most exciting aspects of learning algorithms, and it is not as straightforward as it looks when using data-driven models such as deep learning. One advantage of knowledge-driven machine learning is that the “new knowledge” is transparent (it can be verified by human inspection), whereas the same cannot be said for data-driven deep learning.


This article is brought to you by exClone, a chatbot technology provider.

Chat with Vera about exClone.

Try our Cloning Platform free (no credit card required) via LinkedIn access.

Join the CHATBOTS group on LinkedIn.

You can follow exClone on Facebook and on LinkedIn.


#chatbot #chatbots #AI #artificialintelligence #ConversationalAI #Virtualassistants #bots #machinelearning #NLP #DL #deeplearning #deepcloning

Most Chatbots don’t Use AI, are Misrepresenting AI


This title is the summary of what is happening in the market today, mostly encouraged by Facebook’s move for Messenger bots.

ChatbotConf 2017 revealed this sad truth. There are 200,000 Messenger bots today, and most likely none of them has a real AI backbone. A recent article summarizing the conference draws a similar conclusion.

End users of chatbots would not really care whether there is an AI backbone if the chatbot they are using solves their problem. In a small fraction of cases, chatbots without AI can be helpful, especially in e-commerce transactions where buying and selling options are rudimentary and the conversations can be buttonized. However, the AI issue surfaces when chatbots try to service tasks of higher complexity, which, given the way chatbots can be used in real life, corresponds to perhaps 90% of cases. So, what is the AI backbone that is required?

The AI Backbone

Chatbots that represent AI must have some (if not all) of the capabilities listed below:

  • NLP: Capability to understand users’ responses in their most variant forms.
  • Answering Questions: Ability to communicate with the user about a subject matter by absorbing knowledge and answering questions about it.
  • Asking Questions: Ability to ask questions that navigate the user toward solving a problem.
  • Dialogue Behavior: Ability to engage users in behavior that serves the chatbot’s objective (sales, transactions, advice, training, storytelling, idea sharing, etc.)
  • Learning from Conversations: Ability to ask users for answers and to learn from them. This should be optional, since social input may not be desirable for certain objectives.
  • Short-term Memory: Ability to remember the topic of conversation and interpret pronouns correctly. This requires the chatbot to take into account what was said 2, 3, or 4 steps earlier.
  • Long-term Memory: Ability to remember previous chat sessions and start the conversation from where it was left off.
  • Emotions and Attitude: Ability to detect unproductive conversations and change strategy, or abort so as not to waste resources.
  • Awareness: Ability to self-assess its performance, produce reports about it, and alert bot builders to the weaknesses encountered.
  • Infinite Speech: Ability not to be restricted by a pre-defined sequence of conversation steps.
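The short-term memory item above can be sketched as a small buffer of recent topic mentions, with a pronoun resolved to the most recent compatible one. The entity tagging here is a hypothetical simplification, not a real coreference resolver:

```python
from collections import deque

# Toy short-term memory: keep the last few mentioned entities and
# resolve "it"/"they" to the most recent one of matching number.

class ShortTermMemory:
    def __init__(self, depth=4):          # look back a few turns, as above
        self.topics = deque(maxlen=depth)

    def remember(self, entity):
        self.topics.appendleft(entity)    # newest mention first

    def resolve(self, pronoun):
        """Return the most recent entity compatible with the pronoun."""
        want_plural = (pronoun == "they")
        for entity in self.topics:
            if entity.get("plural", False) == want_plural:
                return entity["name"]
        return None
```

The bounded `deque` captures the "2, 3, or 4 steps earlier" window: older mentions simply fall off the end.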

Canning Responses Instance-by-Instance is not AI

Most chatbot platforms today require instance-by-instance input from their builders to develop every step of the intended conversation in a rigid sequence. This approach is feasible for banking transactions, travel bookings, or other similar interactions where dialogue is restricted to fixed options. Obviously, no AI backbone is needed for such chatbots.

Chatbot science is in its infancy, while most developers are expecting adult behavior.

Deep Learning is not a Silver Bullet

One of the latest misconceptions to emerge in the market is that if enough data is thrown at a deep learning system, all the requirements listed above as the AI backbone can be satisfied. Deep learning can only handle some parts of the required list; the rest must be called “chatbot science”. The only way to produce a chatbot development platform within the scope of the AI backbone is to offer data-driven and/or knowledge-driven tools with a certain level of built-in functions, where those functions define the secrets of chatbot science.


Talk to Vera, exClone’s company representative.

For exClone’s Chatbot Platform, click here for a free trial via LinkedIn access.

Join our CHATBOTS LinkedIn group.

Follow exClone on LinkedIn or on Facebook.

#chatbot #chatbots #AI #artificialintelligence #ConversationalAI #Virtualassistants #bots #machinelearning #NLP #DL #deeplearning

Cloning Chatbots for Education


In this context, cloning is an advanced form of impersonation in which the chatbot can talk about the person’s life experiences and expertise as curated by the chatbot maker. Compared to impersonating a person using just his/her image and name, cloning is obviously more involved and more challenging. As an example, you can chat with Abraham Lincoln and see how it was developed via one-shot machine learning technology with no coding required. This chatbot uses Wikipedia content as its main source of conversation.

As one can easily deduce, all historical characters can be cloned into chatbots for educational purposes. But cloning goes beyond that as it allows creating chatbots of teachers themselves.

Top 6 Reasons Why Cloning Chatbots are Inevitable Tools for Education

  1. Control: Interactive content gives students much more control over what they want to focus on.
  2. Fun: Talking/messaging/chatting is always more fun than just reading.
  3. Ease: Small-screen devices are an ideal fit for chatbots, which adds to their educational role.
  4. New Teaching Methods: Chatbots can be a great summarization tool, offering students the main points to remember and the option to dive deeper. Various new teaching strategies can be implemented.
  5. Creativity: Creation of chatbots can also be an educational experience.
  6. Feedback: Conversational analytics obtainable from chatbot interactions provide valuable clues to teachers as to how students learn, or fail to learn.

Proliferation of Chatbots Requires Editorial Platforms

For chatbots to take a serious role in education, their development and proliferation must be fast and effective. Here are the three most important requirements for such progress:

  1. No Coding: Chatbot creation should migrate from a coding effort to an editorial effort. This will enable students and teachers to develop education chatbots by curating content only.
  2. No Corpus Training: Underlying technology should not require large corpus training or AI experience. One-shot machine learning techniques must drive these platforms, processing the content for chat interaction while working silently in the background.
  3. Effective Communicator: Chatbots created for education must be effective, able to answer impromptu questions and offer topics of discussion. Although no chatbot today is expected to match human-level dialogue, educational effectiveness can be achieved by presenting chatbots for the specific goals they are designed for.

If you come across cloning/impersonating chatbots, please drop a note below. We may create a list of educational chatbots here.

How I made Abraham Lincoln CHATBOT in Less Than 10 Minutes


In our quest to turn static knowledge (documents) into interactive knowledge (chatbots) via the chatbot platform, we experimented with creating a chatbot from scratch to completion. The main question was: how long would it take? We first downloaded Lincoln’s content from Wikipedia (16,000+ words), cleaned the content, made editorial changes, and curated some images. Then it took less than 10 minutes to create a fully functional chatbot through the platform. Its one-shot machine learning technology (learning by reading) took less than 1 minute; the other 9 minutes were spent entering the content into the platform. You can test this chatbot at this link and examine how it was developed.

It is a fully functional chatbot with short-term memory that answers impromptu questions at any time, offers topical suggestions, detects user behavior, and provides infinite speech. Its knowledge is limited to what historians have said, as compiled on the Wikipedia page.


Whether chatbots spread and flourish in the future depends on how quickly they can be developed. That means development by editorial effort rather than by coding effort. In other words, chatbot platforms should only require curating content and selecting dialogue features. Everything else should be automated underneath (invisible to the developer), including the machine learning and NLP capabilities.

Developers of chatbots in the future will be writers, not computer programmers.

Current platforms offered by big companies (Microsoft Bot Framework, IBM Watson, Amazon Lex, Google API, and Facebook Messenger Platform) all require coding skills and/or AI experience. Obviously, developing the same Abraham Lincoln chatbot would take much longer than 10 minutes when hands-on AI work and coding are involved.

Considering the document stockpiles of enterprises, a quick and easy conversion of those documents to chatbots can be valuable for training, help desks, and other vital operations.


The second reason for this initiative was to assess the value proposition of chatbots for the education sector. Here are the top 6 reasons why chatbots (conversational AI) will become indispensable tools for education:

  1. Control: Interactive content gives students much more control over what they want to focus on.
  2. Fun: Talking/messaging/chatting is always more fun than just reading.
  3. Ease: Small-screen devices are an ideal fit for chatbots, which adds to their educational role.
  4. New Teaching Methods: Chatbots can be a great summarization tool, offering students the main points to remember and the option to dive deeper. Various new teaching strategies can be implemented.
  5. Creativity: Creation of chatbots can also be an educational experience.
  6. Feedback: Conversational analytics from chatbot interactions provide teachers with valuable clues as to how students learn, or fail to learn.
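As a rough illustration of point 6, conversational analytics can start as simply as tallying event types per topic from a chat log. The log schema below (topic, event type) is an assumption for illustration, not any platform's actual format.

```python
from collections import Counter

# Hypothetical chat-log events recorded by an education chatbot
log = [
    ("Civil War", "question"),
    ("Civil War", "question"),
    ("Early life", "skip"),
    ("Civil War", "feedback"),
    ("Early life", "skip"),
]

def topic_report(events):
    """Count how often students asked questions, skipped,
    or gave feedback on each topic."""
    counts = Counter((topic, kind) for topic, kind in events)
    return {f"{topic}/{kind}": n for (topic, kind), n in counts.items()}

print(topic_report(log))
# questions cluster on 'Civil War'; skips cluster on 'Early life'
```

Even this crude report tells a teacher which topics invite questions (engagement) and which get skipped (possible failure to connect).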

There is no doubt that one of the most active areas of conversational AI will be education. We will report how the Abraham Lincoln chatbot was received in a follow-up article.

——- FOLLOW US ———-

For exClone’s Chatbot Platform, click here for free trial via LinkedIn access.

Join our CHATBOTS LinkedIn group

Follow exClone on LinkedIn or on Facebook

Is DIGITAL EMPLOYEE the Next Big Thing?


All the technical jargon you hear nowadays, such as deep learning, artificial intelligence, and natural language processing, converges to a single question for businesses: Can we build digital employees?

One may wonder what makes a digital employee different from all the software tools we already use today. A digital employee may be defined as a computerized system that has superb communication skills in natural language, and some level of autonomy to make its own judgments and decisions.

Digital Employee represents the fine line where we delegate business responsibilities to autonomous systems, and where we communicate with them like talking to human employees.


Digital employees will directly contribute to business efficiency in 4 major areas, as shown below. Communication, at the top, is essential for all other functions to perform cohesively. In other words, a digital employee starts from the core capability of communicating and sustaining high-level dialogue.



Creating a digital employee cannot be a scientific project; otherwise, it will remain limited to a few examples built on substantial R&D budgets. This revolution will only happen when we have platforms that allow digital employees to be created easily and quickly. Here are some of the top requirements for such a transition:


It is also important to mention that seamless integration with all communication platforms and operating systems is another key requirement.


Creating digital employees through a platform will require many scientific disciplines and methods to amalgamate. There is no “silver bullet” solution to create such a complicated system. Below is a simplified landscape of disciplines that are most likely to contribute at least one aspect of development.


Success will depend on who has the best cocktail of methods tucked under the platform, all invisible to the end user (i.e., the creator of digital employees).


Undoubtedly, there are several benefits to gaining digital employees, as outlined below. However, their limitations compared to human employees (in certain aspects) represent a tradeoff. This tradeoff will exist until the technology reaches human-level cognition, which may take a very long time.



Estimating the timeline of the transformation from human information workers to digital employees is not easy. Many businesses have adopted the IKEA model of DIY software over the last few decades, delegating tasks to clients. Banking is a prime example: you are expected to use software to perform transactions on your own. However, the current trend shows demand for command-driven banking using conversational interfaces, with requests like “Transfer $5,000 from checking to saving by tomorrow morning.” If we can talk to a digital employee, why bother using software? And that is the underlying promise of the upcoming revolution.
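A conversational banking request like the one above ultimately reduces to intent and slot extraction. Here is a minimal sketch using a regular expression; the slot names and the command shape are assumptions for illustration, not any real banking API, and a production system would use a full language-understanding model rather than one pattern.

```python
import re

# Hypothetical pattern for 'Transfer $X from A to B [by WHEN]'
PATTERN = re.compile(
    r"transfer \$?(?P<amount>[\d,]+(?:\.\d+)?) "
    r"from (?P<source>\w+) to (?P<target>\w+)"
    r"(?: by (?P<deadline>.+))?",
    re.IGNORECASE,
)

def parse_transfer(utterance: str):
    """Extract the amount, accounts, and optional deadline from a
    natural-language transfer request; return None if no match."""
    m = PATTERN.search(utterance)
    if not m:
        return None
    slots = m.groupdict()
    slots["amount"] = float(slots["amount"].replace(",", ""))
    return slots

print(parse_transfer("Transfer $5,000 from checking to saving by tomorrow morning"))
# {'amount': 5000.0, 'source': 'checking', 'target': 'saving', 'deadline': 'tomorrow morning'}
```

Once the slots are extracted, the digital employee can confirm the request back to the user and execute it, which is exactly the step where delegation replaces DIY software.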

The DIY software model is wearing off, creating future demand for digital employees.
We predict that the first solid evidence of this revolution will be the fading away of DIY systems from our lives (including IKEA).