exClone Launches Virtual Experts at Black & Veatch to Enhance Knowledge Utilization by Artificial Intelligence


NEW YORK–(BUSINESS WIRE)–Today, exClone Inc. announced the launch of its AI-based virtual experts at Black & Veatch as an enhancement to enterprise knowledge capture, utilization, communication, and search functionality.

exClone’s virtual experts open an unprecedented window of communication between experts and employees in an enterprise. In addition to conventional documents of expertise, experts may now be represented virtually through a conversational AI system (chatbot) whose embedded knowledge comes from exClone’s platform, which converts documents such as MS Word or PDF files straight into chatbots. The conversational interaction delivered by such virtual experts helps workers access critical knowledge more productively than conventional means such as search engines.

exClone’s technology for converting documents into chatbots requires no coding, no large data sets, no long training cycles, and no experience in AI. After deployment, the technology also allows “on-the-fly” teaching of virtual experts through conversations with designated teachers. As a result, virtual experts remain dynamic sources of knowledge that can be updated as often as needed without redeployment. Workers’ questions beyond the scope of the deployed knowledge can be answered quickly by designated teachers, introducing a new social connection and communication paradigm across the enterprise.

Alan Young, the CEO of exClone, said, “If messaging tools can be used to get answers from friends, we should be able to get answers from virtual experts embedded with knowledge from enterprise documents.” He added: “The connection between experts and workers in an enterprise is elevated to a new dimension with virtual experts, and we are proud to lead this new paradigm with visionary companies like Black & Veatch.” “We’re excited to deploy and leverage this new connectivity tool for our professionals and capitalize on the efficiencies we believe it will bring to our business,” said Mike Etheridge, Global Chief Engineer for Black & Veatch’s water business. “This tool will help our professionals find information quicker and harness knowledge and expertise from our global workforce to drive efficiency and effectiveness in new ways moving toward the future.”

About Black & Veatch
Black & Veatch is an employee-owned, global leader in building critical human infrastructure in energy, water, telecommunications and government services. Since 1915, we have helped our clients improve the lives of people in more than 100 countries through consulting, engineering, construction, operations and program management. Our revenues in 2017 were US$3.4 billion. Follow us on bv.com and in social media.

About exClone
exClone, Inc. is a New York City-based technology company specializing in virtual experts, chatbots and conversational AI systems to enhance enterprise knowledge utilization, communication, and search functionality.


Instant Learning vs Deep Learning

In the context of conversational AI, instant learning refers to a cognitive function we are all too familiar with: learning instantly from conversations.

If someone tells you “beware of the dog when you enter the yard,” your brain will process it immediately, and you will absorb that knowledge. Once learned, you may warn another person, saying “be careful, there is a dog in the yard.” Why is it so difficult to teach a computer to do the same? Actually, instant learning technology is already here, as explained below.

Instant Learning Example

ALIXD is a conversational system that switches to learning mode when a password is entered at any time during a conversation (see the bottom of this article for testing ALIXD). As shown below, ALIXD learns new knowledge about Bitcoin and Ethereum from its human teacher.

The important point here is that the system will bring this answer to approximately 960 different semantic variations of a relevant question. That number is computed by a simple equation using ontological parameters: V = T x N x E x I, where V is the number of variations, T is the number of question types (typically 20), N is the number of onomasticons (6), E is the number of events (2), and I is the number of instruments (4). N, E, and I all count words in the answer as well as in the question entered by the teacher.
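The arithmetic behind the 960 figure is a simple product; a minimal sketch, using the typical parameter values quoted above:

```python
# Estimate of semantic question variations covered by one taught answer,
# following the equation V = T x N x E x I described above.

def question_variations(question_types=20, onomasticons=6, events=2, instruments=4):
    """Return the number of semantic variations one Q&A pair can cover."""
    return question_types * onomasticons * events * instruments

print(question_variations())  # 20 * 6 * 2 * 4 = 960
```

With fewer ontological hooks in the answer (say, only three onomasticons), the coverage shrinks proportionally: `question_variations(onomasticons=3)` gives 480.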

Being able to bring an answer to hundreds of meaningful variations of a single question closely resembles the human brain’s cognitive skill of instant learning during conversations.

Two examples (out of a possible 960) are shown below. If no other knowledge has been entered into the system, these variations are easy to track. If new, relevant knowledge is added, the system will pick the answer that is the best semantic match to the embedded meaning.

Mechanics of Instant Learning

The departure point of instant learning is to represent knowledge by a group of linguistic neurons, as shown below (left). When another piece of knowledge is entered, a new group of linguistic neurons appears, and the groups connect (right) based on ontological properties. If a property is identical (such as the same event), the neurons fuse into one. As more knowledge is added to the system, a vast network of ontological relationships emerges. Entire documents can be learned in a single step of this fusion process. This approach makes it easy to answer questions and to deduce new knowledge by logic resolution. The inner workings of this method are proprietary.
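Since the inner workings are proprietary, here is only a toy sketch of the fusion idea: each piece of knowledge carries ontological property tags, and groups sharing an identical tag fuse into one node. All names and data below are illustrative, not exClone’s implementation:

```python
# Toy illustration of linguistic-neuron fusion (not the proprietary method):
# knowledge items tagged with ontological properties fuse wherever a
# (property, value) tag is identical, forming a growing network.

from collections import defaultdict

def fuse(knowledge_items):
    """Merge neuron groups that share the same (property, value) tag."""
    network = defaultdict(list)        # (property, value) -> sentences carrying it
    for sentence, properties in knowledge_items:
        for prop, value in properties.items():
            network[(prop, value)].append(sentence)
    return dict(network)

network = fuse([
    ("Bitcoin is a cryptocurrency",  {"event": "is", "onomasticon": "Bitcoin"}),
    ("Ethereum is a cryptocurrency", {"event": "is", "onomasticon": "Ethereum"}),
])
# The shared event "is" fuses the two groups into one connected node:
print(network[("event", "is")])
```

A question can then be answered by looking up the nodes its own tags touch, which is what makes retrieval cheap once the network exists.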

Compared to Deep Learning (DL)

Despite the name, there is nothing “instant” about deep learning. In short, the DL approach to conversational AI goes against our natural life experience. For example, the instant learning example shown above cannot be replicated by DL.

First of all, the DL method requires a vast amount of data (far more than a single Q&A pair) to climb the ladder of language proficiency. Then a training process and convergence are needed. After this time-consuming process, a DL network can be claimed to function as intended; however, any new addition of knowledge requires expanding the training data set and retraining the network.

While current DL methods produce impressive results in image processing and kinematics, there are serious problems in applying them to conversational AI, mainly caused by the uniformity of neurons (no neurons with a linguistic role), the limitations of vector-space modulation, and statistical bias. More explanation can be found in my previous articles listed below.

Instant learning comes from knowledge science, whereas deep learning is rooted in data science. Considering the pyramid of hierarchy, knowledge science works from top to bottom, whereas data science works from bottom to top. Going from bottom to top in this hierarchy suits image processing well, for example, yet it becomes impractical and a poor fit for natural language processing, at least with the current approaches to deep learning.

Instant Learning API

Instant learning enables conversational systems (chatbots) to continue learning after they are deployed. This allows organic growth of knowledge through designated teachers, or sometimes the end users. The ALIXD API can be integrated into any conversational system and will be available soon. Interested parties can contact me for notification.


You can test it at this link by entering the temporary password 0014. Note that if other people are adding knowledge about the same subject, you may find the system more versatile.


This article is brought to you by exClone, a chatbot technology provider.

Join CHATBOTS group in linkedIn.

You can follow exClone in Facebook, and in LinkedIn.

#instantlearning #deeplearning #chatbots #conversationalAI #AI #ArtificialIntelligence #ML #DL #Machinelearning #exclone #virtualexperts #NLP #humandialoguetheory

Is Google Hyping it? Why Deep Learning cannot be Applied to Natural Languages Easily (41,655 views)

How does IBM Watson Compare to Google’s Hype of Deep Learning for NLP? (10,843 views)

Why Deep Learning is Not a Good Fit For Chatbots: Combinatory Explosion Problem (4,465 views)

Why Deep Learning and NLP Don’t Get Along Well? (6,596 views)

Can Machine Learning Use Knowledge instead of Data? Deep Cloning vs Deep Learning (5,823 views)

Deep Cloning vs Deep Learning (3,672 views)

Most Chatbots Don’t Use AI, are Misrepresenting AI (2,695 views)

Learning by Conversations in Chatbots, and Why it is Important


I have published several examples of chatbots with embedded expertise (virtual experts) under the exClone umbrella. This time, the chatbot I want to talk about is ALIX, which has no embedded expertise but does have something unique: a social learning capability. With this capability also come curiosity and emotions, which are essential parts of the cognitive picture.

Chatbots which can learn instantly from social conversations will be one step ahead in the realm of AI


Social learning is the capability to teach a chatbot new content by having a conversation. As seen below, ALIX will ask to learn when she cannot answer a question. In this case, ALIX learned the answer to the question “why does spring bloom aggravate my allergies?”
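The ask-to-learn loop can be sketched in a few lines. The `TeachableBot` class below is hypothetical and uses only exact-match lookup; the real ALIX pipeline also generalizes across word senses and question types:

```python
# Minimal sketch of a social-learning loop: answer if known, otherwise
# ask to be taught, and absorb the taught answer instantly (no retraining).

class TeachableBot:
    def __init__(self):
        self.memory = {}                     # question -> answer

    def respond(self, question):
        if question in self.memory:
            return self.memory[question]
        return "I don't know. Can you teach me the answer?"

    def teach(self, question, answer):
        self.memory[question] = answer       # learned in one step

bot = TeachableBot()
print(bot.respond("why does spring bloom aggravate my allergies?"))  # asks to learn
bot.teach("why does spring bloom aggravate my allergies?",
          "Pollen counts spike during spring bloom.")
print(bot.respond("why does spring bloom aggravate my allergies?"))  # now answers
```

What separates ALIX from this toy is precisely the depth of generalization beyond exact matching, which the four cases below illustrate.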


The system will simply produce the answer if the same question is asked again by anyone using the system. However, that is not the full extent of the learning that occurred. To understand the depth of learning in ALIX, four cases are shown below.

(1) The original query is asked in a different way using different word senses

An example is shown here where the original query word WHY is changed to WHEN and the words (IRRITATE, SINUSES) are different. The system is able to make the associations and bring the answer it learned previously. This expands the answering capability of the system manyfold, since users will rarely replicate the original question.


(2) The new query is referring to the knowledge embedded inside the answers previously taught, no question matching involved

More importantly, ALIX is able to analyze on-the-fly the previously entered answers and bring up the relevant one without matching against the original query. In this case, the query has no segments matching the original question. As a result, the content taught to the system is utilized to the maximum extent in answering questions.


For novice readers, it is important to point out that the other systems in the market today (Google, Wikipedia, Quora, etc.) are basically “question matching” systems; none of them has an on-the-fly capability to analyze the content embedded in answers. Nor can any of them be taught, or engage in dialogue.

(3) Learning is not limited to Questions & Answers

ALIX is able to learn from regular (non-question) statements, as shown here, when she has no relevant knowledge to chat about. This further promotes the organic growth of knowledge through contributions from the end users. As more knowledge is captured by the system, two-way dialogue about a given subject becomes more frequent, in a fashion similar to two human beings exchanging opinions.


(4) Curiosity and Self-awareness

As an essential element of learning, ALIX gets curious about certain subjects and asks to learn more. In a way, ALIX is aware of her lack of knowledge in such topics. In the example shown here, ALIX had not heard anything about Star Wars and asked to learn about it.


Currently, curiosity is triggered in ALIX only for onomasticons (proper names) in order to manage the memory load, which is a temporary limitation. ALIX also exhibits some basic emotions, such as joy and annoyance (she may quit if the conversation is fruitless).


The learning function described above can be opened only to a group of designated users (teachers). In an enterprise setup (such as a help desk), or in any other virtual expert application, the initial loading of content (learning by reading) can be augmented by social learning (learning by conversations).


Social learning allows chatbots to be updated with new or modified answers instantly (on-the-fly) after they are deployed.


ALIX is by no means at the level of human learning; however, a few milestone capabilities have been accomplished, probably for the first time. While we continue to improve ALIX’s understanding capabilities, an important next milestone will be generating new knowledge from existing knowledge by reasoning. This is depicted in the example in which the system figures out why United Airlines shares fell by examining other evidence and using logical inference. Accordingly, the question “Why did United Airlines shares fall?” will find an answer in the newly generated knowledge. We will make an announcement when this milestone is achieved.



The technology behind ALIX is a proprietary machine learning technique that utilizes knowledge directly (knowledge science as opposed to data science). More about this approach was published in the article titled “Deep Cloning vs Deep Learning” and was further elaborated in another article titled: “Can Machine Learning Use Knowledge instead of Data?”

In creating virtual experts, the same backbone technology drives “learning by reading” from documents curated by experts and “learning by conversations” with end users (all of them, or designated ones) after deployment.

Knowledge-driven Machine Learning replaces decades-old technology of question matching and indexing

The Rise of Virtual Experts via Machine Learning


By now, you must have heard the term “virtual assistant.” The natural evolution of the virtual assistant is the virtual expert. Going from the former to the latter is a substantial technical challenge that not many companies are willing to meet yet, because the simpler “assistant” version has untapped commercial potential and a quick ROI. Nevertheless, virtual experts are the real game changer, a paradigm shift that will have social and economic impact beyond our wildest imagination.

The rise of virtual experts is just hiding behind the puzzle of the most effective machine learning approach.

Investing in the most effective machine learning approach holds the key to commercial success. It has to be practical, transparent, agile, and quick to deploy. Our process is explained in simple terms in my previous articles: “Deep Cloning Versus Deep Learning” and “Can Machine Learning Use Knowledge …

Virtual Doctor for Women’s Health – DrCHAT

Some examples of virtual experts coming off our conveyor belt include DrCHAT, a virtual doctor for women’s health (in beta). DrCHAT encapsulates physicians’ expertise following the ACOG guidelines for evidence-based care, and is further described in the article “Artificial Intelligence (AI) in Medicine …

Virtual Spokesperson

Companies that need to interact with clients beyond website presentations can launch a virtual company spokesperson. Vera is an example; she has absorbed several layers of company information via machine learning. Although her conversational skills do not match a real human’s, she is highly effective with genuine visitors who prefer to look for information by chatting instead of surfing web pages.

Virtual Tax Helper

As an example of converting documents into chatbots, the virtual expert Terry Kohen chats about the IRS Small Business Tax guide (Publication 347). Conversation with Terry is limited to the scope of that IRS document, so it does not replicate the full expertise of a human tax expert.

Virtual Guide – Smart Cities and Travel Safety

Geographic expertise is always in demand among travellers. The two most prominent areas for virtual guides are smart city and travel safety applications. Before these specialties become virtual experts, we have been testing a destination finder, Davis Hunter, using limited-scope Wikivoyage data.


New Opportunity to Monetize Expertise
The commercial impact of virtual experts will be driven by the scalability offered by chatbots.

While human experts can monetize only through face-to-face consultations, their virtual counterparts will be able to monetize through one-to-thousands consultations, simultaneously.

Even though such electronic consultations may require only small payments, high volume will push revenues to levels limited only by server capacity and market demand. That is the critical value point.



Chat with DrCHAT about Women’s Health

Follow DrCHAT in Facebook, and in Linkedin

Chat with Vera about exClone




Build a Chatbot Impersonating Yourself

Impersonating chatbots are one of those concepts that are just around the corner. They will add one more option to our online digital presence, alongside social networks, personal blogs, etc. An immediate question is: why would anyone build his or her own chatbot? Here are five reasons why impersonating chatbots may take off sooner rather than later.

1. Share Your Ideas

A chatbot impersonating you is like a personal messenger that can tell others about your ideas, expertise, interpretations, and status. You can pack as much information as you want into your chatbot and update it as frequently as you like. When you review the conversation logs, you can see how people react to your ideas.

Anonymous conversations with your chatbot can test your ideas by real feedback devoid of social pressure to please.

2. Managerial Communication

If you manage a group in your business, you can build your chatbot to remind your workers of rules, regulations, milestones, visions, expectations, and much more. One-on-one conversations between a manager and a worker are usually awkward when the subject matter is rules, regulations, and the like.

Chatbots can be a polite way to fully inform your workers about rules, regulations, and what is expected of them.

3. Chatbot as Your Talking Resume

If you are looking for a job, your conventional resume may fall short of explaining who you really are. Your impersonating chatbot, on the other hand, can contain more social knowledge of your life: pictures, videos, and appropriately selected “personal touch” bits of information. While it can be considered annoying to toot your own horn during an actual interview, your chatbot can do that for you.

A chatbot as your talking resume can fill an important gap, adding the personal touch that may otherwise not be appropriate to share with a future employer during an interview.

4. Dating Game

Impersonating chatbots can easily become a vehicle to increase our social engagement by presenting ourselves in a unique manner. While many dating sites use personal information to make matches, a chatbot may offer a new way in for both the chatbot owner and the people talking to it. On one end, the anonymous talker can freely ask tough and private questions. On the other end, the chatbot owner can make selections from the conversation logs.

Social selection based on chatbot presentation, and chatbot conversation can be a new avenue for dating.

5. Digital Life After Death

Either for personal reasons or for educational purposes, life after death may become possible in digital form. Impersonating chatbots are the first step in this direction.

Chatting with the dead via chatbots may keep us better acquainted with, and more aware of, our heritage and history.


All these avenues will become possible only if chatbot creation is reduced to a mere editorial effort. It should not require any coding, corpus training, or AI experience. Everyone should be able to build one just by writing and curating content. Here is an example of my impersonating chatbot, which I built using our editorial platform. The whole process is straightforward and fast as long as you have your content ready.

Another example is a chatbot impersonating Abraham Lincoln. That was built in the same manner for educational purposes.

Deployment is automatic: a public URL is created for your chatbot, which you can share. Let us know what other creative uses you can come up with for impersonating chatbots.

#chatbot #chatbots #AI #artificialintelligence #ConversationalAI #Virtualassistants #bots #machinelearning #NLP #DL #deeplearning

——- FOLLOW US ———-

For exClone’s Chatbot Platform, click here for free trial via LinkedIn access.

Join our CHATBOTS linkedin group

Follow exClone in Linkedin or on Facebook

Why Deep Learning is Not a Good Fit For Chatbots: Combinatory Explosion Problem


Let’s assume that you have a very simple business and want to deploy a chatbot for customer support. Let’s also assume that 100 questions and answers (Q/As) would cover all your issues. It looks very simple, and you may be tempted to use one of the deep learning methods to build your chatbot. Here are the problems you are going to face:


Unless you are a trained linguist, you might easily underestimate how flexible natural language can be, and how explosively combinations emerge from a single question. Say your first Q/A starts with a basic complaint such as “I have a problem with my cable.” This simple statement can be expressed in more than a dozen ways, as shown below, and the combinations do not end there!

If we take one of the possible expressions above, there can be another dozen combinations from morphological and synonymous variations alone:

As you can see, this is only the first Q/A from your set of 100. Just imagine if some of the (Q/A)s you have are more complicated than this simple starting expression.

Your set of 100 Q/As can easily amount to 10,000 different equivalent expressions that users may type, all of which must be detected and understood by your chatbot software.
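The scale of the explosion is easy to reproduce: combining just a few phrasing slots multiplies quickly. The word lists below are illustrative, not drawn from any real data set:

```python
# Illustrative count of surface variations of "I have a problem with my cable"
# generated by combining a few paraphrase slots (word lists are made up).

from itertools import product

openers  = ["I have", "I've got", "There is", "I am having"]
problems = ["a problem", "an issue", "trouble", "a fault"]
targets  = ["with my cable", "with my cable service", "with the cable connection"]

variations = [" ".join(parts) for parts in product(openers, problems, targets)]
print(len(variations))   # 4 * 4 * 3 = 48 variations from one simple complaint
```

Add tense, word order, and typo variations on top of these three slots, and a dozen-plus variations per Q/A (100 per Q/A in the 10,000 estimate) is easily reached.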


The problem is not the deep learning method itself, but what it needs to function properly. You need a data set of 10,000 questions, if not more, that are linguistically equivalent expressions as shown above. These 10,000 questions must also map to the 100 answers in this hypothetical case. Unless someone sits down and types them one by one, such a data set will be a nightmare to acquire.

If you already run a customer support system and have collected, say, 1 million Q/As, there is still no guarantee that they will cover the 10,000 linguistic variations needed to detect the 100 main Q/As. Considering the distribution of typical user responses, 1 million Q/As would cover less than 30% of your required data set. After all that trouble, your chatbot will remain vulnerable to undetected responses.

Consequently, anyone deploying a deep learning method will quickly find himself or herself in a data crisis. No matter what type of deep learning method you deploy, the data requirement described here holds. Neural networks cannot discover equivalent variations of natural language on their own without being given ample examples. And I want to underline the word “ample” here.


Go back to the hypothetical case where you have a service operation and can pull 1 million Q/As. To make sure this data set does not cause harm, someone must manually go through it to clean it up. You cannot just dump data into a deep learning system without verifying it. Remember the Microsoft case, where Tay, the chatbot trained on Twitter feeds, started to produce racist statements.

The learn-as-you-go approach also poses problems. Deep learning methods require a training process and convergence before deployment, which can take a long time. Once trained, the system cannot simply absorb new data incrementally; the entire data set must be trained again. As a result, if you plan to add new data to your chatbot every week, you need a team of AI specialists retraining the system and redeploying it every week. As one can imagine, this is not a scalable business solution.


When Facebook launched its chatbot platform, it assumed that buttonizing conversations could solve part of the combinatory explosion problem described here. First of all, let’s make one thing clear:

If the user is not allowed to enter free expressions any time during conversation, it is no longer a chatbot, or conversational AI. It is a toy.

Most Facebook chatbot developers jumped on the idea of buttonizing entire conversations, thus yielding nothing more than a toy. Most of the 30,000-plus chatbots developed in this fashion flopped; only a few succeeded, as reported in several articles prior to Facebook’s recent summit meeting. Entirely buttonized conversations can provide successful solutions only for very particular business types. If buttons are used alongside successfully detected free expressions, however, the combination can be powerful.


I intend to write more about the solutions later. In a nutshell, however, solutions to the chatbot problem require independent NLP solutions before any deep learning method can be used. One thing is for sure: deep learning alone is not a good fit and has no future under this “silver bullet” engineering mentality.


What Can a Chatbot Do that Search Engines Cannot?

There is a very simple distinction between a chatbot and a search engine that explains almost everything: SHORT-TERM MEMORY.

A search engine like Google has no short-term memory. Google takes your query and brings results; the job is done. Your next query is completely new to it: a new session with no ties to the previous one.

A chatbot, on the other hand, can remember 2, 3, 4, or N steps back, which gives it a huge advantage in responding with more focus and higher accuracy. Applications like “advisor” chatbots in particular can take advantage of this fact. However, remembering N steps back poses a challenging technical problem that can grow in a combinatory fashion. Without getting into such technical details, let’s see an example.

Multiple Questioning Before Presenting Answers
A showcase example is the chatbot Davis Hunter, which is designed to find you new travel destinations based on your choices. The multiple-questioning operation uses short-term memory, as shown below.

At the end of the questioning steps, the chatbot presents travel destinations with precision. It has used its short-term memory to remember all your inputs before deciding on its list of destinations. Once the user selects one of the options, Davis starts to present more information about the destination, using free content from Wikivoyage.
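The short-term memory at work here can be sketched in a few lines. The `AdvisorBot` class is hypothetical, and the real system does far more than concatenate inputs, but the contrast with a memoryless search engine is the point:

```python
# Sketch of the short-term memory that separates a chatbot from a search
# engine: the last N user inputs are retained and combined into one intent.

from collections import deque

class AdvisorBot:
    def __init__(self, memory_steps=4):
        self.memory = deque(maxlen=memory_steps)   # remembers N steps back

    def hear(self, user_input):
        self.memory.append(user_input)

    def combined_query(self):
        # A search engine sees only the last line; the chatbot sees all of them.
        return " ".join(self.memory)

bot = AdvisorBot()
for step in ["an island", "in Spain", "with festivals", "and good seafood"]:
    bot.hear(step)
print(bot.combined_query())  # "an island in Spain with festivals and good seafood"
```

Because `deque(maxlen=N)` silently drops the oldest entry, the bot keeps a rolling window of the conversation rather than an unbounded transcript.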

The operation shown above is a blueprint for any kind of advisory chatbot on any subject.

If you type the same query into Google: “island in spain that has festivals and good seafood restaurants,” you will end up with poor results, as shown below, simply because your query is too long and falls into the long tail, a region where search engines cannot handle queries.

Search will Shift to Specialized Chatbots in the Near Future
It is fair to assume that conventional search will die out as the “Google generation” is steadily replaced by the “Siri generation,” who are more inclined to use messaging and chatting platforms. This transformation is already at work and is expected to accelerate as chatbots improve and spread into every vertical.

The expectation that a search engine user will sift through dozens of inaccurate results is becoming obsolete and intolerable for a new generation that grew up with persistent messaging habits highly suitable for chatbot interaction.

The key to this transformation is the ability to create quality chatbots with an easy and familiar effort (like writing a blog entry), which would accelerate the proliferation of viable chatbots in every subject.

