Presence AI, the French start-up taking the opposite approach to Google Duplex

Headquartered in San Francisco, the start-up co-founded by French entrepreneurs manages commercial appointment booking by SMS or via Alexa. It is winning over the beauty sector.

Could a start-up developed in California by French founders counter the plans of Google Duplex? As a reminder, Google's new conversational AI service offers to phone on the user's behalf to make appointments. "Google Duplex proposes to generate more commercial calls at a time when companies want to reduce them," says Michel Meyer, one of its three co-founders, who is also a well-known figure of the French internet, having founded Multimania (a French community site sold to Lycos in 2000). Created in 2016, his start-up tackled the problem the other way around, letting customers book by SMS or through Alexa, Amazon's voice assistant, while awaiting Google's. The application also sets itself apart from intelligent assistants such as Julie Desk or Clara, which automate appointment scheduling on top of e-mail and calendars.

The idea for Presence AI came to Michel Meyer while looking over his eldest daughter's phone bills. He realized that she no longer made phone calls, only exchanged text messages with her contacts. And she is not the only one. According to him, fewer and fewer people pick up the phone, and the trend can only grow as Generations Y and Z, glued to their smartphones, arrive in force.

"We are entering the age of the conversational internet," says Michel Meyer. "Businesses must therefore optimize messaging and intelligent-assistant interactions, especially since scheduling is a time-consuming activity and more than 20% of meetings are moved or canceled." By text or voice, Presence AI intends to let businesses be available 24/7 to book, confirm or reschedule an appointment immediately.

Incubated by the Alexa Accelerator powered by Techstars

To manage SMS, Presence AI plugs into the company's landline number or creates a dedicated one. On the voice side, the platform works closely with Alexa's teams to optimize responses. From July to October 2018, the company took part in the Amazon Alexa Accelerator incubation program powered by Techstars in Seattle. "With Alexa, commands are often simple," notes Michel Meyer. "Booking appointments hides a certain level of complexity, with date constraints and identification of the requested services."

While a large number of appointments are handled automatically, Presence AI can also orchestrate multichannel exchanges. The customer starts the conversation on Alexa, for example, and the confirmation of the time slot is sent by SMS. Some bookings still require a professional to check their calendar: a customer of a hair salon might ask for both a cut and a color with special requirements. The AI then hands over to their usual hairdresser.

Presence AI aims not only to achieve the highest conversion rate, but also to improve the frequency of visits. The artificial intelligence notifies the client that it is time to make an appointment, based on the history of past visits or a prolonged absence. Each company sets its own strategy. "If a business takes a given customer from six visits a year to seven, the impact on its revenue is immediate, for a virtually zero acquisition cost," says Michel Meyer.
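A minimal sketch of such a visit-history heuristic could look like the following. All field names and thresholds here are invented for illustration, not Presence AI's actual rules.

```python
from datetime import date

def is_due_for_reminder(visit_dates, today, slack_days=7):
    """Suggest a reminder when the time since the last visit
    exceeds the customer's average interval plus some slack.
    Purely illustrative, not Presence AI's real logic."""
    if len(visit_dates) < 2:
        return False  # not enough history to estimate a rhythm
    visits = sorted(visit_dates)
    gaps = [(b - a).days for a, b in zip(visits, visits[1:])]
    avg_gap = sum(gaps) / len(gaps)
    days_since_last = (today - visits[-1]).days
    return days_since_last > avg_gap + slack_days
```

A customer who came roughly monthly and has not been seen for seven weeks would be flagged; one visit alone triggers nothing.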

Faced with fears that AI will replace employees, Michel Meyer believes that, on the contrary, it removes the burden of answering the phone so staff can focus on higher value-added tasks. "At the end of the free trial period, it is also the staff who are our best advocates," he says. Price-wise, Presence AI offers an entry-level version at $49 per month for fewer than 200 customers, a business version at $129 per month with an unlimited number of customers, and finally an enterprise edition, priced on quote, for an unlimited number of sites and custom integrations.

Suggesting additional sales

Presence AI raised funds in 2016 from business angels and two investment funds, France's Newfund and America's Blue Capital. More recently, Techstars invested $120,000 in the company. To grow its installed base, Presence AI plans to support other messaging channels such as WhatsApp, but also to broaden its prospecting. So far, the young company has focused on the health and beauty market (hairdressers, massage parlors, eyelash and nail care specialists) and related trades (sports classes, sports equipment rental). It integrates its application with management software focused on these segments, such as SalonBiz or Mindbody Booker. In the future, it plans to tackle other sectors, including automotive.

Presence AI also plans to support new scenarios. For cross-selling and up-selling, "it will be a matter of offering additional services," says Michel Meyer. "You are coming to the salon tomorrow; we offer you a head massage. You chose a particular oil during a massage session; would you like to buy it?"

In terms of expansion, Presence AI, currently present in 24 states, aims to complete its coverage of the United States but also to expand internationally. The company has been contacted by several English-speaking companies in the United Kingdom, Australia and Canada. "When it comes to adding a new language, French will be at the top of the list along with Spanish," promises Michel Meyer. For now, the San Francisco-based start-up employs only six people. Alongside Michel Meyer are the two other co-founders: Pierre Berkaloff of France, CTO, and John Kim, in charge of customer follow-up.

"Product CEO" of Presence AI, Michel Meyer has, in parallel, invested in a dozen start-ups that he advises. Among them are Aircall (a call center software publisher) and Algolia (the well-known search engine for applications), both also created by French founders.

AI, new battlefield of Office 365 and G Suite

While Microsoft focuses on document creation support, Google concentrates on Gmail, with assisted message writing.

Historically, Office 365 was the first productivity suite to integrate machine learning. This dimension was introduced with Delve in 2015. The goal of this module? To offer users a personalized, prioritized view of their content (files, e-mails, conversations…) according to their relationship and document graph: reporting lines, collaboration history, the web of their interests… Delve remains little deployed to date, and feedback on the application is rare or non-existent. Though less and less promoted, Delve is still maintained by Microsoft; it is even marketed from the American group's French data centers. Four years later, Google is catching up. In mid-2018, the Mountain View company caused a sensation by equipping G Suite with a series of AI functions centered on Gmail.

| Feature | Office 365 | G Suite |
| --- | --- | --- |
| Help with content creation | x | |
| Bots and team messaging | x | |
| Knowledge management | x | |
| Smart messaging | | x |
| Unified and personalized document search | | x |
| Grammatical and semantic suggestions | x | x |

At the heart of these advances, Gmail has gained a smart reply feature that offers predefined answers based on received messages. To an e-mail requesting a call "Wednesday at 11am or Thursday at 5pm", it will for example suggest several replies: "Wednesday is perfect", "Thursday works for me", "Both work for me" or "Neither of the two works for me". Simple and efficient.
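Gmail's real smart reply is a learned model, but the behavior on this kind of message can be mimicked with a toy rule, sketched here purely for illustration:

```python
import re

# Toy rule: if a message proposes two day/time options joined by
# "or", generate one canned reply per option plus a refusal.
SLOT = r"([A-Z]\w+day) at \d{1,2}(?:am|pm)"

def smart_replies(message):
    m = re.search(SLOT + r" or " + SLOT, message)
    if not m:
        return []  # no recognizable pair of time slots
    first, second = m.group(1), m.group(2)
    return [f"{first} is perfect",
            f"{second} works for me",
            "Neither of the two works for me"]
```

The gap with the real thing is exactly what the article describes: a learned system handles phrasing this regex would never anticipate.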

"Google manages to decode fairly complex e-mails," says Arnaud Rayrole, CEO of the French consulting firm Lecko, which specializes in collaborative solutions. "To a message asking for a new quote alongside other peripheral information, smart reply can put forward consistent suggestions: 'Here is the modified quote' or 'Excellent news, here is our new proposal'." Thomas Poinsot, digital consultant at the French services company Spectrum Group, adds: "Smart reply is a real plus, especially on the move, when there is little time to answer." Google already plans to extend it to its Hangouts instant messaging.

Gmail: AI at the service of messaging

Another AI-based Gmail lever: Smart Compose. Available only in English for now, this feature auto-completes a message as it is typed. By analyzing the words entered, it finds the most probable next words given the context, and thus speeds up typing. "This AI will expand to other languages and grow richer over time by identifying how you address a given person," says the Mountain View giant (read the official post).
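Smart Compose relies on large neural language models, but the core idea (predict the next word from what was just typed) can be illustrated with a toy bigram counter:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word -> next-word transitions over a toy corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            model[current][following] += 1
    return model

def suggest_next(model, word):
    """Return the continuation most often seen after `word`."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None
```

Trained on a user's own mail, even this crude counter would pick up personal turns of phrase; the neural version generalizes far beyond exact word pairs.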

Animation by Google showing Gmail's built-in email composition help. © Google

On the smart e-mail front, Office 365 is nowhere to be seen for now. In artificial intelligence, Office 365 R&D focuses on assisting file creation. Among the first Office components affected: PowerPoint. Based on analysis of the presentation being created, the application recommends related content (images or text), stored locally or on the web, that can be used in the writing. It also uses image recognition to suggest complementary photos (read the Microsoft article).

Smart content in PowerPoint

Of course, G Suite is also developing this approach (read the post), but Microsoft pushes it further. In Excel, for example, the editor now includes a smart assistant called Ideas that identifies trends, patterns or outliers in a table. Automatic translation is also supported in Excel, PowerPoint and Word, with 60 languages covered, as well as in Stream for automatic video captioning. In parallel, Microsoft continues to optimize SharePoint's machine learning layer. On the program: image decoding for character recognition and metadata extraction, plus sentiment analysis and facial recognition using Azure Cognitive Services. The same logic applies to Power BI: the data visualization application can use Azure Machine Learning to refine its processing, including identifying entities (organizations, people, locations). Via Azure ML, it even offers the possibility of building one's own machine learning model (read the post).
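Ideas' outlier spotting can be approximated with a simple z-score rule. This is a sketch of the general statistical idea, not Microsoft's actual algorithm:

```python
def flag_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations
    from the mean of the column. Crude stand-in for what a
    feature like Excel's Ideas does on a numeric table."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # constant column: nothing stands out
    return [v for v in values if abs(v - mean) / std > threshold]
```

On a column of revenues like [10, 11, 9, 10, 12, 50], only the 50 would be surfaced to the user.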

Animation illustrating the capabilities of Ideas, the intelligent assistant integrated into Excel's right-hand column. © Microsoft

Another battleground where AI comes into play: content search. Unsurprisingly, this is an area where Google shows a good head start. "Called Google Search, Google Drive's search engine is very powerful," says Arnaud Rayrole at Lecko. "It suggests results based on your favorite themes and on how frequently you consult and edit a given document. It ranks results according to the authors you exchange with the most." Only downside: Google Search does not go as far as managing skills. "It does not analyze employees' profiles, careers and work history to unearth internal knowledge that could be useful for a given project," continues Arnaud Rayrole, an approach that Microsoft takes via Delve or Yammer, Office 365's enterprise social networking component.
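A toy ranking heuristic in the spirit Rayrole describes (weight documents by consultation and editing frequency and by exchanges with their authors) might look like this. All field names and weights are invented for the example:

```python
def rank_results(docs, user):
    """Personalized ranking sketch: score each document by the
    user's edit/view history and message volume with its author."""
    def score(doc):
        return (2 * user["edits"].get(doc["id"], 0)
                + user["views"].get(doc["id"], 0)
                + 3 * user["messages_with"].get(doc["author"], 0))
    return sorted(docs, key=score, reverse=True)
```

A document you barely opened can still rank first if its author is someone you exchange with constantly, which matches the behavior described above.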

A more readable strategy at Google

When it comes to document search, Microsoft still has some way to go to catch up with Google. The group has announced its intention to move toward a unified experience in this area. For now, several indexing engines coexist within Office 365, each tied to an application (Yammer, Teams, Outlook, OneDrive…). The publisher intends to consolidate them into a single base called Microsoft Search, R&D work that should be completed by the end of the first half of 2019. The promise? An AI-based search environment delivering personalized, consistent results across the entire platform. Eventually, Satya Nadella's company even intends to move toward a common search interface covering not only Office 365 but also Windows 10 and the Dynamics 365 software suite.

For the consultants surveyed, these Office 365 developments generally go in the right direction, but are less legible than the simple, pragmatic AI levers Google is deploying in Gmail.

Last ground of opposition: chatbots. Google and Microsoft both integrate this dimension into their respective suites through a team messaging application: Hangouts Chat for the former and Teams for the latter. Launched in early 2017, just over a year before Hangouts Chat, "Teams has significantly more conversational agents," says Thomas Poinsot at Spectrum Group. "But as at Google, they are mostly limited to simple, click-to-action tasks. Rare are those equipped with a conversational AI layer capable of providing the expected response by querying the right applications."

Algorithms that shape the business

"In the end, all these efforts have a common purpose: to help users manage ever-increasing volumes of data. The goal is laudable, but AI is never neutral. It is nothing less than a worldview coded into algorithms, which ultimately gives Google and Microsoft the opportunity to guide an organization's choices by prioritizing information according to their own rules," warns Arnaud Rayrole. "Likewise, they encourage evaluating employee performance by their own logic." On this point, Lecko's CEO points to the Office 365 MyAnalytics module: "It provides KPIs inspired by American culture, for example the rate of users sending e-mails outside working hours."

Artificial intelligence: what consequences for PIM, MDM and DAM?

Progress in AI is impressive. Limits remain, but it is already possible to exploit it in product information management (PIM) solutions, master data management (MDM) and digital asset management (DAM) systems.

AI can do anything

The media love artificial intelligence! They talk about it a lot, including the powers attributed to it. And to convince the most reluctant, it managed to beat humans at the game of Go. Such a feat was considered impossible for a machine: calculating all the game's possibilities would take too long, and Go requires experience and intuition… But AlphaGo's advances in deep learning won out.

Thanks to this major technological breakthrough, we discover that artificial intelligence detects cancers, drives vehicles, converses in natural language… Artificial intelligence also makes coffee and even serves as a restaurant chef.

With such progress, we are led to believe that artificial intelligence is already capable of anything, like programming in place of humans, composing music or creating works of art. Literature and science-fiction films prime us to think that the uprising of the machines is coming: Skynet will take power, and humanity is in danger because it is losing control of machines so superior to it.

Artificial intelligence does not think

Yet artificial intelligence does not think, at least not the kind that works today. AI is just a system of very advanced algorithms, defined and controlled by humans. Artificial intelligence can exploit gigantic amounts of data (big data) to extract statistics and complex mathematical formulas that allow it to recognize and reproduce. And that is the difference with humans: AI is, so far, unable to know and produce on its own. Such autonomy is not for tomorrow.
In a recent interview, Luc Julia, one of the designers of Apple's Siri assistant, recalled that "artificial intelligence does not exist", as he explains in a book of the same title. There is still a long way to go before artificial intelligence becomes a real intelligence capable of consciousness, emotions and instinct, able to work across several fields and to learn quickly by itself…

Recent advances in machine learning and deep learning rest on "learning" performed by the machine. This learning requires very large amounts of data and many iterations to work. The victory of Google's DeepMind over professional Starcraft II players is a good example: it took the equivalent of nearly 200 years of training to reach that level of play. A gigantic amount of time, which shows that humans, happily, learn much faster. And humans can capitalize on their experience to apply it to new subjects (learning a new video game), whereas artificial intelligences remain specialized in very precisely defined tasks.

PIM / MDM / DAM and artificial intelligence

Today, product information management (PIM) solutions, master data management (MDM) and digital asset management (DAM) systems can centralize large amounts of information (technical characteristics, logistics, marketing, pricing…). The goal is to increase the quality of the information and publish it effectively across different channels.

Using artificial intelligence in these solutions promises to solve all the problems encountered when implementing these software tools. Indeed, these systems can organize information with a high level of granularity using many metadata. But the quality of the information depends first and foremost on what users enter: a wrong price, an inconsistent image, a duplicate record or incomplete properties… all these problems are caused by incorrect data entry.

With AI, we could have an intelligent system able to read all this information, identify what is wrong, correct errors and, better still, enter the data itself.

The latest technologies already allow some of these actions. Today, a system can analyze your images and compare them to your product descriptions, translate texts automatically, or analyze data to derive statistical rules and identify the elements that do not respect them.
To perform such actions properly, you must rely on existing "knowledge bases" (such as Google Translate, an artificial intelligence system that has already learned to translate between many languages) or develop your own knowledge base. In the latter case, the task is complex because, for this base to be usable, it must contain a very large volume of very high-quality data. Not everyone has tens or hundreds of thousands of homogeneous product sheets of consistent quality with which to train a system:

  • If the quality of the data is uncertain or fluctuates too much, the learning will too, and it will not produce good results;
  • If the amount of data is insufficient, the learning cannot succeed;
  • If the elements to be processed are too heterogeneous, no "statistical trend" will emerge properly.
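Before any machine learning, the rule-checking described above can already be illustrated with a deliberately naive validation pass. Every field name and threshold here is invented for the example:

```python
REQUIRED_FIELDS = {"name", "price", "image", "description"}

def validate_sheet(sheet, category_prices):
    """Hypothetical quality check on one product sheet: flag
    missing fields and prices far from the category's average."""
    issues = [f"missing field: {f}"
              for f in sorted(REQUIRED_FIELDS - sheet.keys())]
    prices = category_prices.get(sheet.get("category"), [])
    if prices and "price" in sheet:
        avg = sum(prices) / len(prices)
        # Arbitrary band: 20% to 500% of the category average.
        if not 0.2 * avg <= sheet["price"] <= 5 * avg:
            issues.append("price out of expected range")
    return issues
```

The point of an AI layer is to learn such bands and field expectations from the data itself rather than hard-coding them.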

The construction of training and control data sets is thus a very delicate task. It requires the help of specialists (the famous data scientists) and is the most time-consuming part of machine learning projects.

As for the ability of machines to enter data or choose the right images, it all depends on the objective:

  • Preparing long descriptions from various characteristics, or automatically associating images with the right products based on their metadata: this is already possible with "traditional" algorithms. What AI could add is analyzing, from thousands of product sheets, how humans did it in order to deduce the rules to apply. The benefit is modest, because users are usually able to formulate these rules directly.
  • Writing appropriate marketing texts or choosing the most flattering image to present a product: such actions are subjective and rely on intuition. They require an "intelligence of situations" that current artificial intelligence does not have.
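The "traditional" rule-based generation mentioned in the first point can be as simple as a template over structured characteristics. The field names are purely illustrative:

```python
def build_description(product):
    """Rule-based long-description generation: assemble text from
    structured characteristics, no machine learning involved."""
    parts = [f"{product['name']}:"]
    if "color" in product:
        parts.append(f"available in {product['color']},")
    if "material" in product:
        parts.append(f"made of {product['material']},")
    parts.append(f"priced at ${product['price']:.2f}.")
    return " ".join(parts)
```

This is exactly the kind of rule a user can state directly, which is why learning it from thousands of examples adds little.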

Exploiting progress in AI in the PIM / MDM / DAM area is thus already possible. Limits remain, but they are pushed back a little more each day.

Artificial intelligence and data capture: two technologies that go hand in hand!

From Siri and Alexa to chatbots and robot traders, artificial intelligence has fundamentally changed many aspects of how we work, and data capture is no exception.

Close your eyes and imagine dropping your invoices into a scanner, walking away, and letting your computer archive and sort them so that you only have the "exceptions" to handle before paying the bills. Think this is still a distant dream? Not so sure.

Did you know? Truly intelligent capture software does not require templates, keywords, exact definitions, classifications or indexes to do a good job. It can extract the right information and make sense of a multitude of documents on its own, whatever their size, format, language or the symbols used.

Three ways in which artificial intelligence modifies data capture

With intelligent capture software, the AI-based "engine" can learn, like a new employee, to perform data entry. It can quickly extract contextual information and learn to interpret the patterns and characteristics of different document types. In addition, it can validate the data and provide additional safeguards that employees cannot achieve without tedious manual searches.
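An engine "taught" from a handful of labelled examples can be sketched with a tiny word-frequency scorer. Real products use far richer models; this only shows the teach-by-example principle:

```python
from collections import Counter

def train(examples):
    """Learn per-class word frequencies from a few labelled
    example documents (the user 'teaching' some variations)."""
    profiles = {}
    for label, text in examples:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def classify(profiles, text):
    """Score each class by word overlap and pick the best."""
    words = text.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in profiles.items()}
    return max(scores, key=scores.get)
```

Two or three examples per document type are enough for this toy to start sorting, which mirrors the low-effort teaching described above.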

Intelligent data capture has changed the game for three main tasks: classification, extraction and validation.

– Classification

With classification, also called "document sorting", the software learns to recognize different document types as the user "teaches" it a few variations and examples. The machine learning engine reduces the number of rules to apply, which gives a high level of confidence in document classification with a minimum of manual effort.

– Extraction

Artificial intelligence has worked wonders for extracting data from semi-structured and unstructured documents. Consider, for example, identifying the invoice number, which traditionally involves building complex templates, keywords and links around specific fields and labels. A new employee can look at a document and immediately locate the invoice number, regardless of the form's layout. Now software can do it too, without any programming.

– Validation

AI-driven validation extends lookups with different tools. It can use several sources of information (such as quantity, price, description or amount) to link an item to the system database.
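The invoice-number extraction described above is classically done with hand-written patterns like the sketch below; the appeal of the learned approach is precisely that it avoids maintaining such template-specific rules for every supplier:

```python
import re

# Illustrative patterns only: real invoices vary wildly in layout
# and wording, which is what template-free ML extraction addresses.
PATTERNS = [
    r"invoice\s*(?:no\.?|number|#)?\s*[:#]?\s*([A-Z0-9-]{4,})",
    r"facture\s*(?:n[o]\.?)?\s*[:#]?\s*([A-Z0-9-]{4,})",
]

def extract_invoice_number(text):
    for pattern in PATTERNS:
        m = re.search(pattern, text, flags=re.IGNORECASE)
        if m:
            return m.group(1)
    return None
```

Each new supplier layout means a new pattern here, whereas a trained extractor generalizes the way the "new employee" in the example does.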

Working in tandem: intelligent capture and automation of robotic processes

The market for RPA (Robotic Process Automation) is booming, and so far it is delivering on its promise to automate complex, rule-based processes. Forrester expects a global market, of which document capture is only a fraction, worth $2.9 billion in 2021, compared with just $250 million in 2016 (more than tenfold growth in five years).

In other words, the system itself becomes smarter.

Beyond the obvious advantages of automation, intelligent data capture software also eliminates guesswork on the configuration side. It is important to note that the goal of AI-based data capture is not to replace humans, but to drive as much automation as possible with machines that can perform tasks intelligently. In the end, employees are freed from mundane tasks and can take on valuable work that requires a human mind to do things right.

In a world where information and documents are constantly changing, any company that wants to be successful must learn and adapt – ideally with technology that does the same thing.

How Facebook put AI at the heart of its social network

Detection of inappropriate content, newsfeed ranking, facial recognition… The platform makes massive use of machine and deep learning.

Artificial intelligence (AI) is present at every level of the Facebook social network. At the heart of the newsfeed, it prioritizes content based on users' interests, their browsing history and their social graph. Likewise, it serves them advertisements they have a high probability of engaging with, from click to purchase. Harder still, AI is also used by the platform to detect unauthorized logins and fake accounts. Finally, it orchestrates other, less visible tasks that remain key to the social network's daily operation: personalizing the ranking of search results, identifying friends in photos (by facial recognition) to suggest tagging them, and handling speech recognition and text translation to automatically caption videos in Facebook Live…

| Target function | Algorithm family | Type of model |
| --- | --- | --- |
| Facial recognition, content labeling | Deep learning | Convolutional neural network |
| Detection of inappropriate content, unauthorized access, classification | Machine learning | Gradient boosted decision trees |
| Personalization of newsfeed, search results, advertisements | Deep learning | Multilayer perceptron |
| Natural language understanding, translation, speech recognition | Deep learning | Recurrent neural network |
| Matching users | Machine learning | Support vector machine |

Source: Facebook research publication

The social network makes extensive use of standard machine learning techniques: statistical algorithms (classification, regression, etc.) suited to building predictive models from numerical data, for example to predict changes in activity. Facebook uses them to find inappropriate messages, comments and photos. "Artificial intelligence is an essential component for protecting users and filtering information on Facebook," insists Yann LeCun, vice president and chief AI scientist of the group. "A series of techniques are used to detect hateful, violent, pornographic or propaganda content in images and texts, and, conversely, to label the elements likely to…"

Multiple neural networks

Alongside traditional machine learning, Facebook obviously also deploys deep learning. Based on the concept of the artificial neural network, this technique applies to numbers as well as audio or graphic content. The network is divided into layers, each responsible for interpreting the results of the previous one; the AI thus refines its analysis by successive iterations. In text analysis, for example, the first layer will handle detecting letters, the second words, the third noun or verb groups, and so on.
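The layer-by-layer idea can be shown with a minimal, dependency-free forward pass, where each layer transforms the previous layer's output (toy numbers, nothing like Facebook's actual models):

```python
def relu(x):
    """Standard non-linearity: negative activations become zero."""
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    """One fully connected layer: each output neuron is a
    weighted sum of every input plus a bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(inputs, layers):
    """Stack layers: each interprets the previous layer's output."""
    for weights, biases in layers:
        inputs = relu(dense(inputs, weights, biases))
    return inputs
```

Training consists of adjusting the weights and biases over many iterations; here they are simply given.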

Unsurprisingly, Facebook applies deep learning to facial recognition, particularly via convolutional neural networks, an algorithm family particularly efficient at image processing. Via the multilayer perceptron method, ideal for handling rankings, the social network personalizes the newsfeed and the results of its search engine. Finally, deep learning is also used by Facebook for machine translation.

To recognize embedded text in images, Facebook uses a triple layer of neural networks. © Capture / JDN

For image processing in particular, Facebook has built a deep learning platform called Rosetta. It relies on a triple network of neurons (see screenshot above). The first, convolutional in nature, maps the photos. The second detects the zone or zones containing characters. The last tries to recognize the words, expressions or sentences present within the identified regions. This technique, known as Faster R-CNN, has been slightly adapted by Facebook to its needs. The objective: to better qualify posted images and optimize their indexing, whether in the newsfeed or the search engine.

A deployment pipeline

To orchestrate its various AIs, the American group has a home-made pipeline, code name FBLearner (for Facebook Learner). It starts with an app store of sorts: for internal data science teams, it federates a catalog of reusable components, for training phases as well as for deploying algorithms. A workflow layer is then added to manage model training and evaluate results. Training can run on compute clusters (CPUs) as well as on clusters of graphics accelerators (GPUs), again designed in-house. The final building block is an environment that runs the predictive models, once trained, in situ at the heart of Facebook's applications.

Workflow built by Facebook to develop and deploy its machine learning and deep learning models. © Capture / JDN

On the deep learning library side, Facebook has historically chosen to develop two, each now available in open source. Designed for its fundamental research needs, the first, PyTorch, is characterized by its great flexibility, its advanced debugging capabilities and above all its dynamic neural network architecture. "Its structure is not determined and fixed; it evolves as learning progresses and training examples are presented," says Yann LeCun. The downside: the Python execution engine under the hood makes PyTorch ill-suited to production applications. Conversely, the second deep learning framework designed by Facebook, Caffe2, was built precisely for production deployments.

More recently, Facebook set up a tool called ONNX to semi-automate the conversion to Caffe2 of models originally created in PyTorch. "The next step will be to merge PyTorch and Caffe2 into a single framework called PyTorch 1.0," says Yann LeCun. "The goal is to benefit from the best of both worlds: a flexible infrastructure for research and efficient compilation techniques to produce usable AI applications."

Toward the design of a processor tailored for AI

Following the model of Google's Tensor Processing Units (TPU), tailored for its TensorFlow deep learning framework, Facebook also plans to develop its own chips optimized for deep learning. "Our users upload 2 billion photos a day to Facebook, and within 2 seconds each is handled by four AI systems: one filters inappropriate content, the second labels images for their integration into the newsfeed and the search engine, the third performs facial recognition, and the last generates a description of the images for the blind. All of this consumes gigantic computing and electrical resources. With the real-time video translation we are now looking to develop, the problem will intensify."