Adobe Sensei: content intelligence to transform customer experience


Recent research by Adobe revealed that people spend an average of 7.8 hours a day interacting with digital content (among teenagers, that figure rises to 11.1 hours a day). At the same time, people have become increasingly demanding about the relevance of the content shown to them, and wary of poor experiences.

These findings illustrate the tremendous pressure on companies, brands and media publishers to produce, publish and track captivating content at ever-increasing frequency and speed in order to satisfy their audiences and win new customers.

To support marketing teams in keeping up with these challenges, Adobe has further refined the capabilities of Adobe Sensei to automate the delivery of personalized content. With a powerful artificial intelligence (AI) and machine learning (ML) framework, Sensei connects Adobe Experience Cloud (e.g. Adobe Experience Manager, Adobe Target, Adobe Campaign and Adobe Analytics) with Adobe Creative Cloud. This tighter integration between platforms brings content intelligence into content marketing, helping designers, marketers and web analysts manage content more intelligently through fast, seamless workflows.

Discover Adobe Experience Manager’s latest artificial intelligence features:

1. Smart Tags: intelligent image discovery

This feature identifies image characteristics that match the brand's style and guidelines, then tags the images against a search taxonomy for easy retrieval and selection.

The algorithm is initially trained on a set of typical brand images to learn which elements are brand-specific and should be recognized in new images. As new images are tagged, they continually refine the recognized features and optimize the learned rules.
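Smart Tags' actual models are proprietary, but the tagging step can be pictured roughly as follows: a classifier proposes candidate labels with confidence scores, and only labels that belong to the brand's taxonomy and clear a confidence threshold are kept as tags. Everything here (the taxonomy, threshold, and predictions) is a hypothetical illustration, not Adobe's API.

```python
# Illustrative sketch only: Smart Tags' real models and taxonomy are proprietary.
# Assumes a hypothetical classifier output of (label, confidence) pairs.

BRAND_TAXONOMY = {"sneaker", "running", "outdoor", "logo"}
CONFIDENCE_THRESHOLD = 0.6

def smart_tags(predictions, taxonomy=BRAND_TAXONOMY, threshold=CONFIDENCE_THRESHOLD):
    """Keep predicted labels that belong to the brand taxonomy
    and clear the confidence threshold."""
    return sorted(label for label, score in predictions
                  if label in taxonomy and score >= threshold)

predictions = [("sneaker", 0.92), ("street", 0.81), ("running", 0.65), ("logo", 0.40)]
print(smart_tags(predictions))  # ['running', 'sneaker']
```

In this toy run, "street" is discarded because it is not part of the brand taxonomy, and "logo" because its confidence is too low; the learning loop described above would, over time, adjust which labels and thresholds the model uses.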

2. Smart Layout: multiformat, multichannel content personalization

Powered by Adobe Sensei, the Smart Layout feature automatically generates the content formats and design layouts best suited to each audience's typical behavior and preferred channels. That way, a restaurant can automatically present different menus, visuals and special offers to different target groups, such as vegetarians or meat eaters.

Content is personalized in Adobe Experience Manager in conjunction with Adobe Target across all marketing channels. For example, a retailer may email their customers about a new store opening with personalized invitations to the opening party and targeted offers. The same content can then be reused for the merchant's mobile app and Facebook channel.
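Smart Layout's actual selection is model-driven and proprietary; conceptually, though, the restaurant example above amounts to mapping an audience segment and a channel to a layout variant, with a fallback for unknown combinations. The variant table and names below are hypothetical.

```python
# Hypothetical variant table; Smart Layout's real selection is model-driven,
# not a static lookup. Segment and variant names are invented for illustration.

VARIANTS = {
    ("vegetarian", "web"): "menu_veg_hero",
    ("vegetarian", "email"): "menu_veg_compact",
    ("meat_eater", "web"): "menu_grill_hero",
}

def pick_layout(segment, channel, default="menu_generic"):
    """Return the layout variant for a segment/channel pair,
    falling back to a generic layout when no variant exists."""
    return VARIANTS.get((segment, channel), default)

print(pick_layout("vegetarian", "email"))  # menu_veg_compact
print(pick_layout("vegetarian", "sms"))   # menu_generic
```

The point of the sketch is the fallback behavior: a channel or segment the system has no variant for still gets a usable default, so personalization degrades gracefully rather than breaking the experience.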

3. Dynamic Media: adjusting images for different devices

With intelligent Dynamic Media capabilities in Adobe Experience Manager, marketers can automatically locate and generate high-quality visuals that adjust to device screen size, resolution and available internet bandwidth.

Smart Imaging detects the device's screen size, resolution and bandwidth and minimizes file sizes to ensure quick loading, while Smart Crop detects the focal point of any image and crops around it, preserving the intended area of interest regardless of screen size.
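The focal-point cropping idea can be sketched in a few lines. This is not Adobe's algorithm (detecting the focal point itself is the hard, proprietary part); it simply shows the geometric step: given an image, a focal point and a target aspect ratio, compute a crop window that keeps the focal point as close to center as the image bounds allow.

```python
# Minimal geometric sketch of focal-point cropping. Focal-point *detection*
# (the ML part of Smart Crop) is assumed to have already happened.

def smart_crop(img_w, img_h, focal, target_ratio):
    """Crop to the target aspect ratio (width/height), keeping the focal
    point centered as far as the image bounds allow.
    Returns (left, top, crop_width, crop_height)."""
    fx, fy = focal
    if img_w / img_h > target_ratio:
        # Image is too wide for the target ratio: reduce width.
        new_w, new_h = int(img_h * target_ratio), img_h
    else:
        # Image is too tall: reduce height.
        new_w, new_h = img_w, int(img_w / target_ratio)
    # Center the window on the focal point, clamped to the image bounds.
    left = min(max(fx - new_w // 2, 0), img_w - new_w)
    top = min(max(fy - new_h // 2, 0), img_h - new_h)
    return (left, top, new_w, new_h)

# A 16:9 image cropped square around a focal point on the right-hand side:
print(smart_crop(1600, 900, (1200, 450), 1.0))  # (700, 0, 900, 900)
```

The clamping is what "regardless of screen size" implies in practice: when the focal point sits near an edge, the crop window slides toward it as far as possible instead of running off the image.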

In addition, the feature generates high-quality repeating patterns from sections of product images with particularly uniform colors, structures or other components. These patterns can then be added as decorative elements in the page layout; because they are derived from the images themselves, they blend naturally with the rest of the design.

4. Automated Forms Conversion: better customer experiences with forms

Filling out forms is a necessary nuisance when communicating, exchanging information or completing transactions. But a poor form experience easily leads to customer frustration or abandonment. This is especially true for PDF forms.

With the Automated Forms Conversion feature, companies can automatically convert PDF forms into a responsive, website-like experience. The feature uses image recognition techniques to identify form fields and convert them into a functionally equivalent but much more enjoyable interface, whether on desktop or mobile.
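The conversion step after field detection can be pictured as a simple mapping from detected field descriptors to HTML form elements. This sketch assumes a hypothetical descriptor format (a dict with `type` and `name`); the recognition stage that produces such descriptors, and Adobe's actual output format, are not shown.

```python
# Illustrative sketch of the PDF-field -> HTML mapping step.
# The field descriptors are a hypothetical format; Automated Forms
# Conversion's detection and output are proprietary.

def field_to_html(field):
    """Map a detected form-field descriptor to an HTML form element."""
    kind = field["type"]
    name = field["name"]
    if kind == "checkbox":
        return f'<input type="checkbox" name="{name}">'
    if kind == "multiline":
        return f'<textarea name="{name}"></textarea>'
    # Default: a single-line text input.
    return f'<input type="text" name="{name}">'

fields = [{"type": "text", "name": "full_name"},
          {"type": "checkbox", "name": "subscribe"}]
print("\n".join(field_to_html(f) for f in fields))
```

Rendering native HTML inputs rather than an embedded PDF is what makes the result responsive: the browser handles layout, focus and mobile keyboards, which a static PDF cannot.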

What’s the future?

Abhay Parasnis, CTO of Adobe, describes Adobe Sensei as a "continuous learning system that learns from people's behavior" – it learns from what different users are doing within their digital ecosystem. From a technical point of view, Adobe Sensei essentially builds on the open-source frameworks Spark, Torch and TensorFlow. Beyond these core technologies, Parasnis also highlights the paramount importance of domain knowledge for the most successful intelligence systems to come in the next decade. In the case of Adobe, these would primarily include customer experience enhancements, such as Photoshop's Liquify feature to modify faces, or the Adobe Document Cloud feature that finds documents similar to a given one. The goal is to further develop AI capabilities across the Adobe ecosystem to unify creative and marketer workflows, better enabling brands to reach consumers across the full range of devices and channels.