I previously wrote about how we recently rolled out MessageGears Engage, our brand new product that offers secure and scalable access to customer contextual data (or any other internal data) in an easily consumable format. This enables personalization in real time, empowering marketers to deliver an unprecedented level of brand engagement and up-to-date content.
That marketer-focused overview gave a good high-level look at the product, but I’d like to take a deeper dive into the struggles that modern enterprises face, and how Engage can be an incredibly useful tool to help organizations overcome data hurdles.
Why is Engage necessary?
Enterprise marketers want access to current data in real time, so they can deliver the sorts of personalized messaging campaigns that get results. Whether that’s emails that make recommendations based upon recent purchases, push notifications that alert customers of nearby deals that expire soon, or abandoned-cart messages that offer an incentive for them to complete a purchase they didn’t finish, it’s essential today that marketers surprise and delight their customers in order to cut through the inbox clutter.
On the surface, it seems paradoxical: large companies with terabytes of data can't get that information into the place and shape marketing needs. How can that happen? There are three common scenarios:
- The first is big companies that have evolved over time and don't have all of their data in a single "place." These companies keep their CRM data in one system, historical data in another, and so on, which means data access is typically done piecemeal: an API here, a database call there. Over time, as new or different data flows into the system, these access methods stop working. For instance, I was speaking with an enterprise marketer recently who mentioned that a robust set of APIs had been built for a marketing integration sometime in 2016. They were used heavily for a while, but were forgotten after a campaign change. When they were picked back up in 2019, they were effectively useless, because they didn't include any of the columns or fields that had been added to the original data set in the meantime, and the company didn't have the IT resources to go back and fix them.
- Additionally, companies may try to utilize their traditional ESP's data source as a "real-time" data platform. This can have disastrous results. I spoke with another enterprise marketer who told me that when his team tried to send a campaign using a leading marketing cloud ESP's database as the data store for their open-time implementation, the platform slowed down so much he couldn't even log in to send another campaign.
- Finally, there are companies with a "modernized" data structure, perhaps a modern data warehouse or a CDP. These systems are fundamentally not designed for the high-concurrency access patterns Engage is built for. Databases like Snowflake and Amazon Redshift are optimized for reliable, predictable analytical workloads with scheduled maintenance (think vacuum and analyze), while Engage data and access are far more random and indeterminate. Many CDPs may have the raw capability to serve this data, but most pricing plans charge by the query, meaning that even attempting real-time data access would be prohibitively expensive.
It’s essential today that marketers surprise and delight their customers in order to cut through the inbox clutter.
One issue is that marketing is rarely prioritized when it comes to data access. Operational data that keeps the business running, along with the needs of analytics teams, typically comes first, with marketing last. So, if data access is a challenge, marketing will be at the end of a fairly long line.
How is Engage any different?
Engage was built with two primary objectives in mind: scale and access. To give you an example of the use cases we’ve designed for Engage, let’s imagine a scenario: A program manager at a large enterprise company is about to send a monthly newsletter to her 40,000,000 recipients. This newsletter will reflect various bits of useful real-time information: Follow-ups on their recent purchases, information about deals at nearby stores, and new offers that may appeal to the recipient. Assuming:
- There are four embedded images in the email template, each requiring two API calls to Engage to render
- 25% of her audience opens the message within 24 hours of sending
- Peak volume is 20 times the “average” open rate
The math for total peak load to Engage works out to roughly 18,500 API requests per second: 25% of 40,000,000 is 10,000,000 opens in 24 hours, or about 116 opens per second on average; a 20× peak means roughly 2,315 opens per second; and at eight API calls per open, that's about 18,500 requests per second.
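As a sanity check, the arithmetic can be run directly. This is a back-of-the-envelope sketch using only the figures from the assumptions above, not a benchmark of any real system:

```python
# Back-of-the-envelope peak load calculation for the newsletter scenario.
# All inputs come from the assumptions listed above.

RECIPIENTS = 40_000_000
OPEN_RATE = 0.25                # 25% open within 24 hours
WINDOW_SECONDS = 24 * 60 * 60   # opens spread over 24 hours
PEAK_MULTIPLIER = 20            # peak volume is 20x the average rate
CALLS_PER_OPEN = 4 * 2          # four images, two Engage API calls each

opens = RECIPIENTS * OPEN_RATE                              # 10,000,000 opens
avg_opens_per_sec = opens / WINDOW_SECONDS                  # ~116 opens/sec
peak_opens_per_sec = avg_opens_per_sec * PEAK_MULTIPLIER    # ~2,315 opens/sec
peak_requests_per_sec = peak_opens_per_sec * CALLS_PER_OPEN # ~18,500 req/sec

print(f"Average opens/sec:        {avg_opens_per_sec:,.0f}")
print(f"Peak opens/sec:           {peak_opens_per_sec:,.0f}")
print(f"Peak Engage requests/sec: {peak_requests_per_sec:,.0f}")
```

Even at a modest 25% open rate, the bursty nature of opens pushes the data layer close to 20,000 requests per second at peak.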
Scale: Utilizing native PostgreSQL functionality in conjunction with various caching and storage mechanisms, Engage was built to quickly ingest data and make it available to as many consumers as necessary. Engage attacks the typical issues of scale in two ways:
- Ingestion: Loading real-time data at scale from one data store to another would typically place heavy strain on the user's database. With that in mind, Engage utilizes a system of incremental refreshes to keep data up to date: after an initial large data upload, only records changed or added within a rolling 15-minute window are selected from the user's database into the target Engage data store. This reduces strain while ensuring that data stays up to date. On top of that, data loading into Engage is built with parallelization in mind, ensuring that data can be loaded into Engage as quickly as possible.
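The incremental-refresh idea can be sketched in a few lines. This is an illustrative example only; the table name, column names, and upsert logic here are hypothetical stand-ins, not Engage internals:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical sketch of an incremental refresh: after an initial full load,
# each cycle selects only rows changed within a rolling 15-minute window and
# upserts them into the target store, rather than re-copying everything.

REFRESH_WINDOW = timedelta(minutes=15)

def incremental_refresh(source: sqlite3.Connection,
                        target: sqlite3.Connection,
                        now: datetime) -> int:
    """Copy rows changed in the last 15 minutes; return how many moved."""
    cutoff = (now - REFRESH_WINDOW).isoformat()
    changed = source.execute(
        "SELECT id, payload, updated_at FROM customers WHERE updated_at >= ?",
        (cutoff,),
    ).fetchall()
    target.executemany(
        "INSERT INTO customers (id, payload, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET payload = excluded.payload, "
        "updated_at = excluded.updated_at",
        changed,
    )
    target.commit()
    return len(changed)
```

The key design point is that the source database only ever answers a narrow, indexed query over recently changed rows, keeping load on it light regardless of total table size.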
- Consumption: To avoid the traditional issues that many enterprise tools encounter, Engage was built to fundamentally separate the processes of data consumption and loading. With elastic resources for querying and fetching data, and dynamic caching mechanisms to ensure that new data is always available, Engage effectively ensures that every request for data will be serviced successfully.
Engage was built to quickly ingest data and make it available to as many consumers as necessary.
Access: To ensure that Engage can store and offer data of any type, we’ve designed a mechanism to store data as simple key-and-value JSON data payloads. By exposing data back to the user as a simple key-and-value JSON response, we’ve ensured that:
- All data is normalized to a similar response structure — whether you’re requesting a single record or up to 25 at a time.
- Data requests from different record sets are honored in the same request, allowing users to effectively join and hold data of any kind within Engage.
- Because of the serialized JSON structure Engage returns, keys within the same data store can have wildly different values, so records don't need to conform to a standard schema or data type.
- Because we worked with Movable Ink on our design and implementation, any Engage data set is easily converted to a Movable Ink data source for real-time presentation.