The Data-Centric Revolution: Data-Centric vs. Centralization

We just finished a conversation with a client who was justifiably proud of having centralized what had previously been a very decentralized business function (in this case, it was HR, but it could have been any of a number of functions). They had seemingly achieved many of the benefits of becoming data-centric through centralization: all their data in one place, a single schema (data model) to describe the data, and dozens of decommissioned legacy systems.

We decided to explore whether this was data-centric and the desirable endgame for all their business functions.

A quick review. This is what a typical application looks like:

The metadata is the key. The application, the business logic and the UI are coded to the metadata (Schema), and the data is accessed through and understood by the metadata. What happens in every large enterprise (and most small ones) is that different departments or divisions implement their own applications.


Many of the applications were purchased, and today, some are SaaS (Software as a Service) or built in-house. What they all fail to share is a common schema. The metadata is arbitrarily different and, as such, the code base on top of the metadata is different, so there is no possibility of sharing between departments. Systems integrators try to work out what the data means and piece it together behind the scenes. This is where silos come from. Most large firms don’t have just four silos, they have thousands of them.

One response to this is “centralization.” If you discover that you have implemented, let’s say, dozens of HR systems, you may think it’s time to replace them with one single centralized HR system. And you might think this will make you Data-Centric. And you would be, at least, partially right.

Recall one of the litmus tests for Data-Centricity:

Let’s take a deeper look at the centralization example.


Centralization replaces a lot of siloed systems with one centralized one. This achieves several things. It gets all the data in one place, which makes querying easier. All the data conforms to the same schema (a single shared model). Typically, if this is done with traditional technology, this is not a simple model, nor is it extensible or federate-able, though there is some progress.

The downside is that everyone now must use the same UI and conform to the same model, and that’s the tradeoff.


The tradeoff works pretty well for business domains where the functional variety from division to division is slight, or where the benefit to integration exceeds the loss due to local variation.  For many companies, centralization will work for back office functions like HR, Legal, and some aspects of Accounting.

However, in areas where the local differences are what drives effectiveness and efficiency (sales, production, customer service, or supply chain management) centralization may be too high a price to pay for lack of flexibility.

Let’s look at how Data-Centricity changes the tradeoffs.

Click here to read more on TDAN.com

Does your Data Spark Joy? Part 1

Why is Marie Kondo so popular for home organization?

Marie Kondo released her book, "The Life-Changing Magic of Tidying Up," almost ten years ago and has since gained much renown for motivating millions of people to de-clutter their homes, offices, and lives. Some people are literally buried in their possessions with no clear way to get from room to room.  Others simply struggle to get out the door in the morning because their keys, wallet, and phone play a daily game of hide-and-seek. Whatever the underlying cause of this overwhelm, Marie Kondo offers a simple, clear method for getting stuff under control. Not only that, but she promises that tidying up will clear the spaces in our lives, leaving room for peace and joy.

Why does this method apply to Data-Centric Architecture?

You might be wondering what this has to do with data-centric architecture.  In many ways the Marie Kondo method is easily extrapolated out of the realm of physical possessions and applied to virtual things: bits of data, documents, data storage containers, etc. In the world of information and data, it’s not surprising that people have seen parallels between belongings and data.  That said, it’s not enough to just say that new applications, storage methods, or business processes will solve the problems of information overload, data silos, or dirty data.  Instead, it’s important to examine your company’s data and the business that data serves.

Overarching Data-Centric Principles

For most businesses and agencies, data is essential to function and is ensconced in legal requirements and data lifecycle policy.  It simply isn't realistic to say, "Throw it all out!"  Instead, the principles behind acquiring, using, storing, and eventually discarding things must be understood.  And in the virtual space, we can understand "things" to be data, metadata, and systems.

Her Method Starts with “Why?”

In her book, Marie Kondo says, “Before you start, visualize your destination.”  And she expands on this, asking readers to think deeply about the question and visualize the outcome of having a tidy space: “Think in concrete terms so that you can vividly picture what it would be like to live in a clutter-free space.” Our clients will often engage us with some ideal data situation in mind.  It might be expressed in terms of requirements or use cases, but it often has to do with being able to harmonize and align data, do large-scale systems integration, or add more sophisticated querying capabilities to existing databases or tools.  In fact, the first steps of our client engagements have to do with developing these questions into statements of work.

Also, we encourage clients to envision their data and what it can tell them independently of applications, systems, and capabilities precisely to avoid the pitfall of thinking in terms of using new tools to solve undefined problems.  It’s uncanny that this method of interrogation into underlying motivations is common between data-centric development and spark-joy tidying up.

Her Method is About the Psychology of Belongings.

It is important to understand how organizations come to have their data.  In the US Government, entire programs are devoted to managing acquisition. In finance, manufacturing, and other industries, the process of acquiring systems and data is often a business unto itself. It's not uncommon to hear people working with data refer to "data silos" when talking about partitioned and disconnected collections of data.  Sometimes this data is shuffled into classified folders and proprietary systems unnecessarily, simply because someone wants to retain control of it. In my work at the Federal Government, I found the process of determining the system of record to be intensely political and time-consuming.  It is neither trivial nor simple, but it is essential to the effort of tidying your data-centric environment.

Sort your Data by Category.

Marie Kondo recommends going categorically for a reason.  In her book, she talks about her process of evaluating her belongings by location, drawer by drawer, room by room, and discovering that she found herself organizing multiple drawers with the same things repeatedly.  She tells us, “The root of the problem lies in the fact that people often store the same type of item in more than one place.  When we tidy each place separately, we fail to see that we’re repeating the same work in many locations and become locked into a vicious circle of tidying.” If this doesn’t sound familiar, you aren’t even working with data.

For me, this principle became clear when I gathered all my office supplies in one place. I was astounded by the small mountain of binder clips (and Sharpies) that seemed to materialize out of nowhere. I always seem to be looking for binder clips and Sharpies, so I was shocked by how many I had.

I can think of no closer parallel than the proliferation of siloed systems that appear in each department within an agency.  When I worked for a government agency, I was part of a team whose job it was to survey the offices to find out who was using flight data.  There were several billion-dollar systems in development and in maintenance that held flight data. Over the course of a few years, I would hear quotes about the agency-wide number of flight data systems go from 15 systems, to 20, to 30, and beyond.  It literally became an inside-joke with leadership. And at times, we would hear rumors about some small branch office that had their own Microsoft Access database to keep track of their own data, because they couldn’t get what they needed from the systems of record.  Systems are like the binder clips of enterprise data, except that this kind of proliferation is as easy as making a copy. You don’t even need to make a trip to the office supply store to end up with a pile of duplicates.  If you want to understand how much data redundancy you have, search for specific categories of data across all systems.

Does it spark Joy? What does joy mean in the context of systems and data?

How do you know what sparks joy?  First, look at how the principle of looking for joy is applied.  Presumably, you are in your line of business because on some level it brings you joy – joy that derives from fulfilling a purpose.  Remember the first step of understanding why you are embarking on a transformative process and go back to what you envisioned.  Another way that you can look at joy is whether your space and the things in it allow for that spark to happen.  Ideally, you remove the items from your space that hinder that spark, after acknowledging the lessons they’ve taught you.  Do you feel that spark of joy when you grab your keys in the morning on the way out the door? If you’ve ever tried to find misplaced keys while you’re in a rush, you know the antithesis of joy. Having done the work of creating a space where your keys are easy to find is a way of facilitating joy in your morning routine.

One of the supposed failures of the Marie Kondo method as it applies to data clutter is that it is impossible to physically hold, or even look at, every single piece of data in your system.  Again, rely on the principle behind her method, which is that it is important to be thorough and aim for an environment that facilitates ease and joy.  Don’t say, “We can’t delete any personnel data!” and quit.  Commit to taking an inventory of your personnel systems and the systems that use personnel data. If that process reveals that you have ten different personnel systems and personnel data scattered in several other systems, you must take a closer look at your data environment.  At one point in my physical de-cluttering, I found a tin full of paper clips.  I didn’t handle each shiny paper clip individually; rather, I acknowledged the paper clips served me when I printed more documents onto paper, and since I no longer had a printer, I decided to toss them into the recycling bin.

Remember why you’re considering a solution to data problems in the first place and make a commitment to doing the work of determining your real data needs. Purpose is key, because the way data sparks joy is by enabling you to fulfil that purpose.  This can be difficult where the work you do is abstract and somewhat removed from business that is easy to understand.  However, the critical point to knowing whether or not the data in front of you serves its purpose in your business is to fully understand your business.

Discard and Delete your Data

Take a wardrobe full of clothing, for example.  Many of Marie Kondo's clients are surprised when they start organizing their wardrobes: surprised by the amount of clothing that is unserviceable, the number of items that still have tags on them, the number of hand-me-downs or gifts that don't suit them, and so on. These items are sometimes difficult to discard for several reasons:

  • It’s kept out of obligation to the giver.
  • It cost a lot of money to buy it.
  • It’s still in good repair.
  • It might be the perfect thing to wear at an unspecified event in the future.
  • It reminds you of the lovely event at which you wore it.
  • It reminds you of the person who left it with you.

It may seem far-fetched to apply these reasons to data storage, but a quick glance through failed data projects will show you otherwise.  Consider the proprietary data locked in a system owned by a vendor for which your license has lapsed, or the system coded in an outdated language whose expert programmers have to be called out of retirement to access it, or that directory of data that doesn't really match the fields in your database but that you requested through a complex data-sharing agreement with another agency.  If you can't think of an example of a system that has been paid for but hasn't been used, just consider that the terms shelfware and vaporware exist. It's easy to be cynical about data precisely because of the overlaps between why we keep things in our closets and garages, and why we keep systems and data in our repositories. When you consider these parallels and understand the principles behind evaluating the items you keep with the hope that they will make your life better, sparking joy becomes easier.

Storage experts are hoarders.

Marie Kondo says you don’t need more storage.  That new Cloud service that can take all the old databases you have and make them accessible is not going to solve your problem. Data storage is expensive, and you do not need a new data storage solution.  What you need is to understand your business process, the business need for the data you believe you have, and a disposition plan for everything else.

How do you start?

In summary, if you’re looking for smart data-centric solutions to help you manage an overwhelming amount of data, or you’re looking for ways to access your vast stores of data in a way that enables smarter business solutions, your bigger issue might be data hoarding.  Looking at your business needs, closely examining the data you have, and coming up with strategies for aligning your data to a manageable data lifecycle can seem overwhelming.  Using a data-centric approach will bring that dream into focus. Keep an eye out for part two of this series to learn how to get your data to spark joy for you.

Click Here to Read Part 2 of this Series

The Data-Centric Revolution: The Sky is Falling (Let’s Make Lemonade)

Recently IDC predicted that IT spending will drop by 5% due to the COVID-19 pandemic.[1] Last week, Gartner went further by predicting that IT spending would drop by 8%, or $300 billion.[2] (Expect a prediction bidding war.) Both were consistent: the hardest-hit areas would be devices, followed by IT services and enterprise software.

The predicted $100 billion drop in those last two categories should send chills through those of us who make our living in them. And keep in mind, this drop will occur in the latter half of this year. To date, there have been very few cuts.

But I’m seeing the glass half full here. Half full of lemonade.[3]

Here is my thought process:

  • For at least five years, we have been advocating abandoning the senseless implementation of application after application. (You know: the silo-making industry.) We made a strong case for avoiding the application-centric quagmire in Software Wasteland.[4]
  • And yet spending on implementing application systems has continued unabated since 2015.
  • With the need to slash budgets in the latter half of 2020, large application implementation projects will be the easiest targets.
  • Indeed, the IDC article says that “IT services spending will also decline, mostly due to delays in large projects.”
  • Furthermore, “some firms will cut capital spending and others will either delay new projects or seek to cut costs in other ways.”
  • Gartner reported that “some companies are cutting big IT projects altogether; others are ploughing ahead but delaying some elements of their plans to save money.”
  • Hershey has halted sections of a new ERP system and will drop IT capital spending from the budgeted $500 million to between $400 and $450 million.
  • Gartner also stated that “health care systems [are] pushing out projects to create digital health records by six months or more.”

This would be a terrible time to be an application software vendor or a systems integrator. The yearly 7% reductions in both categories are still in front of us. Any contract not yet signed will be put on hold. Even contracts in progress may get cancelled.

Click here to read more on TDAN.com

The Data-Centric Hospital

Why software change is not as graceful

The Graceful Adaptation of St Frances Xavier Cabrini Hospital since 1948.


This post has been updated to reflect the current coronavirus crisis. Joe Pine and Kim Korn, authors of Infinite Possibility: Creating Customer Value on the Digital Frontier, say the coronacrisis will change healthcare for the better. Kim points out that, although it is 20 years late, it is good to see healthcare dipping a toe into the 21st century. However, I think we have a long way to go.

Dave McComb, in his book 'The Data-Centric Revolution: Restoring Sanity to Enterprise Information Systems', suggests that buildings change more gracefully than software does. He uses Stewart Brand's model, which shows how buildings can change after they are built.

Graceful Change

I experienced this graceful change during the 19 years that I worked at Cabrini Hospital Malvern. The buildings changed whilst still serving their customers. To outsiders, it may have appeared seamless. To insiders charged with changing the environment, it took constant planning and endless negotiation with internal stakeholders and the surrounding community.

Geoff Fazakerley, the director of buildings, engineering, and diagnostic services, orchestrated most of the activities, following the guiding principle of the late Sister Irma Lunghi MSC: "In all that we must first keep our eyes on our patients' needs."

Geoff took endless rounds with his team to assess where space could be allocated for temporary offices and medical suites. He then negotiated with those who were impacted to move to temporary locations. On several occasions, space for non-patient services had to be found outside the hospital so that patients would not be inconvenienced. The building as it now stands bears testament to how Geoff's team and the architects managed this graceful change.

Why software change is not as graceful

Most enterprise software changes are difficult. In 'Software Wasteland: How the Application-Centric Mindset is Hobbling our Enterprises', McComb explains that focusing on the application as the foundation or starting point, instead of taking a data-centric approach, hobbles agile adoption, extension, and innovation. When the application comes first, it creates a new data structure. With this approach, each business problem requires yet another application system. Each application creates and manages yet another data model. Over time, this approach leads to many more applications and many additional complex data models. Rather than improving access to information, this approach steadily erodes it.

McComb says that every year the amount of data available to enterprises doubles, while the ability to effectively use it decreases. Executives lament their legacy systems, but their projects to replace them rarely succeed. This is not a technology problem; it is more about mindset.

From personal experience, we know that some buildings adapt well and some don't. Those of us who work in large organisations also know that the same is true of enterprise software. However, one difference between buildings and software is important. Buildings are physical. They are, to use a phrase, 'set in stone'. They are situated on a physical site. You either have sufficient space or you need to acquire more. You tend to know these things even before adaptation and extension begin. With software it is different: there are infinite possibilities.

Software is different

The boundaries cannot be seen. Software is made of bits, not atoms. Hence, we experience software differently. As Joe Pine and Kim Korn explain in their book, 'Infinite Possibility: Creating Customer Value on the Digital Frontier', software exists beyond the physical limitations of time, space, and matter. With no boundaries, software provides the opportunity for infinite possibilities. But as James Gilmore says in the foreword of the book, most enterprises treat digital technology as an incremental adjunct to their existing processes. As a result, the experience is far from enriching. Instead of making the real-world experience better, software often worsens it. In hospitals, software forces clinicians to take their eyes off the patient to input data and focus on the content of their screens.

More generally, there appears to be a gap between the expectations of the end-users of enterprise software, the champions who buy it, and the software vendors themselves. The blog post by Virtual Stacks makes our thinking about software sound like a war of expectations.

The war of expectations

The people who sell software, the people who buy software, and the people who eventually use the software view things very differently:

  1. Executives in the C-Suite implement an ERP system to gain operational excellence and cost savings. They often put someone in charge of the project who doesn't know enough about ERP systems and how to manage the changes that the software demands in work practices.
  2. Buyers of an ERP system expect that it will fulfil their needs straight out-of-the-box. Sellers expect some local re-configuration.
  3. Reconfiguring enterprise software calls for a flexible budget. The budget must provide for consultants that may have to be called in. It needs to provide for additional training and more likely than not major change management initiatives.
  4. End-users have to be provided with training even before the software is launched. This is especially necessary when they have to learn new skills that are not related to their primary tasks. In hospitals, clinicians find that their workload blows out. They see themselves as working for the software rather than the other way around.

The organisational mindset

Shaun Snapp, in his blog post for Brightwork Research, points out that what keeps the application-centric paradigm alive is how IT vendors and services are configured and incentivised. There is a tremendous amount of money to be made building, implementing, and integrating applications in organisations. Or as McComb says, 'the real problem surrounds business culture'.

Can enterprise software adapt well?

The short answer is yes. However, McComb's key point is not that some software adapts well and some doesn't; it is that legacy software doesn't. Or as the quotation in Snapp's post suggests:

'The zero-legacy startups have a 100:1 cost and flexibility advantage over their established rivals. Witness the speed and agility of Pinterest, Instagram, Facebook and Google. What do they have that their more established competitors don't? It's more instructive to ask what they don't have: they don't have their information fractured into thousands of silos that must be continually "integrated" at great cost.'

Snapp goes on to say that ‘in large enterprises, simple changes that any competent developer could make in a week typically take months to implement. Often the change requests get relegated to the “shadow backlog” where they are ignored until the requesting department does the one thing that is guaranteed to make the situation worse: launch another application project.’

Adapting well

Werner Vogels, VP & CTO of Amazon, provides a good example of successful change. Use this link to read the full post.

Perhaps, as Joe Pine and Kim Korn say, the coronacrisis will change healthcare for the better. I have written about using a data-centric model to query hospital ERP systems to track and trace materials. My eBook, 'Hidden Hospital Hazards: Saving Lives and Improving Margins', can be purchased from Amazon, or you may obtain a free PDF version here.

WRITTEN BY
Len Kennedy
Author of ‘Hidden Hospital Hazards’

Structure-First Data Modeling: The Losing Battle of Perfect Descriptions

In my last article I described Meaning-First data modeling. It's time to dig into its predecessor and antithesis, which I call Structure-First data modeling, specifically looking at how two assumptions drive our actions. Assumptions are quite useful since they leverage experience without having to re-learn what is already known. It is a real time-saver.

Until it isn’t.

For nearly half a century, the eventual implementation of data management systems has consisted of various incarnations of tables-with-columns and the supporting infrastructure that weaves them into a solution. The brilliant work of Steve Hoberman, Len Silverston, David Hay, and many others in developing data modeling strategies and patterns is notable and admirable. They pushed the art and science of data modeling forward. As strong as those contributions are, they are still description-focused and assume a Structure-First implementation.

Structure-First data modeling is based on two assumptions. The first assumption is that the solution will always be physically articulated in a tables-with-columns structure. The second is that proceeding requires developing complete descriptions of subject matter. This second assumption is also on the path of either/or thinking; either the description is complete, or it is not. If it is not, then tables-with-columns (and a great deal of complexity) are added until it is complete. Our analysis, building on these assumptions, is focused on the table structures and how they are joined to create a complete attribute inventory.

The focus on structure is required because no data can be captured until the descriptive attribute structure exists. This inflexibility makes the system both brittle and complex.
All the descriptive attribution being stuffed into tables-with-columns is a parts list for the concept, but there is no succinct definition of the whole. These first steps taken on a data management journey are on the path to complexity, and since they are based on rarely articulated assumptions, the path is never questioned. The complete Structure-First model must accommodate every possible descriptive attribute that could be useful. We have studied E. F. Codd's five levels of data normalization and drive toward structural normalization. Therefore, our analysis is focused on avoiding repeating columns, multiple values in a single column, etc., rather than on what the data means.

Yet with all the attention paid to capturing all the descriptive attributes, new ones constantly appear. We know this is inevitable for any system having even a modest lifespan. For example, thanks to COVID-19, educational institutions that have never offered online courses are suddenly faced with moving exclusively to online offerings, at least temporarily. Buildings and rooms are not relevant for those offerings, but web addresses and enabling software are. Experience demonstrates how costly it is in both time and resources to add a new descriptive attribute after the system has been put into production. Inevitably something needs to be added. This happens either because something was missed or a new requirement was added. It also happens because buried in the long parts list of descriptive attributes, the same thing has been described several times in different ways. The brittle nature of tables-with-columns results in every change requiring very expensive modeling, refactoring, and regression testing to get the change into production.

Neither the tables-with-columns assumption nor the complete-descriptions assumption applies when developing semantic knowledge graph solutions using a Meaning-First data modeling approach. Why am I convinced Meaning-First will advance the data management discipline? Because Meaning-First is definitional, follows the path of both/and thinking, and rests on a single structure, the triple, for virtually everything. The World Wide Web Consortium (W3C) defined the standard RDF (Resource Description Framework) triple to enable linking data on the open web and in private organizations. The definition, articulated in RDF triples, captures the essence to which new facts are linked. Semantic technologies provide a solid, machine-interpretable definition and the standard RDF triple as the structure. Since there is no need to build new structures, new information can be added instantly. By simply dropping new information into the database, it automatically links to existing data right away.
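Here is a minimal sketch of that point, using the open-source rdflib library in Python; the namespace, class, and property names are hypothetical, invented only for illustration. The new "online delivery" fact from the course example above is asserted without any schema change:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("https://example.org/")  # hypothetical vocabulary
g = Graph()

# Existing data: a course offering modeled when rooms were all that mattered.
g.add((EX.Course101, RDF.type, EX.CourseOffering))
g.add((EX.Course101, EX.heldInRoom, Literal("Building A, Room 12")))

# A new requirement arrives (online delivery). No ALTER TABLE, no migration:
# simply assert the new triples and they sit alongside the existing data.
g.add((EX.Course102, RDF.type, EX.CourseOffering))
g.add((EX.Course102, EX.deliveredAt, Literal("https://meet.example.org/course102")))

# Both the old and the new facts are immediately queryable together.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```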

While meaning and structure are separate concepts, we have been conflating them for decades, resulting in unnecessary complexity. Humankind has been formalizing the study of meaning since Aristotle and has been making significant progress along the way. Philosophy’s formal logics are semantics’ Meaning-First cornerstone. Formal logics define the nature of whatever is being studied such that when something matches the formal definition, it can be proved that it is necessarily in the defined set. Semantic technology has enabled machine-readable assembly using formal logics. An example might make it easier to understand.

Consider a requirement to know which teams have won the Super Bowl. How would each approach solve this requirement? The required data is:
• Super Bowls played
• Teams that played in each Super Bowl
• Final scores
Data will need to be acquired in both cases and is virtually the same, so this example skips over those mechanics to focus on differences.

A Structure-First approach might look something like this. First, create a conceptual model with the table structures and their columns to contain all the relevant team, Super Bowl, and score data. Second, create a logical model from the conceptual model that identifies the logical designs that will allow the data to be connected and used. This requires primary and foreign key designs, logical data types and sizes, as well as join structures for assembling data from multiple tables. Third, create a physical model from the logical model to capture the storage strategy and incorporate vendor-specific implementation details.
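A minimal sketch of the Structure-First path, using Python's built-in sqlite3 module; the table and column names are hypothetical. Notice that the tables must exist before a single row can land, and the winner question is answered by navigating joins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Structure first: no data can be captured until the tables exist.
cur.execute("CREATE TABLE team (team_id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
cur.execute("CREATE TABLE super_bowl (bowl_id INTEGER PRIMARY KEY, numeral TEXT NOT NULL)")
cur.execute("""CREATE TABLE result (
    bowl_id INTEGER REFERENCES super_bowl(bowl_id),
    team_id INTEGER REFERENCES team(team_id),
    score   INTEGER NOT NULL)""")

# Only now can rows be inserted.
cur.executemany("INSERT INTO team VALUES (?, ?)", [(1, "Chiefs"), (2, "49ers")])
cur.execute("INSERT INTO super_bowl VALUES (1, 'LIV')")
cur.executemany("INSERT INTO result VALUES (?, ?, ?)", [(1, 1, 31), (1, 2, 20)])

# Winner = the team with the highest score in each Super Bowl,
# assembled by joining three tables.
cur.execute("""
    SELECT sb.numeral, t.name
    FROM result r
    JOIN super_bowl sb ON sb.bowl_id = r.bowl_id
    JOIN team t        ON t.team_id  = r.team_id
    WHERE r.score = (SELECT MAX(score) FROM result r2 WHERE r2.bowl_id = r.bowl_id)
""")
print(cur.fetchall())
```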

Only at this point can the data be entered into the Structure-First system. This is because until the structure has been built, there is no place for the data to land. Then, unless you (the human user) know the structure, there is no way to get data back out. However, this isn’t true when using Meaning-First semantic technology.

A Meaning-First approach can start either by acquiring well-formed triples or building the model as the first step. The model can then define the meaning of “Super Bowl winner” as the team with the highest score for each Super Bowl occurrence. Semantic technology captures the meaning using formal logics, and the data that match that meaning self-assemble into the result set. Formal logics can also be used to infer which teams might have won the Super Bowl using the logic “in order to win, the team must have played in the Super Bowl,” and not all NFL teams have.
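For contrast, here is a sketch of the Meaning-First version using rdflib; the vocabulary is hypothetical, and a production model would use a richer ontology. The winner is defined by meaning (the score that no other score in the same game exceeds) rather than by table navigation:

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("https://example.org/nfl/")  # hypothetical vocabulary
g = Graph()

# Acquire well-formed triples: each team's score in a given Super Bowl.
scores = [("LIV", "Chiefs", 31), ("LIV", "FortyNiners", 20)]
for bowl_name, team_name, points in scores:
    bowl, team = EX[bowl_name], EX[team_name]
    result = EX["result-" + bowl_name + "-" + team_name]
    g.add((bowl, RDF.type, EX.SuperBowl))
    g.add((result, EX.game, bowl))
    g.add((result, EX.team, team))
    g.add((result, EX.score, Literal(points, datatype=XSD.integer)))

# "Super Bowl winner" = the team whose score is not beaten by any other
# score in the same game. No table structure has to be navigated.
WINNERS = """
SELECT ?bowl ?team WHERE {
    ?r  ex:game ?bowl ; ex:team ?team ; ex:score ?s .
    FILTER NOT EXISTS {
        ?r2 ex:game ?bowl ; ex:score ?s2 .
        FILTER (?s2 > ?s)
    }
}
"""
for row in g.query(WINNERS, initNs={"ex": EX}):
    print(row.bowl, row.team)
```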

The key is that in the Meaning-First example, members of the set called Super Bowl winners can be returned without identifying the structure in the request. The Structure-First example required understanding and navigating the structure before even starting to formulate the question. It’s not so hard in this simple example, but in enterprise data systems with hundreds, or more likely thousands, of tables, understanding the structure is extremely challenging.

Semantic Meaning-First databases, known as triplestores, are not a collection of tables-with-columns. They are composed of RDF triples that are used for both the definitions (schema in the form of an ontology) and the content (data). As a result, you can write queries against an RDF data set that you have never seen and get meaningful answers. Queries can return what sets have been defined. Queries can then find when the set is used as the subject or the object of a statement. Semantic queries simply walk across the formal logic that defines the graph, letting the graph itself inform you about possible next steps. This isn't an option in Structure-First environments because they are not based in formal logic and the schema is encapsulated in a different language from the data.
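Two generic discovery queries illustrate the point; they work against any RDF graph because the schema and the data share the same triple form. The small Turtle snippet standing in for an unfamiliar dataset is hypothetical:

```python
from rdflib import Graph

# A tiny stand-in for a dataset you have never seen before.
unknown_data = """
@prefix ex: <https://example.org/> .
ex:SuperBowlLIV a ex:SuperBowl .
ex:Chiefs a ex:Team ; ex:wonGame ex:SuperBowlLIV .
"""
g = Graph()
g.parse(data=unknown_data, format="turtle")

# What kinds of things are in here?
for row in g.query("SELECT DISTINCT ?class WHERE { ?s a ?class }"):
    print("class:", row[0])

# What can be said about them?
for row in g.query("SELECT DISTINCT ?p WHERE { ?s ?p ?o }"):
    print("predicate:", row[0])
```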

Traditional Structure-First databases are made up of tens to hundreds, often thousands of tables. Each table is arbitrarily made up and named by the modeler with the goal to contain all attributes of a specific concept. Within each table are columns that are also made up, again hopefully with lots of rigor, but made up. You can prove this to yourself by looking at the lack of standard definitions around simple concepts like address. Some will leverage modeling patterns, some will leverage standards like USPS, but the variability between systems is great and arbitrary.

Semantic technology has enabled the Meaning-First approach with machine-readable definitions to which new attribution can be added in production. At the same time this clarity is added to the data management toolkit, semantic technology sweeps away the nearly infinite collection of complex table-with-column structures with the one single, standards-based RDF triple structure. Changing from descriptive to definitional is orders of magnitude clearer. Replacing tables and columns with triples is orders of magnitude simpler. Combining them into a single Meaning-First semantic solution is truly a game changer.

Graph Database Superpowers: Unraveling the back-story of your favorite graph databases

The graph database market is very exciting, as the long list of vendors continues to grow. You may not know that there are huge differences in the origin story of the dozens of graph databases on the market today. It’s this origin story that greatly impacts the superpowers and weaknesses of the various offerings.

While Superman is great at flying and stopping locomotives, you shouldn’t rely on him around strange glowing metal. Batman is great at catching small-time hoods and maniacal characters, but deep down, he has no superpowers other than a lot of funding for special vehicles and handy tool belts. Superman’s origin story is much different than Batman’s, and therefore the impact they have on the criminal world is very different.

This is also the case with graph databases. The origin story absolutely makes a difference when it comes to strengths and weaknesses. Let’s look at how the origin story of various graph databases can make all the difference in the world when it comes to use cases for the solutions.

Graph Database Superhero: RDF and SPARQL databases

Examples: Ontotext, AllegroGraph, Virtuoso and many others

Origin Story: Short for Resource Description Framework, RDF is a decades-old data model whose origins trace back to Tim Berners-Lee. The thought behind RDF was to provide a data model that allows the sharing of data, similar to how we share information on the internet. Technically, this is the classic triple store with subject-predicate-object.

Superpower: Semantic Modeling. Basic understanding of concepts and the relationships between those concepts. Enhanced context with the use of ontology. Sharing data and concepts on the web. These databases often support OWL and SHACL, which help with the process of describing what the data should look like and the sharing of data like we share web pages.
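As a rough illustration of "describing what the data should look like," here is a small SHACL check run with the pyshacl package in Python; the shape, class, and property names are hypothetical, and real deployments would usually run such checks inside the triplestore itself:

```python
from rdflib import Graph
from pyshacl import validate  # assumes the pyshacl package is installed

shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <https://example.org/> .
ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [ sh:path ex:name ; sh:minCount 1 ] .
"""

data_ttl = """
@prefix ex: <https://example.org/> .
ex:Sue a ex:Person .          # no ex:name, so this should not conform
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _report_graph, report_text = validate(data, shacl_graph=shapes)
print(conforms)
print(report_text)
```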

Kryptonite: The original RDF specification did not account for properties on predicates very well. So, for example, if I wanted to specify WHEN Sue became a friend of Mary, or the fact that Sue is a friend of Mary according to Facebook, handling provenance and time may be more cumbersome. Many RDF databases added quad-store options where users could handle provenance or time, and several are adding the new RDF* specification to overcome these shortcomings. More on this in a minute.
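A sketch of the quad-store workaround mentioned above, again with rdflib and hypothetical names: the friendship fact goes into a named graph, and the provenance and time statements are made about that graph rather than about the bare triple:

```python
from rdflib import Dataset, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("https://example.org/")          # hypothetical vocabulary
CLAIM = URIRef("https://example.org/claims/1")  # identifier for the named graph

ds = Dataset()

# The base fact lives in its own named graph (the "quad-store option").
claim_graph = ds.graph(CLAIM)
claim_graph.add((EX.Sue, EX.isFriendOf, EX.Mary))

# Provenance and time are stated about that named graph, in a metadata graph.
meta = ds.graph(URIRef("https://example.org/metadata"))
meta.add((CLAIM, EX.assertedBy, EX.Facebook))
meta.add((CLAIM, EX.since, Literal("2015-06-01", datatype=XSD.date)))

# One query can ask for the fact, who asserted it, and since when.
q = """
SELECT ?who ?since WHERE {
    GRAPH ?claim { ex:Sue ex:isFriendOf ex:Mary }
    GRAPH ?meta  { ?claim ex:assertedBy ?who ; ex:since ?since }
}
"""
for row in ds.query(q, initNs={"ex": EX}):
    print(row.who, row.since)
```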

Many of the early RDF stores were built on transactional architectures, so they scaled somewhat to handle transactions but had size restrictions when performing analytics on many triples.

It is in this category that the vendors have had some time to mature. While the origins may be in semantic web and sharing data, many have stretched their superpowers with labeled properties and other useful features.

Graph Database Superhero: Labeled Property graph with Cypher

Example: Neo4j

Origin Story: LPG is short for labeled property graph, and the premier player in the LPG space was and is Neo4j. According to podcasts and interviews with the founder, the original thought was more about managing content on a website, where taxonomies gave birth to many-to-many relationships. Neo4j developed its new type of system to support its enterprise content management team. So, when you needed to search across your website for certain content (for example, when a company changes its logo), the LPG kept track of how these assets were connected. This is offered as an alternative to the JOIN table in an RDBMS that holds foreign keys of both participating tables, which is extremely costly in traditional databases.

Superpower: Although the origin story is about website content taxonomies, it turns out that these types of databases are also pretty good for 360-degree customer view applications and understanding multiple supply chain systems. Cypher, although not a W3C or ISO standard, has become a de facto standard language as the Cypher community has grown with Neo4j's efforts. Neo4j has also been an advocate of the new upcoming GQL standard, which may result in a more capable Cypher language.
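To make the 360-degree customer view concrete, here is a small sketch using the official Neo4j Python driver; the connection details, labels, relationship types, and property names are all hypothetical and would need to match your own graph model:

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

# Hypothetical connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

# Hypothetical model: customers place orders, orders contain products.
CUSTOMER_360 = """
MATCH (c:Customer {customerId: $cid})-[:PLACED]->(o:Order)-[:CONTAINS]->(p:Product)
RETURN c.name AS customer, o.placedOn AS placedOn, collect(p.name) AS products
"""

with driver.session() as session:
    for record in session.run(CUSTOMER_360, cid="C-1001"):
        print(record["customer"], record["placedOn"], record["products"])

driver.close()
```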

Kryptonite: Neo4j has built its own system from the ground up on a transactional architecture. Although some scaling features have recently been added in Neo4j version 4, the approach is more about federating queries than an MPP approach. In version 4, the developers have added manual sharding and a new way to query sharded clusters. This requires extra work when sharding and writing your queries. It is similar to the approach of transactional RDF stores, where SPARQL 1.1 supports federated queries through a SERVICE clause. In other words, you may still encounter limits when trying to scale and perform analytics. Time will tell if the latest federated approach is scalable.

Ontologies and inferencing are not standard features with a property graph, although some capability is offered here with add-ons. If you’re expecting to manage semantics in a property graph, it’s probably the wrong choice.

Graph Database Superhero: Proprietary Graph

Example: TigerGraph

Origin Story: According to their website, when the founders of TigerGraph decided to write a database, one of them was working at Twitter on a project that needed larger-scale graph algorithms than Neo4j could offer. TigerGraph devised a completely new architecture for the data model and storage, even creating its own query language for the graph.

Superpowers: Through TigerGraph, the market came to appreciate that graph databases could run on a cluster. Although certainly not the first to run on a cluster, its focus was on real power: running end-user-supplied graph algorithms on a lot of data.

Kryptonite: The database decidedly went its own way with regard to standards. Shortcomings are apparent in the simplicity of leveraging ontologies, performing inferencing, and making use of people who already know SPARQL or Cypher. By far the biggest disadvantage of this proprietary graph is that you have to think more about schema and JOINs prior to loading data. The schema model is more reminiscent of a traditional database than any of the other solutions on the market. While it may be a solid solution for running graph algorithms, if you're creating a knowledge graph by integrating multiple sources and you want to run BI-style analytics on that knowledge graph, you may have an easier time with a different solution.

It is interesting to note that although TigerGraph's initial thinking was to beat Neo4j at proprietary graph algorithms, TigerGraph has teamed up with the Neo4j folks and is in the early stages of making its proprietary language a standard via ISO, alongside SQL. Although TigerGraph releases many benchmarks, I have yet to see it release benchmarks for TPC-H or TPC-DS, the standard BI-style analytics benchmarks. Also, due to a non-standard data model, harmonizing data from multiple sources requires some extra legwork and thought about how the engine will execute analytics.

Graph Database Superhero: RDF Analytical DB with labeled properties

Example: AnzoGraph DB

Origin Story: AnzoGraph DB was the brainchild of former Netezza/ParAccel engineers who designed MPP platforms like Netezza, ParAccel, and Redshift. They became interested in graph databases, recognizing that there was a gap in perhaps the biggest category of data, namely data-warehouse-style data and analytics. Although companies making transactional graph databases covered a lot of ground in the market, there were very few analytical graph databases that could follow standards, perform graph analytics, and leverage ontologies and inferencing for improved analytics.

Superpowers: Cambridge Semantics designed a triple store that both followed standards and could scale like a data warehouse. In fact, it was the first OLAP MPP platform for graph, capable of analytics on a lot of triples. It turns out that this is the perfect platform for creating a knowledge graph, facilitating analytics built from a collection of structured and unstructured data. The data model helps users load almost any data at any time.

Because of its schemaless nature, the data can be sparsely populated. It supports very fast in-memory transformations, so data can be loaded and cleansed later (ELT). Because metadata and instance data live together in the same graph without any special effort, all those ELT queries become much more flexible, iterative, and powerful. With an OLAP graph like AnzoGraph DB, you can add any subject, predicate, object, or property at any time without having to make a plan to do so.

In traditional OLAP databases, you can have views. In this new type of database, you can have multi-graphs that can be queried as one graph when needed.

Kryptonite: Although this database is ACID compliant, other solutions on the market might support faster transactions due to the OLAP nature of its design. Ingestion of massive amounts of transactions might require additional technologies, like Apache Kafka, to ingest smoothly in high-transaction environments. Like many warehouse-style technologies, data loading is very fast, and therefore batch loads are very fast. Pairing an analytical database with a transactional database is also sometimes a solution for companies that have both high transaction volumes and deep analytics to perform.

Other types of “Graph Databases”

A few other types of graph databases also have some graph superpowers. Traditional database vendors have recognized that graph can be powerful and have offered a graph model in addition to their native model. For example, Oracle has two offerings: you can buy an add-on package that offers geospatial and graph, and the company also offers an in-memory graph that is separate from traditional Oracle.

You can get graph database capabilities in an Apache Hadoop stack with GraphFrames. GraphFrames works on top of Apache Spark. Given Spark's capability to handle big data, scaling is a superpower. However, given that your requirements might lead you to layering technologies, tuning a combination of Spark, HDFS, YARN, and GraphFrames could be the challenge.
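A minimal GraphFrames sketch in PySpark, assuming a Spark session with the graphframes package available (for example, launched with the appropriate --packages coordinate); the vertex and edge data are invented for illustration:

```python
from pyspark.sql import SparkSession
from graphframes import GraphFrame  # assumes the graphframes package is available

spark = SparkSession.builder.appName("graphframes-sketch").getOrCreate()

# GraphFrames expects an "id" column for vertices and "src"/"dst" for edges.
vertices = spark.createDataFrame(
    [("a", "Alice"), ("b", "Bob"), ("c", "Carol")], ["id", "name"])
edges = spark.createDataFrame(
    [("a", "b", "knows"), ("b", "c", "knows")], ["src", "dst", "relationship"])

g = GraphFrame(vertices, edges)

# Simple graph analytics at Spark scale.
g.inDegrees.show()
g.pageRank(resetProbability=0.15, maxIter=5).vertices.show()
```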

The other solutions give you a nice taste of graph functionality in a solution that you probably already have. The kryptonite here is usually about performance when scaling to billions or trillions of triples and then trying to run analytics on said triples.

The Industry is full of Ironmen

Ironman Tony Stark built his first suit out of scrap parts when he was captured by terrorists and forced to live in a cave. It had many vulnerabilities, but it served its one purpose: to get the hero to safety. Later, the Ironman suit evolved to be more powerful, deploy more easily and think on its own. The industry is full of Tony Starks who will evolve the graph database.

However, while evolution happens, remember that graph databases aren’t one thing.

"Graph database" is a generic term, but it simply doesn't give you the level of detail you need to understand which problem a given product solves. The industry has developed various methods of doing the critical tasks that drive value in this category we call graph databases. Whether it's harmonizing diverse data sets, performing graph analytics, performing inferencing, or leveraging ontologies, you really have to think about what you'd like to get out of the graph before you choose a solution.

WRITTEN BY

Steve Sarsfield

VP Product, AnzoGraph (AnzoGraph.com). Formerly from IBM, Talend and Vertica. Author of the book The Data Governance Imperative.

The Data-Centric Revolution: The Role of SemOps (Part 2)

In our previous installment of this two-part series, we introduced a few ideas.

First, data governance may be more similar to DevOps than first meets the eye.

Second, the rise of Knowledge Graphs, Semantics and Data-Centric development will bring with it the need for something similar, which we are calling "SemOps" (Semantic Operations).

Third, when you peel back what people are doing in DevOps and Data Governance, we get down to five key activities that will be very instructive in our SemOps journey:

  1. Quality
  2. Allowing/ “Permission-ing”
  3. Predicting Side Effects
  4. Constructive
  5. Traceability

We’ll take up each in turn and compare and contrast how each activity is performed in DevOps and Data Governance to inform our choices in SemOps.

But before we do, I want to cover one more difference: how the artifacts scale under management.

Code

There isn’t any obvious hierarchy to code, from abstract to concrete or general to specific, as there is in data and semantics.  It’s pretty much just a bunch of code, partitioned by silos. Some of it you bought, some you built, and some you rent through SaaS (Software as a Service).

Each of these silos represents, often, a lot of code.  Something as simple as QuickBooks is 10 million lines of code.  SAP is hundreds of millions.  Most in-house software is not as bloated as most packages or software services; still, it isn't unusual to have millions of lines of code in an in-house developed project (much of it is in libraries that were copied in, but it still represents complexity to be managed).  The typical large enterprise is managing billions of lines of code.

The only thing that makes this remotely manageable is, paradoxically, the thing that makes it so problematic: isolating each codebase in its own silo.  Within a silo, the developer’s job is to not introduce something that will break the silo and to not introduce something that will break the often fragile “integration” with the other silos.

Data and Metadata

There is a hierarchy to data that we can leverage for its governance.  The main distinction is between data and metadata.


There is almost always more data than metadata: more rows than columns.  But in many large enterprises there is far, far more metadata than anyone could possibly guess.  We were privy to a project to inventory the metadata for a large company, which shall go nameless.  At the end of the profiling, it was discovered that there were 200 million columns under management across the firm. That is columns, not rows.  No doubt there were billions of rows in all their data.

There are also other levels that people often introduce to help with the management of this pyramid.  People often separate Reference data (e.g., codes and geographies) and Master data (slower changing data about customers, vendors, employees and products).

These distinctions help, but even as the data governance people are trying to get their arms around this, the data scientists show up with "Big Data."  Think of big data as being below the bottom of this pyramid.  Typically, it is even more voluminous and usually has only the most ad hoc metadata (the "keys" in the "key/value pairs" in deeply nested JSON data structures are metadata, sort of, but you are left guessing what these short cryptic labels actually mean).

Click here to read more on TDAN.com

DCAF 2020: Second Annual Data-Centric Architecture Forum Re-Cap

Last year, we decided to call the first annual Data-Centric Architecture Conference a Forum, which resulted in the acronym DCAF. It didn't take long for attendees to start calling the event "decaf," but they were equally quick to point out that the forum was anything but decaf. We had a great blend of presentations ranging from discussions about emerging best practices in the applied semantics profession to mind-blowing vendor demos. Our stretch goals from last year included growing the number of attendees, seeing more data-centric vendors, and exploring security and privacy. These were met and exceeded, and we're on track to set even loftier stretch goals for next year.

Throughout the Data-Centric Architecture Forum presentations, we were particularly impressed by the blockchain data security presentation by Brian Platz at https://flur.ee/. Semantic tech is an obvious choice for organizations wishing to become data centric, but we often have to rely on security frameworks that work for legacy platforms. It was exciting to see a platform that addresses security in a way that is highly compatible with semantics. They also provide a solid architecture that is consistent with the goals of the DCA, regardless of whether their clients choose to go with more traditional relational configurations, or semantic configurations.

We welcomed returning attendees from Lymba, showcasing some of the project work they’ve done while partnering with Semantic Arts. Mark Van Berkel from Schema App built an architecture based on outcomes from last year’s Data Centric Architecture Conference. It’s amazing what a small team can do in a short amount of time when they’re operating free from corporate constraints.

One of our concerns with growing the number of participants was that we would lose the energy of the room, the level of comfort in sharing ideas and networking across unspoken professional barriers (devs vs product? Not here!). Everyone was set up to learn from these presentations. The group was intimate enough that presenters could engage directly with the audience, which included developers, other vendors, and practitioners in the field of semantics. We made every effort to keep presentations on target and to keep audience participation smoothly moderated, so coffee breaks were fertile ground for discussions and networking. So much of this conversation grew organically that we at Semantic Arts decided to open virtual forums to continue the discussions.

You can join us on these channels at:
LinkedIn group
Estes Park Group

While we’re on the topic of goals, here’s what we envision for next year’s Data-Centric Architecture Forum:
• Continuing with our mindset of growth – we want to see vendors bring the clients who showcase the best the tools and products have to offer. Success stories and challenges welcome.
• Academic interests – not that this is going to be a job fair, but Fort Collins IS a college town, just sayin’. Also, to that point, how do we recruit? What does it take to be a DCAF professional? What are you (vendors and clients) looking for when you want to build teams that can work on transformative tech?
• Continuing with our mindset of transparency, learning, and vulnerability. We still have to really solve the issue of security and privacy; how do we do that when we’re all about sharing data? What are our blind-spots as a profession?

Decades, Planets and Marriage

Google ontologist Denny Vrandečić started a vigorous thread on the question of what constitutes a decade. See, for example, the article "People Can't Even Agree On When The Decade Ends". This is a re-emergence of the question from 20 years ago of whether the new millennium would start on January 1 of 2000 or 2001. This is often posed as a mathematical conundrum, and math certainly plays a role here, but I think it's more about terminology than it is about math. It reminds me of the question of whether Pluto is a planet. It is also relevant to ontologists.

The decade question is whether the 2020s did start on January 1, 2020 or will start on January 1, 2021. Denny noted that: “The job of an ontologist is to define concepts”. This is true, but ontologists often have to perform careful analysis to identify what the concepts are that really matter. Denny continued: “There are two ways to count calendar decades…”. I would put it differently and say: “The term ‘calendar decade’ is used to refer to at least two different concepts.”

At last count, there were 72 comments arguing exactly why one way or the other is correct. The useful and interesting part of that discussion centers on identifying the nuanced differences between those two different concepts. The much less interesting part is arguing over which of these concepts deserves to be blessed with the term 'calendar decade'. The latter is a social question, not an ontological question.

This brings us to Pluto. The interesting thing from an ontology perspective is to identify the various characteristics of bodies revolving around the sun, and then to identify which sets of characteristics correspond to important concepts that are worthy of having names. Finally, names have to be assigned: e.g. asteroid, moon, planet. The problem is that the term, ‘planet’, was initially used fairly informally to refer to one set of characteristics and it was later determined that it should be assigned to a different set of more precisely defined characteristics that scientists deemed to be more useful than the first. And so the term ‘planet’ now refers to a slightly different concept than it did before. The social uproar happened because the new concept no longer included Pluto.

A more massive social as well as political uproar arose in the past couple of decades around the term, ‘marriage’. The underlying ontological issues are similar. What are the key characteristics that constitute a useful concept that deserves a name? It used to be generally understood that a marriage was between a man and a woman, just like it used to be generally understood what a planet was. But our understanding and recognition of what is, should or could be, changes over time and so do the sets of characteristics that we think are deserving of a name.

The term planet was given a more restricted meaning, which excluded Pluto. The opposite was being argued in the case of marriage. People wanted a gender-neutral concept for a committed relationship; it was less restrictive. The term ‘marriage’ began to be used to include same-gender relationships.

I am aware that there are important differences between the decades, planets and marriages – but in all three cases, there are arguments about what the term should mean. Ironically and misnomeristically (if that’s a word), we refer to the worrying about what to call things as “just semantics”. Use of this phrase implies a terms-first perspective, i.e. you have a term, and you want to decide what it should mean. As an ontologist, I find it much more useful to identify the concepts first, and think of good terms afterwards. I wrote a series of blogs about this a few years ago.

What is my position on the decade question? If I was King, I would use the term ‘decade’ to refer to the set of years that start with the same 3 digits. Why? Maybe for the same reason that watching my car odometer change from 199999 to 200000 is more fun than watching it change from 200000 to 200001. The other proposed meaning for ‘calendar decade’ is not very interesting to me. So I would not bother to give it any name. But your mileage may vary.

Meaning-First Data Modeling, A Radical Return to Simplicity

Person uses language. Person speaks language. Person learns language. We spend the early years of life learning vocabulary and grammar in order to generate and consume meaning. As a result of constantly engaging in semantic generation and consumption, most of us are semantic savants. This Meaning-First approach is our default until we are faced with capturing meaning in databases. We then revert to the Structure-First approach that has been beaten into our heads since Codd invented the relational model in 1970. This blog post presents Meaning-First data modeling for semantic knowledge graphs as a replacement for Structure-First modeling. The relational model was a great start for data management, but it is time to embrace a radical return to simplicity: Meaning-First data modeling.

This is a semantic exchange: me as the writer and you as the reader. The semantic mechanism by which it all works is the subject-predicate-object construct. The subject is a noun to which the statement's meaning is applied. The predicate is the verb, the action part of the statement. The object is also generally a noun, the focus of the action. These three parts are the semantic building blocks of language and of the focus of this post: semantic knowledge graphs.

In Meaning-First semantic data models, the subject-predicate-object construct is called a triple, the foundational structure upon which semantic technology is built. Simple facts are stated with these three elements, each of which is commonly surrounded by angle brackets. The first sentence in this post is an example triple: <Person> <uses> <Language>. People will generally get the same meaning from it. Through life experience, people have assembled a working knowledge that allows us both to understand the subject-predicate-object pattern and to know what people and language are. Since computers don't have life experience, we must fill in some details for this same understanding to be reached. Fortunately, a great deal of this work has been done by the World Wide Web Consortium (W3C), and we can simply leverage those standards.
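To make this concrete, here is a minimal sketch of that triple in Turtle, a common W3C serialization of RDF. The http://example.com/ namespace and the exact IRIs are placeholders chosen for illustration, not part of any published vocabulary:

```turtle
@prefix ex: <http://example.com/> .

# One fact: subject, predicate, object, terminated by a period.
ex:Person ex:uses ex:Language .
```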

Figure 1, Triple diagram

Modeling the triple "Person uses language" with arrows and ovals, as in Figure 1, is a good start. Tightening the model by adding formal definitions makes it more robust and less ambiguous. These definitions come from gist, Semantic Arts' minimalist upper-level ontology. The subject, <Person>, is defined as "A Living Thing that is the offspring of some Person and that has a name." The object, <Language>, is defined as "A recognized, organized set of symbols and grammar." The predicate, <uses>, isn't defined in gist, but could be defined as something like "Engages with purpose." It is the action linking <Person> to <Language> to create the assertion about Person. Formal definitions for subjects and objects are useful because they are mathematically precise. They can be used by semantic technologies to reach the same conclusions as a person with working knowledge of these terms.
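gist states these definitions formally in OWL; purely as a hedged sketch, the same information can be attached as human-readable annotations using rdfs:comment. The ex: namespace is a placeholder, not the actual gist IRIs:

```turtle
@prefix ex:   <http://example.com/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Definitions attached as human-readable annotations (gist expresses them formally in OWL).
ex:Person   rdfs:comment "A Living Thing that is the offspring of some Person and that has a name." .
ex:Language rdfs:comment "A recognized, organized set of symbols and grammar." .
ex:uses     rdfs:comment "Engages with purpose." .

# The original assertion.
ex:Person ex:uses ex:Language .
```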


Surprise! This single triple is (almost) an ontology. It is almost an ontology because it contains formal definitions and is in the form of a triple. It is almost certainly the world's smallest ontology, and it is missing a few technical components, but it is a good start on an ontology all the same. The missing components come from standards published by the W3C, which won't be covered in detail here. To make certain the progression is clear, a quick checkpoint is in order. These are the assertions so far:

  • A triple is made up of a <Subject>, a <Predicate>, and an <Object>.
  • <Subjects> are always Things, e.g., something with independent existence, including ideas.
  • <Predicates> create assertions that
    • Connect things when both the Subject and Object are things, or
    • Make assertions about things when the Object is a literal
  • <Objects> can be either
    • Things or
    • Literals, e.g. a number or a string

These assertions summarize the Resource Description Framework (RDF) model. RDF is a language for representing information about resources on the World Wide Web; a resource can be anything identified by an IRI, whether or not it can be retrieved in a browser. More generally, RDF enables Linked Data (LD) that can operate on the public internet or privately within an organization. It is the simple elegance embodied in RDF that enables Meaning-First data modeling's radically powerful capabilities. It is also virtually identical to the linguistic building blocks that enabled cultural evolution: subject, predicate, object.
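To make the thing-versus-literal distinction in the checklist above concrete, here is a small sketch. The ex:name property and the string value are assumptions introduced only for illustration:

```turtle
@prefix ex:  <http://example.com/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:Mark ex:uses ex:English .          # the object is another thing
ex:Mark ex:name "Mark"^^xsd:string .  # the object is a literal
```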

Where RDF defines the triple itself, RDF Schema (RDFS) provides a data-modeling vocabulary for building on RDF triples. RDFS is an extension of the basic RDF vocabulary and is leveraged by higher-level languages such as the Web Ontology Language (OWL) and by vocabularies such as the Dublin Core Metadata Terms (DCTERMS). RDFS supports constructs for declaring that resources, such as Living Thing and Person, are classes. It also enables establishing subclass relationships between classes so the computer can make sense of the formal Person definition: "A Living Thing that is the offspring of some Person and that has a name."
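In Turtle, those two RDFS constructs might look like the following sketch. The IRIs are placeholders; gist's own class IRIs may differ:

```turtle
@prefix ex:   <http://example.com/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Declare two classes and the subclass relationship between them.
ex:LivingThing a rdfs:Class .
ex:Person      a rdfs:Class ;
               rdfs:subClassOf ex:LivingThing .
```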

Figure 2, RDFS subClassOf property

Here is a portion of the schema supporting the opening statement in this post, "Person uses Language". For simplicity, the 'has name' portion of the definition has been omitted from this diagram, but it will show up later. Figure 2 shows the RDFS subClassOf property as a named arrow connecting two ovals. This model is correct in that it shows the subClassOf property, yet it isn't quite satisfying. Perhaps it is even a bit ambiguous, because through the lens of traditional, Structure-First data modeling it appears to show two tables with a connecting relationship.


Nothing could be further from the truth.

Figure 3, RDFS subClassOf Venn diagram

There are two meanings here, and they are not connected structures. The Venn diagram in Figure 3 shows more clearly that the Person set is wholly contained within the set of all Living Things, so a Person is also a Living Thing. There is no structure separating them. They are in fact both in one single structure: a triple store. They are differentiated only by the meaning found in their formal definitions, which establish the membership criteria of two different sets. The first set is all Living Things. The second set, wholly embedded within the set of all Living Things, is the set of all Living Things that are also the offspring of some Person and that have a name. Person is the more specific set, with criteria that make a Living Thing a member of the Person set while it remains a member of the Living Things set.
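Read as triples, that set containment has a practical consequence: type an individual as a Person and an RDFS reasoner will conclude it is also a Living Thing. A small sketch, again with placeholder IRIs:

```turtle
@prefix ex:   <http://example.com/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Person rdfs:subClassOf ex:LivingThing .   # the Person set sits inside the Living Thing set
ex:Mark   a ex:Person .                      # Mark is in the Person set

# A reasoner can infer the following triple; it never needs to be stored:
# ex:Mark a ex:LivingThing .
```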

Rather than Structure-First modeling, this is Meaning-First modeling built upon the triple defined by RDF with the schema articulated in RDFS. There is virtually no structure beyond the triple. All the triples, content and schema, commingle in one space called a triple store.

Figure 4, Complete schema

Here is some informal data along with the simple ontology's model (a Turtle rendering of the same triples follows the lists):

Schema:

  • <Person> <uses> <Language>

Content:

  • <Mark> <uses> <English>
  • <Boris> <uses> <Russian>
  • <Rebecca> <uses> <Java>
  • <Andrea> <uses> <OWL>
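Rendered in Turtle, those same schema and content triples might look like this (the ex: IRIs are placeholders); note that they all live together in one graph:

```turtle
@prefix ex: <http://example.com/> .

# Schema
ex:Person ex:uses ex:Language .

# Content
ex:Mark    ex:uses ex:English .
ex:Boris   ex:uses ex:Russian .
ex:Rebecca ex:uses ex:Java .
ex:Andrea  ex:uses ex:OWL .
```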

Figure 5, Updated Language Venn diagram

Contained within this sample data lies a demonstration of the radical simplicity of Meaning-First data modeling. There are two subclasses in the data content not currently modeled in the schema, yet they don't violate the schema. Figure 5 shows the subclasses added to the schema after they have been discovered in the data. This can be done in a live, production setting without breaking anything! In a Structure-First system, new tables and joins would need to be added to accommodate this type of change, at great expense and over a long period of time. This example only scratches the surface of the radical simplicity of Meaning-First data modeling.
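As a hedged sketch of that change, the two discovered subclasses could be added as ordinary triples alongside the existing data. The class names NaturalLanguage and ProgrammingLanguage are assumptions made for illustration; the figure's actual labels may differ:

```turtle
@prefix ex:   <http://example.com/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Two subclasses added to the live schema after being discovered in the data.
ex:NaturalLanguage     rdfs:subClassOf ex:Language .
ex:ProgrammingLanguage rdfs:subClassOf ex:Language .

# Existing content keeps working; new typing triples simply refine it.
ex:English a ex:NaturalLanguage .
ex:Java    a ex:ProgrammingLanguage .
```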


Stay tuned for the next installment and a deeper dive into Meaning-First vs Structure-First data modeling!