Software Wasteland

Software Wasteland: Know what’s causing application development waste so you can turn the tide.


Software Wasteland is the book your Systems Integrator and your Application Software vendor don’t want you to read. Enterprise IT (Information Technology) is a $3.8 trillion per year industry worldwide. Most of it is waste.

We’ve grown used to projects costing tens of millions or even billions of dollars, and routinely running over budget and schedule many times over. These overages in both time and money are almost all wasted resources. However, the waste is hard to see because it is so marbled through all the products, processes, and guiding principles. That is what this book is about. We must see, understand, and agree on the problem before we can take coordinated action to address it.

Take the dive and check out Software Wasteland here.

Ontology-based Applications

Once you have your ontology, you want to put it to use. We will describe a common scenario where data is extracted from various sources, including relational databases, and then used by an application backed by a triple store instead of a traditional relational database. Things have advanced from just a few years ago, when the main technologies were for representing the schema (RDFS, OWL) and the data (RDF), plus a query language (SPARQL). Two important new standards have since emerged: R2RML, for extracting data from relational databases, and SHACL, for specifying constraints that cannot be expressed in OWL.

One good way to go about building an ontology-based application is as follows:

  1. Create ontology
  2. Create SHACL constraints
  3. Create triples
  4. Build program logic and user interface

This parallels how to build a traditional application.  The main difference is you are going to use a triple store to answer SPARQL queries instead of posing SQL queries to a relational database. Instead of creating conceptual, logical, and physical data models along with various integrity constraints, you will be building an ontology and SHACL constraints. Instead of having just one database and one data model per application, you can reuse either or both for multiple applications around the enterprise.

Create Ontology

Create the ontology for the chosen subject matter. Start with a core ontology that can be extended and used in a variety of applications across the enterprise.  This is similar to an agile approach, in that you start small and extend.  From the start, think about the medium and long term so that additions are natural extensions of the core ontology, which should be relatively stable.
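To make this concrete, here is a minimal sketch of what a small core might look like in Turtle. All names and namespaces are illustrative, not taken from an actual client ontology; note that ranges are kept broad so the core stays reusable across applications:

    @prefix :     <http://example.com/ontology/> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # A small, stable core that applications extend rather than modify
    :Organization a owl:Class .
    :Person       a owl:Class .
    :Corporation  a owl:Class ;
        rdfs:subClassOf :Organization .

    # Core properties; deliberately few, deliberately general
    :isSubsidiaryOf a owl:ObjectProperty ; rdfs:range :Corporation .
    :hasCEO         a owl:ObjectProperty ; rdfs:range :Person .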

Create SHACL Constraints

The ontology is modeling the real world, independently from any particular application. To build a specific application, you will be choosing a subset of the ontology classes and properties to use. Many but not all of the properties that are optional in the real world will remain optional in your application. Some properties that necessarily hold in the real world as reflected in the ontology will be of no interest for a particular application.

SHACL is a rich and complex standard with many intended uses. Three key ones are:

  1. Communicate what part of the ontology is to be used in the application.
  2. Communicate exactly what the triples that will be created and loaded into the triple store need to look like.
  3. Communicate to a SHACL engine exactly what integrity constraints are to be respected.

This process also forces you to examine all the aspects of the ontology that are needed for the application. It usually uncovers mistakes or gaps in the ontology. See Figure 1.

Figure 1: Creating Ontology, Constraints, and Triples
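For illustration, here is what a minimal SHACL shape might look like for the corporation example used later in this article. The specific constraints are invented for the sketch:

    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix :  <http://example.com/ontology/> .

    :CorporationShape a sh:NodeShape ;
        sh:targetClass :Corporation ;
        sh:property [
            sh:path :hasCEO ;    # optional in the real world,
            sh:minCount 1 ;      # but required in this application
            sh:maxCount 1 ;
            sh:class :Person ] .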

 

Create Triples

Triples can come from many sources, including text documents, web pages, XML documents, spreadsheets, and relational databases. The latter two are the most common, and vendors have supplied tools to support this process. The W3C has also created a standard for mapping a relational schema to an ontology so that triples may be extracted directly from a relational database. That standard is called R2RML[1]. Figure 2 shows how this works for a simple example. An R2RML specification for it would indicate the following:

  1. Each row in the corporation table will be an instance of the class :Corporation.
  2. The IRI for each instance of :Corporation will use the myd: namespace, and the local name (after the colon) is to be an underscore followed by the value in the ‘CorporationID’ column.
  3. The ‘Subsidiary Of’ column corresponds to the :isSubsidiaryOf property.
  4. The ‘CEO’ column corresponds to the :hasCEO property.
  5. There is a foreign key connecting values of the ‘CEO’ column to a Person table.

With this information, the R2RML engine can reach into the relational database table and extract triples as indicated in Figure 2. Importantly, at most one triple results from each cell in the table; if a cell contains a NULL, no triple is created.
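As a sketch of what such a specification might look like, here is a fragment of an R2RML mapping in Turtle. The table and column names, and the expansion of the myd: namespace, are assumptions made for illustration; the actual schema in the figure may differ:

    @prefix rr: <http://www.w3.org/ns/r2rml#> .
    @prefix :   <http://example.com/ontology/> .

    <#CorporationMap> a rr:TriplesMap ;
        rr:logicalTable [ rr:tableName "CORPORATION" ] ;
        rr:subjectMap [
            rr:template "http://example.com/mydata/_{CORPORATION_ID}" ;  # myd:_<id>
            rr:class :Corporation ] ;
        rr:predicateObjectMap [
            rr:predicate :isSubsidiaryOf ;
            rr:objectMap [ rr:template "http://example.com/mydata/_{SUBSIDIARY_OF}" ] ] ;
        rr:predicateObjectMap [
            rr:predicate :hasCEO ;
            rr:objectMap [
                rr:parentTriplesMap <#PersonMap> ;            # follows the foreign key
                rr:joinCondition [ rr:child "CEO" ; rr:parent "PERSON_ID" ] ] ] .

    <#PersonMap> a rr:TriplesMap ;
        rr:logicalTable [ rr:tableName "PERSON" ] ;
        rr:subjectMap [
            rr:template "http://example.com/mydata/_{PERSON_ID}" ;
            rr:class :Person ] .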

If you need to create triples from spreadsheets, you can use vendor tools, create your own tool, or write ad hoc scripts. There is not as much by way of out-of-the-box standards and tools for extracting triples from web pages, XML documents, and text documents, but specialized scraping and natural language processing tools may be available.

Figure 2: Tables to Triples

 

Build Program Logic & User Interface

This phase works much like the development of any other application. The main difference is that instead of querying a relational store using SQL, you are using SPARQL to query a triple store. See Figure 3.

Figure 3: Semantic Application Architecture
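For example, a screen that lists top-level corporations and their CEOs might be driven by a query along these lines (reusing the property names from the earlier example; an actual application’s model would differ):

    PREFIX : <http://example.com/ontology/>

    SELECT ?corporation ?ceo
    WHERE {
      ?corporation a :Corporation ;
                   :hasCEO ?ceo .
      # keep only corporations that are not subsidiaries
      FILTER NOT EXISTS { ?corporation :isSubsidiaryOf ?parent }
    }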

 

[1] https://www.w3.org/TR/r2rml/

Enterprise Ontology, Semantic Silos, and Cowpaths

Paving Cow Paths

Numerous modern-day streets in downtown Boston defy logic – until you realize that the city fathers literally paved over the transit system created and used by cows.* This gave the immediate benefit of getting places faster, while losing out on the longer-term gains that a purpose-built street plan could have yielded. This type of thing is pervasive in today’s enterprise, ranging from computerizing paper forms to the plethora of information silos that call for an enterprise ontology – the subject of today’s blog.

Figure 1: Paving the cowpaths in Boston**

Semantic Technology

Semantic Arts works with a wide variety of companies and, unlike just a few years ago, it is now common for our new clients to already have a number of efforts and groups exploring semantic technology in-house. Gone is the fear of the ‘O word’. In its place are a range of projects and activities such as creating ontologies, working with triple stores, and creating proofs of concept. Unfortunately, what we see very little of is coordination of these efforts throughout the enterprise.

It would be mistaken to regard this as a misuse of the technology, because point solutions will often result in significant benefits locally – just as paving cow paths gave immediate gains. It is more a missed opportunity, in the form of a great irony: the very technology designed to break down silos gets used to build yet more silos – Semantic Silos.

Figure 2: Avoid Semantic Silos

Building semantic silos is an easy trap to fall into, because it takes a while to fully comprehend the power of semantic technology (or any other disruptive technology). Information silos arise for many reasons, both technological and organizational. Key technological factors include the inability of relational databases to (1) reuse schema and (2) uniquely identify data elements globally across databases. That’s where URIs and RDF triples come in. It is hard to overstate the power of the URI in semantic technology. URIs uniquely identify not only data elements but also schema elements. The former eliminates the need to specify joins, and the coordination of URIs makes the snapping together of disparate databases, well, a snap. The latter enables something entirely foreign to relational technology: the ability to easily share and reuse all or parts of an existing schema.
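A hypothetical sketch makes the point. Suppose two databases are populated independently but coordinate their URIs (all names here, including :hasName, are invented for illustration):

    @prefix myd: <http://example.com/mydata/> .
    @prefix :    <http://example.com/ontology/> .

    # Triples in a customer-facing database
    myd:_corp42 a :Corporation ;
                :hasCEO myd:_person7 .

    # Triples in a separate HR database, reusing the same URIs
    myd:_person7 a :Person ;
                 :hasName "Jane Smith" .

    # Loaded together (or queried by federation), the two graphs merge
    # on myd:_person7; no join logic is written anywhere.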

Enterprise Ontology

The key to avoiding semantic silos is to use an enterprise ontology: a small and elegant representation of the core concepts and relationships in your enterprise that are stable over time. It is at the same time both a conceptual model and a computable artifact that plays the role of a physical data schema. The enterprise ontology is a foundation for building more specialized ontologies that are loaded into dozens, hundreds, or thousands of graph databases, called triple stores, that are populated with data. Data elements are also shared across multiple databases. This is depicted in Figure 3.

These stores can be used by many applications, not just one or two, as is common in today’s siloed, application-centric enterprise.  Collectively, these ontologies and their data form an enterprise knowledge graph. Such graphs are hugely important for modern companies such as Google, Facebook and LinkedIn.


Figure 3: The triple stores depicted in the top row are not silos. Globally unique URIs snap together to form a single enterprise knowledge graph that is accessible using federated SPARQL queries.  Letters denote ontology URIs and numbers denote data URIs.
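A federated query of the kind the caption describes might look like the following sketch, with a hypothetical endpoint URL and vocabulary:

    PREFIX : <http://example.com/ontology/>

    SELECT ?corporation ?ceoName
    WHERE {
      ?corporation a :Corporation ;          # answered by the local store
                   :hasCEO ?ceo .
      SERVICE <http://hr.example.com/sparql> {   # a second store in the graph
        ?ceo :hasName ?ceoName .
      }
    }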

Having built enterprise ontologies now in a variety of industries, we are confident in stating the surprising result that there are only a few hundred such concepts that form this core for any given enterprise.  This is what makes it possible to build an enterprise ontology, where building enterprise-wide data models has failed for decades. There is no need to have millions of attributes in the core model.

Summary and Conclusion

  1. It is entirely possible to use semantic technology to develop point solutions around your enterprise and unwittingly end up recreating the very silos that semantic technology aims to get rid of.
  2. We see this happening in organizations that are using semantic technology.
  3. You don’t want to do that: you will miss out on some of the main benefits of the technology. The data will not snap together if there is no coordination.
  4. The answer is to use an enterprise ontology as a core data model that is shared among all the applications and data stores that collectively make up your enterprise knowledge graph.
  5. The URI is the hero: URIs are globally unique identifiers that allow seamless sharing of data and schema. Explicit joins are history.

Keep in mind that technology as enabler is only part of the story. To get real traction in breaking up silos also requires meeting plenty of social and organizational challenges and putting governance policies into place.  But that’s another topic for another day.

Don’t fall into the trap of paving the cow paths to semantic silos. Use an enterprise ontology to create the beginning of an integrated enterprise.

Afterword

See also the delightful and well-known poem by S.W. Foss called, “The Calf Path”.***

* Change Management: Paving the Cowpaths
https://www.fastcompany.com/1769710/change-management-paving-cowpaths

** Picture credit:
http://bostonography.com/2011/cartographic-greetings-from-boston/bostontownoldrenown/

*** https://m.poets.org/poetsorg/poem/calf-path

A Semantic Bank

What does it mean to be a “Semantic Bank”?

 

In the last two months I’ve heard at least six financial institutions declare that they intend to become “A Semantic Bank.” We still haven’t seen even the slightest glimmer of what any of them mean by that.

Allow me to step into that breach.

What follows is our take on what it would mean to be a “Semantic Bank.”

The End Game

I’m reluctant to start with the end state, because pretty much anyone reading this, including those who aspire to be semantic banks, will find it a “bridge too far.” Bear with me. I know this will take at least a decade, perhaps longer, to achieve. However, having the end in mind allows us to understand, with a clarity few currently have, exactly where we are wasting our money now.

If we had the benefit of time and could look back from 2026 and ask ourselves “which of our investments in 2016 were really investments, and which were wastes of money?”, how would we handicap the projects we are now funding? To be clear, not all expenditures need to lead to the semantic future. There are tactical projects that are worth so much in the short term that we can overlook the fact that we are anti-investing in the future. But we should be aware of when we are doing this, and it should be the exception. The semantic bank of the future will be the organization that can intentionally divert the greatest percentage of its current IT capital spend toward its semantic future.

A Semantic Bank will be known by the extent to which its information systems are mediated by a single (potentially fractal, but with a single simple core) conceptual model.  Unlike conceptual models of the past, this one will be directly implemented.  That is, a query to the conceptual model will return production data, and a transaction expressed in conceptual model terms will be committed, subject to permissions and constraints which will also be semantically described.

Semantics?

For those who just wandered into this conversation: semantics is the study of meaning. Semantic technology allows us to implement systems, and to integrate systems, at the level of conceptual meaning rather than at the level of structural description (which is what traditional technology relies on).

It may sound like a bit of hair splitting, but the hair splitting is very significant in this case. This technology allows practitioners to drop the costs of development, integration, and change by over an order of magnitude, and allows incorporation of data types (unstructured, semi-structured, and social media, for instance) that hitherto were difficult or impossible to integrate.

It accomplishes this through a couple of interesting departures from traditional development:

  • All data is represented in a single format (the triple). There aren’t hundreds or thousands of different tables; there is just the triple.
  • Each triple is an assertion, a mini sentence composed of a subject, predicate, and object. All data can be reduced to a set of triples (see the sketch after this list).
  • All the subjects, all the predicates, and most of the objects are identified with globally unique identifiers (URIs, which are analogous to URLs).
  • Because the identifiers are globally unique, the system can join records without an analyst or programmer having to write the explicit joins.
  • A database that assembles triples like this is called a “triple store” and is in the family of “graph databases.” A semantic triple store is different from a non-semantic graph database in that it is standards compliant and supports a very rich schema (even though it is not dependent on having a schema).
  • Every individually identifiable thing (whether a person, a bank account, or even the concept of “Bank Account”) is given a URI. Wherever the URI is stored or used, it always means exactly the same thing. Meaning is not dependent on context or location.
  • New concepts can be formed by combining existing concepts.
  • The schema can evolve in place, even in the presence of a very large database dependent on it.
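Here is a hypothetical sketch of three such triples in Turtle (the namespace and names are made up for illustration; ‘a’ is Turtle shorthand for rdf:type):

    @prefix fin: <http://example.com/finance/> .

    # Each line is one assertion: subject, predicate, object.
    fin:_account987 a fin:BankAccount .           # this thing is a bank account
    fin:_account987 fin:heldBy fin:_person123 .   # it is held by this party
    fin:_person123  fin:hasName "A. Customer" .   # that party's name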

A set of concepts so defined is called an “Ontology” (loosely an organized body of knowledge). When the definitions are shared at the level of the firm, this is called an “Enterprise Ontology.”

Our experience has been that, using these semantic approaches, an ontology can be radically simpler, and at the same time more precise and more complete, than a traditional application database schema. When the semantics are done at the firm level, the benefits are even greater, because each additional application benefits from the concepts shared with the others.

Business Value

What is the business value of rethinking information systems? The benefits come in two main varieties: generic and specific.

Generic Value

Dropping the cost of change by a factor of 10 has all sorts of positive value.  Systems that were too difficult to change become malleable.

The integration story is even better: once all the similar concepts are expressed in a way that makes their similarity obvious and baked into their identity, systems integration, currently one of the largest costs in IT, will become almost free.

Back to the End Game

In the end game, a semantic bank will have all of its systems directly implemented on a shared semantic model. The scary thing is: who has a better shot at this, the established oligarchy (the “too big to fail”) or FinTech? Each has about half the advantages. Cue Clayton Christensen’s “Innovator’s Dilemma”: in some situations a new upstart enters a market with superior technology and the incumbents crush the upstart; in other situations, the upstart creates a beachhead in an underserved market and continually walks its way up the value chain until the incumbents are on the ropes. What makes the difference, and how it will play out with the “Semantic Banks,” is the ultimate question.

A Bit More on the Target

Most vendors have a tendency to see the future in terms of the next version of their offering.

In the future, a progressive firm will have an “enterprise ontology” that represents the key concepts that they deal with.  Currently they have thousands of application systems, each of which has thousands of tables, and tens of thousands of columns that they manage.  In aggregate they are managing millions of concepts.

But really, there are a few hundred concepts on which everything they deal with is based. When we figure out what these few hundred concepts are, we have started down the road of profound simplicity.

Once you have this model (the “core ontology”) you are armed with a weapon that delivers on three fronts:

  • All integration can be mediated through this model. By mapping into and out of the shared model, all integration becomes easier.
  • New development can be made incredibly simpler. Building an app on a model that is 10 times simpler than normal, and 100 times simpler than the collective model of the firm, economizes the application development and integration process.
  • The economics of change become manageable. Currently there is such a penalty for changing an information system that we spend an inordinate amount of energy staving off changes. In the semantic future, change is economical: not free, but far, far less costly than it is now. Once we get to that point, the low cost of change translates into rapidly evolvable systems.

What Will Distinguish the Leaders in the Next Five Years?

Only the smallest startups will be completely semantic within the next five years. If they develop a semantic core, their challenge will be growing out to overtake the incumbents.

This white paper is mostly written for the incumbents (by the way, we are happy to help FinTech startups, but our core market is established players dealing with legacy issues).

Most financial services companies right now are executing “proof of concept” projects. Those that do this may well be the losers. NASA has a concept called “TRL” (Technology Readiness Level), a scale of 1-9, where levels 1-3 are wacky ideas that no one yet knows how to implement and levels 7-9 are technology that has already been commercialized, with no risk left in implementation. Experiments are typically done at levels 1-3, to learn what else we need to know to make a technology real. Proofs of concept are typically done at levels 4-6, to narrow down some implementation parameters. The issue is that all the important semantic technology is at level 8 or 9. Everyone knows it works and knows how it works. The companies doing “proof of concept” projects in semantic technology at this point are vamping[1] and will ultimately be eclipsed by companies that can commit when appropriate.

What are the benefits of becoming semantic?

The benefits of adopting this approach are so favorable that many people will challenge our credibility for suggesting them (it sounds like hype), but these differences are really true, so we won’t shrink from our responsibility for the sake of credibility.

Integration

When you map your existing (and especially future) systems to a simple, shared model, the cost of integration plummets.  Currently integration consumes 30-60% of the already bloated IT budget because analysts are essentially negotiating agreement between systems that each have tens of thousands of concepts.  That’s hard.

What’s easy (well, relatively easy) is to map a complex system to a simple model. Once you’ve done this, it is integrated with all the other systems that have also been mapped to that model. It becomes the network effect of knowledge.

Application Development

A great deal of the cost of application development is the cost of programming to a complex data model. Semantic technology helps at two levels. The first is that by reducing the complexity of the model, any code dependent on the model is reduced proportionately. The second is that semantic technology is very compatible with RESTful development. The RESTful approach encourages a style of development that is less dependent on, and less coupled to, the existing schema. We have found that a semantics-based system using RESTful APIs is amazingly resilient to changes in the model (other than those that introduce major structural changes, but that is a commercial for getting your core ontology right to start with).

New Data Types

Many leading-edge projects are predicated on being able to incorporate data that was hitherto unrealistic to incorporate. This might be unstructured data, it might be open data, it might be social media. All of these are difficult for traditional technology, but semantic technology takes them in stride.

Observations from other industries

What follows are our observations about what has worked in other industries (which, by the way, are also only minimally converted to semantic technology, but whose early adopters provide some important signposts for what works and what doesn’t).

Vision and Constancy Trump Moon Shots

What we have seen from the firms that have implemented impressive architectures based on semantics is that a small team with continual funding vastly outperforms attempts to catch up with huge projects. The most impressive firms have had a core of 3-8 people who were at it continually for 2-4 years. Once such a team reaches critical mass with the capability it creates, putting 50-100 people on a catch-up project will never catch them. The lead that can be established now with a small, focused team will be insurmountable 3-5 years from now, when this movement becomes obvious.

The Semantic Bank Maturity Model

Eventually we will come to the point where we will want to know “how semantic are you?” Click here to take an assessment to discover the answer to this question.

We will take this up in a separate white paper, with considerably more detail, but the central concept is: what percent of your data stores are semantically enabled and how semantic are they really?

Getting Started

Let’s assume you want to take this on and become a “Semantic Bank”.  How do you go about it?

What we know from other industries is that the winner is not the firm that spends the most, or even the one that starts first (although at some point, failing to start is going to be starting to fail). The issue is who can mount a modest but continual initiative. This means that the winner will be the firm that can finance a continual improvement project over several years. While you might make a bit of incremental progress through a series of tactical projects, the big wins will come from the companies that can set up an initiative and stick with it. We have seen this in healthcare, manufacturing, and publishing, and we expect it to be true in financial services as well.

Often this means that the sponsor must be in a position to dedicate a continual (but not very large) budget to achieving this goal. If that is not you, you may want to start the conversation with the person who can make a difference. If this is you, what are you waiting for?

[1] Vamping is a term professional jugglers use to refer to what you do when you drop a juggling club: you continue the cadence with an imaginary club until you can find a moment to lift the dropped club back into the rotation.

The Inelegance of having Brothers and Sisters

This blog follows from a recent post by Dan Carey called Screwdrivers and Properties, which points to a longer whitepaper on the topic of avoiding property proliferation.

One way we keep the number of primitives small is to avoid creating a subproperty whose meaning is essentially the same as its superproperty’s but which has a more restricted domain or range. We illustrate this with an example from the genealogy domain. Suppose we have the property myeo:hasSibling and we want to model brothers and sisters. One way would be to create two subproperties, myeo:hasBrother and myeo:hasSister, whose ranges are myeo:Male and myeo:Female respectively, and define the class myeo:Brother as a property restriction class that means “any individual that is the brother of some person”. In Manchester syntax, this looks like: “myeo:brotherOf some myeo:Person”, where myeo:brotherOf is the inverse of myeo:hasBrother. Similarly, we can define myeo:Sister as “myeo:sisterOf some myeo:Person”. This introduces two new classes and two new properties.

However, we can easily capture the semantics of brother and sister without introducing any new properties. We define the class myeo:Brother as “myeo:Male and myeo:siblingOf some myeo:Person” and myeo:Sister is defined as “myeo:Female and myeo:siblingOf some myeo:Person”. This way we can define the brother and sister concepts entirely in terms of existing primitives with the same number of classes and without creating any new properties.
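Rendered in Turtle rather than the Manchester syntax quoted above (the expansion of the myeo: namespace is assumed), the two definitions might look like this:

    @prefix myeo: <http://example.com/myeo/> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .

    # myeo:Brother = a male that is the sibling of some person
    myeo:Brother owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf ( myeo:Male
                             [ a owl:Restriction ;
                               owl:onProperty myeo:siblingOf ;
                               owl:someValuesFrom myeo:Person ] ) ] .

    # myeo:Sister = a female that is the sibling of some person
    myeo:Sister owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf ( myeo:Female
                             [ a owl:Restriction ;
                               owl:onProperty myeo:siblingOf ;
                               owl:someValuesFrom myeo:Person ] ) ] .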

The only thing that differs about myeo:hasBrother and myeo:hasSister compared to myeo:hasSibling is that the former two properties have more restricted ranges (myeo:Male and myeo:Female vs. myeo:Person). Otherwise the meaning is identical. We have essentially moved the semantics of brother and sister from the ranges of two new properties into the class expressions that define the classes myeo:Brother and myeo:Sister (see figure below).

Keeping the number of primitives low is not only more elegant, but it has practical value.  The fewer things you have, the easier it is to find what you need. Not only does it help during ontology development, it also helps downstream when others evolve and apply the ontology.


Screwdrivers & Properties

Screwdrivers generally have only a small set of head configurations (flat, Phillips, hex) because the intention is to make accessing contents or securing parts easy (or at least uniform).

Now imagine how frustrating it would be if every screw and bolt in your house or car required a unique screwdriver head.  They might be grouped together (for example, a bunch of different sized hex heads), but each one was slightly different.  Any maintenance task would take much longer and the amount of time spent just organizing the screwdrivers would be inordinate.

Yet that is precisely the approach that most OWL modelers take when they over-specify their ontology’s properties.
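As a sketch of the difference (all names invented for illustration), compare an over-specified pair of properties with a single uniform one:

    @prefix :     <http://example.com/ontology/> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Over-specified: one property per context, each with a narrow domain
    :corporationHasCEO a owl:ObjectProperty ;
        rdfs:domain :Corporation ; rdfs:range :Person .
    :charityHasCEO a owl:ObjectProperty ;
        rdfs:domain :Charity ;     rdfs:range :Person .

    # Uniform: a single property, reused wherever it makes sense
    :hasCEO a owl:ObjectProperty ; rdfs:range :Person .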

“Avoiding Property Proliferations – Part 1” discusses the pitfalls of habitually applying domains and ranges to properties.

Click here to download the whitepaper.

Greatest hits from the Data-Centric Manifesto

I was just reading through what some folks have written on the Data-Centric Manifesto website. I thought I’d capture some of the more poignant comments:

“I believe [Linked] Data Centric approach is the way of the future. I am committing my company to assisting enterprises in their quest to Data-Centric transformation.” -Alex Jouravlev

 

“I have experienced first-hand in my former company the ravages of application-centric architectures. Development teams have rejected SQL-based solutions that performed 10 to 100 times better with less code and fewer resources, all because of application-centric dogma. Databases provide functional services, not just technical services – otherwise they’re not worth the money.” – Stew Ashton

 

“I use THE DATA-CENTRIC MANIFESTO as a mantra, a guide-line, a framework, an approach and a method, with which to add value as a consultant to large enterprises.” -Mark Besaans

 

“A data-centric approach will finally allow IT to really support the way we think and work instead of forcing us to think in capabilities of an application.” -Mark Schenk

 

“The principles of a data-centric approach would seem obvious, but the proliferation of application-centric implementations continues. Recognizing the difference is critical to positive change, and the benefits organizations want and need.” -Kim L Hoover

Data-centric is a major departure from the current application-centric approach to systems development and management. Migration to the data-centric approach will not happen by itself; it needs champions. If you’re ready to consider the possibility that systems could be more than an order of magnitude cheaper and more flexible, then become a signatory of the Data-Centric Manifesto.

Read more here.

Do Data Lakes Make My Enterprise Look Data-Centric?

Dave McComb discusses data lakes, schema, and data-centricity in “The Data-Centric Revolution: Implementing a Data-Centric Architecture,” his latest post for The Data Administration Newsletter. Here’s a brief excerpt to pique your interest:

“I think it is safe to say that there will be declared successes in the Data Lake movement. A clever data scientist, given petabytes of data to troll through, will find insights that will be of use to the enterprise. The more enterprising will use machine learning techniques to speed up their exploration and will uncover additional insights.

But in the broader sense, we think the Data Lake movement will not succeed in changing the economics or overall architecture of the enterprise. In a way, the Data Lake is something to do instead of dealing with the very significant problems of legacy ecosystems and dis-economics of change.

Even at the analytics level, where the Data Lake has the most promise, we think it will fall short…

Conceptually, the Data Lake is not far off from the Data Centric Revolution. The data does have a more central position. However, there are three things that a Data Lake needs in order to be Data Centric…”

Click here to read the entire article.

 

Debugging Enterprise Ontologies

Michael Uschold gave a talk at the International Workshop on Completing and Debugging the Semantic Web, held in Crete on May 30, 2016. Here is a preview of his white paper, “Finding and Avoiding Bugs in Enterprise Ontologies”:

Finding and Avoiding Bugs in Enterprise Ontologies

Abstract: We report on ten years of experience building enterprise ontologies for commercial clients. We describe key properties that an enterprise ontology should have, and illustrate them with many real world examples. They are: correctness, understandability, usability, and completeness. We give tips and guidelines for how best to use inference and explanations to identify and track down problems. We describe a variety of techniques that catch bugs that an inference engine will not find, at least not on its own. We describe the importance of populating the ontology with data to drive out more bugs. We point out some common ontology design practices in the community that lead to bugs in ontologies and in downstream semantic web applications based on the ontologies. These include proliferation of namespaces, proliferation of properties and inappropriate use of domain and range. We recommend doing things differently to prevent bugs from arising.

Introduction
In a manner analogous to software debugging, ontologies need to be rid of their flaws. The types of flaws to be found in an ontology are slightly different from those found in software, and revolve around the ideas of correctness, understandability, usability, and completeness. We report on our experience (spanning more than a decade) in building and debugging enterprise ontologies for large companies in a wide variety of industries, including finance, healthcare, legal research, consumer products, electrical devices, manufacturing, and digital assets.

For the growing number of companies starting to use ontologies, the norm is to build a single ontology for a point solution in one corner of the business. For large companies, this leads to any number of independently developed ontologies, resulting in many of the same heterogeneity problems that ontologies are supposed to solve. It would help if they all used the same upper ontology, but most upper ontologies are unsuitable for enterprise use. They are hard to understand and use because they are large and complex, containing much more than is necessary, or because the focus is too academic to be of use in a business setting.

So the first step is to start with a small, upper, enterprise ontology such as gist [McComb 2006], which includes core concepts relevant to almost any enterprise. The resulting enterprise ontology itself will consist of a mixture of concepts that are important to any enterprise in a given industry and those that are important to a particular enterprise. An enterprise ontology plays the role of an upper ontology for all the ontologies in a company (Fig. 1). Major divisions will import and extend it. Ontologies that are specific to particular applications will, in turn, import and extend those. The enterprise ontology evolves to be the semantic foundation for all major software systems and databases that are core to the enterprise.

Click here to download the white paper.

Click here to download the presentation.

Evolve your Non-Temporal Database in Place

At Semantic Arts, we recently decided to upgrade our internal system to turn something that was not temporal (our billing rates) into something that was. Normally, that would be a pretty big change. As it turned out, it was pretty straightforward and could be done as an in-place update. It made a pretty good mini case study of how using semantics and a graph database can make these kinds of changes far less painful.
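We don’t reproduce the actual model here, but a generic Turtle sketch conveys the idea: the timeless fact is superseded by a reified rate that carries an effective interval, and the new triples are simply added alongside the old data, with no schema migration forced on the rest of the graph. All names below are hypothetical:

    @prefix :    <http://example.com/sa/> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    # Before: a timeless fact
    :consultant1 :hasBillingRate 100 .

    # After: the rate is reified so it can carry an effective interval
    :consultant1 :hasRateAssignment :rateAssignment1 .
    :rateAssignment1 :ratePerHour 100 ;
                     :startDate "2015-01-01"^^xsd:date ;
                     :endDate   "2016-06-30"^^xsd:date .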

So, Dave McComb documented it in a YouTube video.

 

Click here to view: Upgrade a non Temporal Database in Place