Knowledge Based Systems

A subject matter expert (SME) and a software application agree on a common representation of knowledge, a conceptualization, expressed using global open standard technical formats such as XBRL, RDF+OWL+SHACL, or GQL. That means that rather than embedding subject matter logic within the code of a software application, that logic can be separated from the application. This separation makes the common representation of knowledge more useful and modifications to the software easier.
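To make this concrete, here is a minimal sketch in Python, assuming the rdflib and pyshacl packages are installed. The ex: vocabulary and the specific rule are illustrative assumptions, not part of any standard; the point is that the subject matter rule lives in data (SHACL shapes) while the application code stays generic.

```python
# A minimal sketch of separating knowledge from code, assuming the
# rdflib and pyshacl packages (pip install rdflib pyshacl).  The
# ex: vocabulary below is illustrative, not a real standard.
from rdflib import Graph
from pyshacl import validate

# Subject matter facts live in data, not in application code.
facts = """
@prefix ex: <http://example.com/accounting#> .
ex:entity1 ex:assets 1000 ; ex:liabilities 600 .
"""

# The SME's rule, also data: anything reporting assets must
# also report liabilities.
shapes = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.com/accounting#> .
ex:EntityShape a sh:NodeShape ;
    sh:targetSubjectsOf ex:assets ;
    sh:property [ sh:path ex:liabilities ; sh:minCount 1 ] .
"""

data_graph = Graph().parse(data=facts, format="turtle")
shacl_graph = Graph().parse(data=shapes, format="turtle")

# The application code is generic: it knows nothing about accounting.
conforms, _, report = validate(data_graph, shacl_graph=shacl_graph)
print(conforms)
print(report)
```

When the SME's rules change, only the shapes graph changes; the validating code is untouched.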

This distinction (i.e. separating knowledge and code) makes knowledge based systems very different from typical software applications. The architecture of a knowledge based system explicitly separates knowledge from software code by representing the knowledge declaratively rather than embedding it in procedural code. Representing knowledge declaratively also makes that knowledge more broadly usable.
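The same architectural point can be shown without any semantic technology at all. In the sketch below (pure Python; the rule and fact names are made up for illustration), the subject matter rule is declarative data interpreted by a generic engine, so changing the knowledge never requires changing the engine.

```python
# A minimal sketch (pure Python, illustrative names): the rule is
# declarative data; the engine below never changes when the subject
# matter knowledge changes.
rules = [
    {"if": ["assets", "liabilities"], "then": "equity",
     "compute": lambda f: f["assets"] - f["liabilities"]},
]

def run_engine(rules, facts):
    """Generic forward-chaining step: apply any rule whose premises hold."""
    derived = dict(facts)
    for rule in rules:
        if all(p in derived for p in rule["if"]) and rule["then"] not in derived:
            derived[rule["then"]] = rule["compute"](derived)
    return derived

print(run_engine(rules, {"assets": 1000, "liabilities": 600}))
# {'assets': 1000, 'liabilities': 600, 'equity': 400}
```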

These types of systems have been referred to as "expert systems", "mindful machines", and "knowledge based systems". Additional terms that might be appropriate to describe this notion are "deductive apparatus" and "engine".

Imagine a knowledge based mindful machine specific to accountancy. Imagine a human accountant and a computer-based "artificial mind" interacting within a single common ecosystem. That common ecosystem provides a shared conceptualization and has theories, knowledge, reasoning, and governance (i.e. curation, checks and balances, scrutiny). The knowledge in that common ecosystem is discernible because it is clearly specified. Knowledge is testable, so agreement can be confirmed by the subject matter experts (SMEs) in that community of practice.
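As a hedged illustration of what "testable knowledge" could look like, the sketch below states the accounting equation once as data and checks it mechanically; SMEs confirm agreement by running the test rather than reading application code. The names and structure are assumptions for illustration only.

```python
# Illustrative sketch: an assertion from the community of practice,
# stated once as data and checked mechanically.
assertions = [
    ("accounting equation",
     lambda f: f["assets"] == f["liabilities"] + f["equity"]),
]

def check(facts):
    # Run every assertion against the reported facts.
    return [(name, test(facts)) for name, test in assertions]

print(check({"assets": 1000, "liabilities": 600, "equity": 400}))
# [('accounting equation', True)]
```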

The knowledge is extensible and elastic; the ecosystem is flexible where it needs to be flexible. Guardrails or "bumpers" associated with that extensibility/elasticity/flexibility keep both the machine and the human within boundaries, preventing wild behavior. Knowledge is represented in a global open standards based form that is understandable by a machine-based process, and from that machine-readable representation a human-understandable representation can also be generated. Why? Humans need to confirm that the machine-based representation is complete, consistent (e.g. free from contradictions), precise, and accurate (e.g. properly reflects the beliefs of the community). Maintaining two separately authored representations (one for machines, a different one for humans) can result in inconsistencies; generating the human-readable form from the machine-readable form avoids that.
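One plausible way to generate the human-readable form directly from the single machine-readable representation is sketched below, again assuming rdflib and an illustrative ex: vocabulary.

```python
# A minimal sketch (rdflib assumed; vocabulary illustrative) of
# deriving a human-readable rendering from the one machine-readable
# representation, so the two can never drift apart.
from rdflib import Graph

machine_form = """
@prefix ex: <http://example.com/accounting#> .
ex:entity1 ex:assets 1000 ; ex:liabilities 600 ; ex:equity 400 .
"""

g = Graph().parse(data=machine_form, format="turtle")
for s, p, o in g:
    # Turn each triple into a plain-language sentence.
    subject = s.split("#")[-1]
    predicate = p.split("#")[-1]
    print(f"The {predicate} of {subject} is {o}.")
```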

This common ecosystem can result in a virtuous cycle. System friction caused by rekeying information, maintaining multiple versions of information that could differ, and copying and pasting information is dramatically reduced, perhaps even eliminated.

Self-regulation theory can be applied to knowledge based systems to develop systems capable of monitoring and controlling their own behavior to achieve specified goals in dynamic and complex environments such as accountancy. Building self-regulating capabilities into software agents typically involves four phases based on models of human cognition (a sketch of this loop follows the list):
  • Forethought and planning: This involves setting rules, assertions, restrictions, conditions, and goals, as well as planning how to achieve those goals given those constraints. For knowledge based systems this includes defining high-level objectives and breaking those objectives down into a series of actionable steps to complete the required work tasks.
  • Monitoring: The knowledge based system continuously tracks its performance against its specified goals articulated as knowledge within the machine-readable knowledge base. This might mean monitoring internal state like its "confidence" or "uncertainty".
  • Control: Control is the process of adjusting behavior based on the monitoring phase. If the knowledge based system detects a contradiction, inconsistency, or error, it must have mechanisms to correct its own actions or notify its human collaborator of the issue so that it can be corrected. This might include adding knowledge to the system to address incomplete information. It can also involve modifying or adjusting the system's internal reasoning processes or strategies.
  • Reflection: The knowledge based system reviews the outcomes of its actions to learn from its performance and, perhaps, makes suggestions to improve good practices and best practices based on emergent or novel practices. This might involve comparing the results to the original goals and updating internal models to improve future behavior. This learning capability allows ongoing self-improvement without continuous human oversight, though it might involve periodic human supervision and may require the addition of new rules or adjustments to existing rules.
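Here is the promised minimal sketch of the four-phase loop. Everything in it (the class name, the toy "confidence" measure, the escalation threshold) is an illustrative assumption rather than a standard API; it is meant only to show how forethought, monitoring, control, and reflection fit together.

```python
# A hedged sketch of the four-phase self-regulation loop described
# above; all names and thresholds are illustrative assumptions.
class SelfRegulatingAgent:
    def __init__(self, goal, threshold=0.8):
        self.goal = goal            # forethought: the stated objective
        self.threshold = threshold  # guardrail for escalating to a human
        self.log = []

    def plan(self):
        # Forethought/planning: break the goal into actionable steps.
        return [f"step {i} of {self.goal}" for i in (1, 2)]

    def monitor(self, result):
        # Monitoring: track performance, e.g. a confidence estimate.
        return result.get("confidence", 0.0)

    def control(self, confidence):
        # Control: self-correct, or notify the human collaborator.
        return "escalate to human" if confidence < self.threshold else "proceed"

    def reflect(self, outcome):
        # Reflection: record outcomes to inform future behavior.
        self.log.append(outcome)

agent = SelfRegulatingAgent(goal="close the books")
for step in agent.plan():
    outcome = {"step": step, "confidence": 0.9}   # stand-in for real work
    action = agent.control(agent.monitor(outcome))
    agent.reflect({**outcome, "action": action})
print(agent.log)
```

In a real knowledge based system, the confidence measure and the escalation rule would themselves be part of the declarative knowledge rather than hardcoded as they are here.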
Collaboration between humans and machines, with the humans always ultimately in charge of the system, is unprecedented. With such a system, humans can do what they are best at while machines perform the tasks that they can perform reliably.

The graphic below is inspired by a similar graphic created by DARPA and provided in the video A DARPA Perspective on Artificial Intelligence; it shows the impact of a hybrid approach to artificial intelligence and human/machine teaming. #1 shows all work being performed by humans. #2 shows the capabilities of rules-based artificial intelligence and handcrafted knowledge. #3 shows the capabilities of probability-based artificial intelligence based on statistical learning. #4 shows what can be achieved using a hybrid approach which combines #1, #2, and #3 and uses a contextualized, model-driven approach. (GREEN is work performed by a human and BLUE is work performed by a machine.)
