Intellectual Provenance: Boundary Logic and Decision-Making Under Uncertainty
My postgraduate work focused on the application of mathematical logic to reasoning under uncertainty, specifically in domains where decisions cannot be meaningfully reduced to discrete choices. The central premise was that many real-world systems do not operate across well-demarcated thresholds, but instead function within boundary regions: zones in which multiple actions remain possible, confidence can be low, and small changes in information, interpretation, or institutional context can materially alter outcomes.
At the time, this work sat largely in the domain of formal logic. Today, it maps directly onto the core challenges of modern artificial intelligence, clinical decision support, and regulatory intelligence.
From Discrete Decisions to Probability Fields
Contemporary AI systems are fundamentally probabilistic. Whether framed as Bayesian inference, deep learning confidence estimation, or causal modelling, they operate over continuous risk surfaces rather than binary truth states. Yet most human and institutional interfaces still force these outputs into categorical forms: high versus low risk, approve versus reject, treat versus defer.
Boundary logic addresses the structural weakness of that translation. It formalises the fact that the most consequential errors, biases, and institutional failures occur not in cases of clear signal, but at the margin where probabilities cluster near decision thresholds, evidence is incomplete, and human judgement, policy, or incentive structures begin to dominate the outcome.
In modern terms, this corresponds to what is now described as calibration failure, threshold effects, and decision instability in high-stakes AI systems.
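A minimal sketch of the translation problem, in Python. The threshold and margin values are illustrative placeholders, not calibrated to any real system: rather than forcing every probability into a binary action, the boundary zone is made explicit and escalated.

```python
# Minimal sketch: a probabilistic output is converted to a categorical
# action only when it sits clearly away from the decision threshold.
# The threshold and margin below are illustrative placeholders.

def boundary_decision(risk: float, threshold: float = 0.5, margin: float = 0.1) -> str:
    """Map a model probability to an action, keeping the boundary zone visible.

    Cases within `margin` of `threshold` are not forced into a binary
    outcome; they are flagged for human or institutional judgement.
    """
    if risk >= threshold + margin:
        return "treat"    # clear signal above the threshold
    if risk <= threshold - margin:
        return "defer"    # clear signal below the threshold
    return "escalate"     # boundary zone: judgement dominates

for p in (0.12, 0.47, 0.53, 0.91):
    print(f"risk={p:.2f} -> {boundary_decision(p)}")
```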
Clinical and Regulatory Relevance
In healthcare, most predictive systems are evaluated on accuracy metrics, yet deployed through governance frameworks that rely on rules. Clinical pathways, triage protocols, and treatment guidelines often present as pass/fail decision ‘gates’, but are in practice driven by graded judgements of risk tolerance, evidence quality, clinician experience, and institutional norms.
Boundary logic provides a formal lens for modelling this. Rather than asking “What is the predicted risk?”, it asks “How close is this case to the point where different actions become equally defensible, and what factors, whether human, institutional, or technical, are likely to determine the final decision?”
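One way to make that question concrete is the standard decision-theoretic indifference point, sketched below. The costs are hypothetical placeholders: the risk level at which "treat" and "defer" carry equal expected cost is where both actions become equally defensible, and a case's distance from that point is a measure of how contested it is.

```python
# Hedged sketch: how close is a case to the point where different actions
# become equally defensible? Assumes a simple two-action cost model with
# hypothetical costs.

def indifference_point(cost_fp: float, cost_fn: float) -> float:
    """Risk level at which 'treat' and 'defer' have equal expected cost.

    Deferring costs p * cost_fn in expectation; treating costs
    (1 - p) * cost_fp. The actions are equally defensible at
    p* = cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

def defensibility_margin(risk: float, cost_fp: float, cost_fn: float) -> float:
    """Signed distance from the indifference point; near zero means contested."""
    return risk - indifference_point(cost_fp, cost_fn)

# Example: over-treatment costs 1 unit, a missed case costs 4 units,
# so the indifference point sits at 0.2 rather than 0.5.
for p in (0.05, 0.19, 0.21, 0.60):
    m = defensibility_margin(p, cost_fp=1.0, cost_fn=4.0)
    print(f"risk={p:.2f} margin={m:+.2f} {'contested' if abs(m) < 0.05 else 'clear'}")
```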
This perspective underpins modern approaches to:
- Bias detection in clinical decision-making
- Regulatory and HTA simulation using gated evaluation frameworks
- Personalised adaptive learning systems that move away from population averages toward individual baselines
- Ethical reasoning that distinguishes between rule-based compliance and principle-based judgement
Alignment with Contemporary AI Safety and Governance
In current AI research, there is growing emphasis on uncertainty estimation, out-of-distribution detection, and epistemic humility: the ability of a system to recognise when it does not know. These are, in effect, operational forms of boundary logic.
Rather than optimising for confidence, such systems aim to surface areas of indeterminacy: regions where model output, human judgement, and institutional policy intersect, and where the risk of error or misalignment is highest.
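A hedged illustration of that idea, using disagreement across an ensemble as a crude epistemic signal. The three "models" and the disagreement cut-off below are stand-ins, not real trained estimators:

```python
# Illustrative sketch of "epistemic humility" via ensemble disagreement:
# if independently trained models disagree about a case, the system
# reports indeterminacy instead of a confident answer.
from statistics import mean, pstdev

def ensemble_view(case: dict, models) -> dict:
    preds = [m(case) for m in models]
    return {
        "mean_risk": mean(preds),
        "disagreement": pstdev(preds),          # spread as an epistemic signal
        "indeterminate": pstdev(preds) > 0.15,  # illustrative cut-off
    }

# Hypothetical stand-ins for trained risk models.
models = [lambda c: 0.30, lambda c: 0.55, lambda c: 0.72]
print(ensemble_view({"age": 74}, models))
# High disagreement -> the case is surfaced, not silently decided.
```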
This shift reflects a broader transition in AI from prediction-centric design toward decision-centric architecture: systems that not only generate probabilities, but also model how those probabilities are interpreted, acted upon, and governed in real organisational and regulatory environments.
Architectural Implications
This intellectual foundation informs the design of intelligent agents for clinical, pharmaceutical, and regulatory intelligence platforms. The focus is not solely on predictive performance, but on modelling the full decision topology, including the following (a short sketch of one such mechanism follows the list):
- How clinicians and regulators respond to rising or falling risk trajectories over time
- How cognitive biases and heuristics influence actions near critical thresholds
- How institutional rules transform continuous signals into discrete outcomes
- How ethics and governance intervene when model confidence and human confidence diverge
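As a sketch of the third item, the following shows how an institutional gate can turn a continuous risk trajectory into discrete outcomes, with hysteresis so the decision does not flip-flop around a single threshold. The bounds are illustrative placeholders.

```python
# Hedged sketch of one element of the decision topology: a gate that turns
# a continuous risk trajectory into discrete alerts. Hysteresis (separate
# raise and clear bounds) keeps decisions stable near the threshold.

def gate_trajectory(risks, raise_at: float = 0.6, clear_at: float = 0.4):
    """Yield (risk, state) pairs; state changes only on decisive crossings."""
    state = "routine"
    for r in risks:
        if state == "routine" and r >= raise_at:
            state = "alert"      # rising trajectory crosses the upper bound
        elif state == "alert" and r <= clear_at:
            state = "routine"    # must fall well below before standing down
        yield r, state

trajectory = [0.35, 0.55, 0.62, 0.58, 0.45, 0.38]
for r, s in gate_trajectory(trajectory):
    print(f"risk={r:.2f} -> {s}")
# 0.58 and 0.45 stay in "alert": the gate responds to the trajectory,
# not to every small move around a single threshold.
```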
The result is a class of systems designed to operate explicitly in the “grey zones” where most real-world harm, cost, and controversy tend to concentrate.
Contemporary Framing
In today’s language, my work aligns with what is increasingly described as:
- Uncertainty-aware AI
- Probabilistic governance modelling
- Human-AI decision co-architecture
- Ethical and regulatory intelligence systems
The underlying principle remains consistent: in high-stakes domains, the critical design problem is not simply generating better predictions, but building systems that understand and expose the structure of decision-making itself, especially where that structure is most fragile.
Finally, some facts:
I have provided consultancy since 1997 and have since evolved my company, Cassis, to focus on using AI to create decision-support tools for healthcare and pharma/life sciences.
I am a co-founder of Luminalis, an American AI company. We recently spun off a Falls Predictor, which will be commercialised by Elarin Health.
I was a founding director of Volv Global, Switzerland, which focuses on AI and patient identification.
My experience at EDS and Kearney provided excellent grounding.
I was a founding director of Eden Communications in the UK which launched the world’s first digital and interactive health TV channel, “Living Health”, in partnership with UK broadcasters. The channel received a number of awards for innovation.
I acquired direct healthcare experience at Hamilton Health Sciences, McMaster University, in Canada, as head of a department dedicated to improving the quality of clinical services. The work focused on clinical workflow, skill mix, and patient experience. I also tutored in epidemiology in the Faculty of Health Sciences. An amazing place to work!
As Senior Lecturer at HSMC, University of Birmingham, UK, I taught and published on health policy, management decision-making, and priority setting. I advised the European Commission, the Council of Europe, and national governments. I was an Associate Dean of Medicine, Deputy Director of the Public Service MBA, and Director of the Masters in Health Quality Management (quantitative methods / operational research).
I did my doctorate at the University of Toronto, where my study of applied psychology proved consequential.
My Masters at McMaster University was on the application of mathematical logic to reasoning under uncertainty in the absence of discrete choices (boundary logic).