5 Comments
Miguel Wood

Excellent post

Travis Thompson

Appreciate it, Miguel!

Vishal Patel

I’ve reached a similar conclusion, Travis, particularly for AI-enabled workflows that require a modicum of logic and reasoning. Claude Opus is deceptive in its output; it makes the user think it has reasoned, but it’s just semantic acrobatics.

This is particularly troubling for research, where theory and hypotheses need to be logically connected to source documentation with high fidelity. I’m contemplating setting up an ontology layer to make the reasoning links explicit, and then having Claude Code run queries over the graph (as opposed to having Claude parse text). Curious if you think this approach would work and if you can point me to successful case studies.
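The ontology-layer idea above can be sketched in miniature. This is a hypothetical illustration, not a real case study: claims (hypotheses) and source documents become nodes in an explicit graph, "supported-by" links become edges, and an agent queries the graph directly rather than re-parsing prose. All names here (`ReasoningGraph`, the claim and document labels) are invented for the example.

```python
# Hypothetical sketch of an explicit reasoning graph: hypotheses are linked
# to source documents, and fidelity checks become graph queries instead of
# text parsing. Uses only the standard library.
from collections import defaultdict

class ReasoningGraph:
    def __init__(self):
        # claim -> set of supporting source-document IDs
        self.edges = defaultdict(set)

    def link(self, claim, source):
        """Record that `claim` is supported by `source`."""
        self.edges[claim].add(source)

    def supports(self, claim):
        """Return the source documents explicitly linked to a claim."""
        return sorted(self.edges[claim])

    def unsupported(self, claims):
        """Claims with no explicit link back to source documentation."""
        return [c for c in claims if not self.edges[c]]

g = ReasoningGraph()
g.link("H1: drug X reduces relapse", "doc:trial-2021.pdf")
claims = ["H1: drug X reduces relapse", "H2: effect persists at 12 months"]
print(g.unsupported(claims))  # -> ['H2: effect persists at 12 months']
```

The point of the design is that "is this hypothesis grounded?" is answered by an edge lookup with a definite yes/no, rather than by asking a language model to re-derive the connection from text.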

Vinod Ganesan

Brilliant article, Travis. This is so true, yet so often overlooked.

JS036215

Autonomy is less relevant than automation. Enabling agents to read and write decision traces will standardize workflows: exceptions to rules become part of revised rules. Agent participation in workflows will increase before it decreases, because decision traces that capture actions and their reasons allow agent actions to be automated rather than made autonomous.