Employees are also installing agentic systems with unknown potential for autonomous action. Here, CoChat provides a control layer between the LLM and the agent, examining the LLM reasoning that ‘instructs’ the agent’s action. If the ‘instruction’ is considered dangerous (for example, the potential exposure of sensitive data to third parties, or the deletion of personal or enterprise data), CoChat will pause the autonomy and ask the user to explicitly approve or reject the process.

CoChat enforces a human in the loop even where agentic systems are designed to operate without one. Consider OpenClaw – an autonomous personal assistant that directly serves the cause of shadow AI: improved personal performance. Estimates suggest OpenClaw has around 3 million active users. History suggests, metaphorically at least, that it has an amoral mind demanding immediate, unhindered gratification – and this can be problematic.

“People feel the pain of needing to get the most out of AI, wanting to increase their performance productivity,” commented Marcel Folaron, CEO at CoChat. “So, they turn to automated AI tooling, such as OpenClaw and other locally installed tools, but not necessarily with IT’s knowledge. This can be very dangerous. These tools have access to everything on your system, and without the proper control mechanisms, they can run amok.”

The LLM in an agentic system uses its own reasoning power, which is not guaranteed to be perfect, to instruct the agent on what to do next, potentially without any further reference to the user. The LLM undertakes the reasoning that guides the agent’s actions. Agents, which are dynamic, adaptive and stateful, respond and take actions based on the LLM’s reasoning. Without human oversight, this can go very wrong.

“If we identify an action we deem to be dangerous, we delay that action. We ask the user to approve or reject that action, and the next action is directed by the user rather than automatically enacted by the agentic system,” he continued.

The purpose of CoChat is to provide visibility into enterprise shadow AI, to impose governance over it, and to encourage AI teamwork rather than invisible, isolated silos of operation. “CoChat brings the top AI solutions seamlessly into a secure workspace so teams can collaborate more effectively and use these tools with greater transparency and confidence,” said Folaron.

In some ways, CoChat can be understood by analogy with Slack. Slack provides channels that bring individuals into teamwork: if members think others are going astray, they can raise concerns and the issue can be discussed. In CoChat, the performance of different LLMs and agentic systems can similarly be seen and compared.

An individual user might be fooled by an LLM’s innate desire to please its user – to provide the response it assumes the user wants. But other members on the platform might question this and raise their concerns.

CoChat allows each user to run the LLM and agentic system of their choice, and encourages the use of multiple LLMs to detect hallucinations and potential misdirection of agentic systems. Because it is a platform, it doesn’t simply ensure a human in the loop; it allows multiple humans in each loop. The AI used via the platform may technically remain shadow AI, but a layer of visibility, transparency and governance is applied to it.

CoChat is fundamentally an AI collaboration platform designed for teamwork. It allows users to work together in shared chats with leading AI models, custom assistants, and autonomous agents while connecting AI workflows to the tools they already use – interrupting potentially dangerous autonomous actions along the way.

Learn More at the AI Risk Summit | Ritz-Carlton, Half Moon Bay

Related: Can We Trust AI? No – But Eventually We Must
Related: Shadow AI Risk: How SaaS Apps Are Quietly Enabling Massive Breaches
Related: The Shadow AI Surge: Study Finds 50% of Workers Use Unapproved AI Tools
Related: Why Agentic AI Systems Need Better Governance – Lessons from OpenClaw
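The control-layer pattern the article describes – intercepting an agent’s proposed action and pausing anything risky until a human approves – can be sketched in a few lines. This is a minimal illustration of the general human-in-the-loop gate, not CoChat’s actual implementation; all names (`Action`, `ApprovalGate`, `is_dangerous`) are hypothetical.

```python
# Hypothetical sketch of a human-in-the-loop control layer: a gate sits
# between the LLM's proposed action and its execution, pausing anything
# that matches a dangerous pattern until a human approves or rejects it.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Action:
    name: str      # e.g. "delete_file", "send_email"
    target: str    # resource the action touches

# Simple illustrative policy: flag actions that could expose or destroy data.
DANGEROUS = {"delete_file", "upload_external", "share_credentials"}

def is_dangerous(action: Action) -> bool:
    return action.name in DANGEROUS

@dataclass
class ApprovalGate:
    ask_user: Callable[[Action], bool]   # returns True if the user approves
    log: List[str] = field(default_factory=list)

    def execute(self, action: Action, do: Callable[[Action], str]) -> str:
        """Run the action, but pause for human approval if it looks dangerous."""
        if is_dangerous(action):
            self.log.append(f"paused: {action.name} on {action.target}")
            if not self.ask_user(action):
                return "rejected by user"
        return do(action)

# Usage: a stub standing in for the interactive prompt; here the user rejects.
gate = ApprovalGate(ask_user=lambda a: False)
result = gate.execute(Action("delete_file", "/etc/passwd"), lambda a: "done")
```

A real deployment would replace the `DANGEROUS` set with richer policy checks (data classification, destination analysis) and `ask_user` with an actual UI prompt, but the shape – classify, pause, defer to a human, log – is the pattern described above.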

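The article also mentions running multiple LLMs to catch hallucinations before they misdirect an agent. One simple way to realize that idea is a majority vote across independent models; the sketch below uses stub functions in place of real model API calls, and the `consensus` helper is an illustrative assumption, not a documented CoChat feature.

```python
# A minimal cross-checking sketch: ask several independent LLMs the same
# question and only accept an answer that a majority agree on. The model
# callables here are stand-in stubs for real model API calls.
from collections import Counter
from typing import Callable, List, Optional

def consensus(prompt: str,
              models: List[Callable[[str], str]],
              threshold: float = 0.5) -> Optional[str]:
    """Return the majority answer, or None if no answer clears the threshold."""
    answers = [model(prompt) for model in models]
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count / len(answers) > threshold else None

# Stub models: two agree, one "hallucinates" a different answer.
models = [lambda p: "approve", lambda p: "approve", lambda p: "reject"]
decision = consensus("Should the agent delete this file?", models)
```

If no answer wins a majority, the function returns `None`, which a control layer could treat as a signal to escalate to a human rather than let the agent proceed.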
Source: SecurityWeek