AI Security
Prompt injection, LLM security, AI-powered phishing, and deepfake fraud. Practical takes on real AI threats and how to build safer systems.
OpenClaw vs Hermes Agent
A learner asked me this after a workshop last week.
Honored to speak at the largest OpenClaw event at Amazon Web Services (AWS) last night, organised by OpenClaw Singapore. Thanks Lionel Sim for the invitation.
I was tasked to address the elephant in the room - security. I have converted my presentation to an article with slides for anyone interested.
Your OpenClaw Agent Is One Message Away from Getting Hacked
I gave a talk yesterday on OpenClaw security, at the largest OpenClaw event at Amazon Web Services (AWS), with 400 audience, organized by OpenClaw Singapore....
Am I the only one feeling uneasy building AI agents with OpenCrawl after testing it for a while?
I've been building AI agents before OpenClaw, and building skills using Claude Code for a while. It's powerful. When I learned about OpenClaw, I knew exactly...
Your AI agent will get prompt injected sooner or later, because it is easier than most people thought.
Most people think prompt injection needs a carefully crafted adversarial prompt by an experienced hacker. It does not. Someone who understands how LLMs work ...
10 Ways to Reduce the Risk of Running OpenClaw (or Any AI Agent)
The safe answer comes from Peter Steinberger, OpenClaw's creator himself. He said OpenClaw is designed as a personal assistant - one user to one or many agen...
Why Your OpenClaw Agent Is One Message Away from Getting Hacked?
A stranger sent a very long, sophisticated-looking message to her agent. It was filled with detailed research instructions about finance news, complete with ...
Are AI labs trying too hard to anthropomorphise AI to keep the illusion of AGI going?
--
If you are using OpenClaw with WhatsApp, there is one risk nobody is talking about.
Getting your WhatsApp account permanently banned.
Most people design AI agent systems wrong. They put AI agents inside the security boundary instead of outside. This exposes their system to prompt injection.
--
Vibe coders, this old news will happen to you sooner or later.
Unless you set up your project correctly.
AI Coding Assistants Have a Security Blind Spot
A few months ago, I wrote about a non-technical founder whose SaaS got exploited right after he publicly showed his build process using Cursor (https://lnkd....
Claude Code and OpenAI Codex Do Track You
Recently, after hitting my Claude Code Max limit, I switched over to OpenAI Codex to continue my work.
When AI Hallucination Becomes A Security Feature.
Two months ago, something unexpected happened with our AI Lead Response agent.
Sam Altman Announces ChatGPT Pulse
If this gains traction, OpenAI is no longer just an AI company. It’s evolving into a media and lifestyle company, shaping what we see and think about each da...
"Guys, I’m under attack"
I came across this post where a founder shared how his SaaS got exploited right after he started sharing how he built his SaaS using Cursor.
I was doing vibe coding and saw AI generated this code.
Notice anything?
Too many people are wasting energy sending soulless cold messages crafted by AI.
The best I could do to recover some value from that wasted energy is to turn it into AI security research.
Why llms.txt Is a Bad Idea for the Web
But seeing "SEO gurus" promote it on authoritative platforms like Search Engine Land and Yoast SEO worries me.
**GenAI Pitfalls**
ChatGPT has recently encountered various outages. These outages took down our AI services and disrupted business operations, both for us and our clients.
OpenClaw vs Hermes Agent
A learner asked me this after a workshop last week.
Your OpenClaw Agent Is One Message Away from Getting Hacked
I gave a talk yesterday on OpenClaw security, at the largest OpenClaw event at Amazon Web Services (AWS), with 400 audience, organized by OpenClaw Singapore....
Your AI agent will get prompt injected sooner or later, because it is easier than most people thought.
Most people think prompt injection needs a carefully crafted adversarial prompt by an experienced hacker. It does not. Someone who understands how LLMs work ...
Why Your OpenClaw Agent Is One Message Away from Getting Hacked?
A stranger sent a very long, sophisticated-looking message to her agent. It was filled with detailed research instructions about finance news, complete with ...
If you are using OpenClaw with WhatsApp, there is one risk nobody is talking about.
Getting your WhatsApp account permanently banned.
Vibe coders, this old news will happen to you sooner or later.
Unless you set up your project correctly.
Claude Code and OpenAI Codex Do Track You
Recently, after hitting my Claude Code Max limit, I switched over to OpenAI Codex to continue my work.
Sam Altman Announces ChatGPT Pulse
If this gains traction, OpenAI is no longer just an AI company. It’s evolving into a media and lifestyle company, shaping what we see and think about each da...
Too many people are wasting energy sending soulless cold messages crafted by AI.
The best I could do to recover some value from that wasted energy is to turn it into AI security research.
**GenAI Pitfalls**
ChatGPT has recently encountered various outages. These outages took down our AI services and disrupted business operations, both for us and our clients.
Honored to speak at the largest OpenClaw event at Amazon Web Services (AWS) last night, organised by OpenClaw Singapore. Thanks Lionel Sim for the invitation.
I was tasked to address the elephant in the room - security. I have converted my presentation to an article with slides for anyone interested.
Am I the only one feeling uneasy building AI agents with OpenCrawl after testing it for a while?
I've been building AI agents before OpenClaw, and building skills using Claude Code for a while. It's powerful. When I learned about OpenClaw, I knew exactly...
10 Ways to Reduce the Risk of Running OpenClaw (or Any AI Agent)
The safe answer comes from Peter Steinberger, OpenClaw's creator himself. He said OpenClaw is designed as a personal assistant - one user to one or many agen...
Are AI labs trying too hard to anthropomorphise AI to keep the illusion of AGI going?
--
Most people design AI agent systems wrong. They put AI agents inside the security boundary instead of outside. This exposes their system to prompt injection.
--
AI Coding Assistants Have a Security Blind Spot
A few months ago, I wrote about a non-technical founder whose SaaS got exploited right after he publicly showed his build process using Cursor (https://lnkd....
When AI Hallucination Becomes A Security Feature.
Two months ago, something unexpected happened with our AI Lead Response agent.
"Guys, I’m under attack"
I came across this post where a founder shared how his SaaS got exploited right after he started sharing how he built his SaaS using Cursor.
I was doing vibe coding and saw AI generated this code.
Notice anything?
Why llms.txt Is a Bad Idea for the Web
But seeing "SEO gurus" promote it on authoritative platforms like Search Engine Land and Yoast SEO worries me.