The main purpose of ethical hackers in generating new jailbreaks is to help developers produce more effective guardrails – essentially, to improve the process of hardening the AI. It’s working, to a degree.

“Jailbreaking has become a lot more difficult – like, a lot – over the last two years,” says Melo. “In earlier years, you could just say, ‘Ignore previous instructions. Do this…’ And it worked. Now you’ve really got to learn your craft and introduce complex context manipulation to get around the protections.”

But he adds, “There’s an infinite number of ways to perform a jailbreak, limited only by the creativity of the attackers.” So, could AI ever be secured against jailbreaks?

“If AI reached a final, unchanging state, maybe,” he says. “But like the internet, AI evolves constantly. You can secure one version, but as new features are added, new vulnerabilities appear. Saying AI will ever be fully secure against jailbreaks is like saying the internet will one day be completely immune to hackers. As long as there’s progress, there will be both improvements and new risks. The key is that AI is far more secure today than it was two years ago, and two years from now, it will likely be more secure than it is now. It’s an ongoing cat-and-mouse game.”

By disclosing existing jailbreaks, Melo contributes to making current AI more difficult to attack.

Data poisoning

While jailbreaking can be used to extract confidential or sensitive data from an AI model, data poisoning seeks to cause the model to generate false or harmful outputs by corrupting the data from which it learns. The former is an outside-in attack; the latter is an inside-out attack.
It’s a bit like ‘rubbish in, rubbish out’ – poison in, poison out.

Successful data poisoning could cause anything from a general degradation in the performance of the model to specific harmful consequences, like a misdiagnosis from medical equipment or a dangerous misinterpretation of the environment by an autonomous vehicle.

Data poisoning is just one item on a checklist of around 15 basic AI issues that Melo probes. While developers have statistical and analytical tools to look for evidence of data poisoning, Melo, lacking access to those tools, concentrates on probing the potential for data poisoning via adversarial techniques.

For example, some bots ingest the user prompts they receive for their ongoing training. “In my prompts,” explains Melo, “I might continually claim the moon landing is fake. After a while, if the bot says ‘the moon landing is fake’ in response to a direct query, I know that this model is susceptible to data poisoning via prompt data ingestion.”

A major problem for AI developers is that human knowledge is not static – it grows and changes. If the model does not stay current with new thinking, it could return old and now debunked ideas.

A common and important source of new data for continuous training is the internet, which the model scrapes either widely or selectively. “Bots effectively trust websites,” says Melo. Developers may seek to include checks and balances, but an attacker would attempt to avoid these blocks.

“I could create a completely new website of my own and include keywords I know will be of interest and attractive to the bot I am testing. If I later see responses that include data that could only have come from my website, I know that the bot is susceptible to this type of data poisoning.”

Staying on the straight and narrow

All ethical hackers, pentesters, and red teamers have, or acquire, the same set of skills used by malicious hackers.
While many ‘shady’ young hackers become legitimate members of the cybersecurity fraternity as they mature, very few then turn their back on legitimacy to sell their skills on the dark web or otherwise put them to insalubrious use.

The primary motivation for Joey Melo’s own brand of hacking seems to be a curiosity-driven desire to control a chosen environment without altering that environment, all done for fun. There has never been any malicious intent. Could he now be tempted to sell a discovered vulnerability or exploit chain on the dark web?

“No,” he says. “Risking my career, reputation, and integrity for quick money on the dark web makes no sense to me. What I consider good is ethical, responsible, transparent, and accountable. Responsible disclosure aligns with those values, while the dark web represents the opposite. I’d rather live without guilt or regret and take the right path; and, right now, responsible disclosure is that path. I believe true virtue lies in having the ability to cause harm but consciously choosing not to. That’s the standard I hold myself to.”

Learn More at the AI Risk Summit at the Ritz-Carlton, Half Moon Bay

Related: Hacker Conversations: Rachel Tobac and the Art of Social Engineering
Related: Hacker Conversations: Joe Grand – Mischiefmaker, Troublemaker, Teacher
Related: Hacker Conversations: Rob Dyke on Legal Bullying of Good Faith Researchers
Related: Hacker Conversations: HD Moore and the Line Between Black and White


Source: SecurityWeek