A Practical Guide to Responsible Disclosure
You've found a security vulnerability in a company's system. Maybe during a CTF, a bug bounty, or just while using a product. Now what?
This guide covers how to disclose it responsibly — protecting yourself legally, giving the company a fair chance to fix it, and ensuring the vulnerability actually gets addressed.
What is responsible disclosure?
Responsible disclosure (also called coordinated vulnerability disclosure, or CVD) is the practice of reporting a security vulnerability to the affected organization privately, giving them time to fix it before making it public. It's a middle path between two problematic extremes: never disclosing (the company never fixes it, users stay at risk) and immediate public disclosure (users are at risk before a patch exists).
The standard timeline is 90 days — popularized by Google Project Zero. After 90 days, you can publish regardless of whether the company has patched, which creates the right incentives for vendors to actually fix issues promptly.
Step 1 — Find the right contact
Check for a security.txt file (standardized in RFC 9116) at /.well-known/security.txt or the legacy /security.txt — this is the standard place for companies to publish their disclosure contact info. If that doesn't exist, try security@company.com, check their bug bounty program (HackerOne, Bugcrowd), or look for a dedicated security page.
Avoid posting on social media, forums, or reaching out through customer support. Those channels aren't equipped to handle security reports and may expose the issue before it's fixed.
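To illustrate, here's a minimal Python sketch (standard library only) that checks both security.txt locations and pulls out the Contact fields. The field name follows RFC 9116, but the helper names and the example.com address are illustrative, not from any real tool:

```python
# Sketch: locate a disclosure contact via security.txt (RFC 9116).
# Helper names here are illustrative, not from any particular library.
from urllib.request import urlopen

# Canonical location first, then the legacy root path.
CANDIDATE_PATHS = ["/.well-known/security.txt", "/security.txt"]

def parse_security_txt(text: str) -> list[str]:
    """Extract the values of all Contact: fields from a security.txt body."""
    contacts = []
    for line in text.splitlines():
        line = line.strip()
        if line.lower().startswith("contact:"):
            contacts.append(line.split(":", 1)[1].strip())
    return contacts

def find_contacts(base_url: str) -> list[str]:
    """Try each candidate path and return the first set of contacts found."""
    for path in CANDIDATE_PATHS:
        try:
            with urlopen(base_url + path, timeout=5) as resp:
                return parse_security_txt(resp.read().decode("utf-8"))
        except OSError:
            continue  # 404, timeout, TLS error: try the next path
    return []

example = "Contact: mailto:security@example.com\nExpires: 2026-01-01T00:00:00Z\n"
print(parse_security_txt(example))  # ['mailto:security@example.com']
```

If the file lists several Contact fields, RFC 9116 orders them by the organization's preference, so try the first one first.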
Step 2 — Write a clear report
A good disclosure report includes:
- Vulnerability type — IDOR, XSS, SSRF, etc.
- Affected endpoint or component — exact URL or file
- Reproduction steps — step-by-step, reproducible
- Impact assessment — what can an attacker do with this?
- Evidence — screenshots, request/response pairs, PoC code
- Your contact info — so they can follow up
Keep it factual and professional. You're not trying to impress anyone — you're trying to help them understand and fix the problem quickly.
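Putting those pieces together, a report skeleton might look like the following. Every detail here (the endpoint, IDs, and email address) is a made-up placeholder, not a real system:

```text
Subject: [Security] IDOR in GET /api/v1/invoices/{id} exposes other users' invoices

Vulnerability type: IDOR (broken object-level authorization)
Affected component: GET /api/v1/invoices/{id}

Reproduction steps:
  1. Log in as user A and note an invoice ID you own, e.g. 1001.
  2. While authenticated as user A, request GET /api/v1/invoices/1002.
  3. The response returns user B's invoice.

Impact: any authenticated user can enumerate IDs and read every
customer's invoices (names, addresses, amounts).

Evidence: attached request/response pair.

Contact: jane@example.com — happy to answer questions or retest a fix.
Planned public disclosure: [date 90 days from this report].
```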
Step 3 — Set a timeline
State clearly when you intend to publish: “I plan to disclose this publicly on [date 90 days from now] or earlier if a patch is released.” This isn't a threat — it's a commitment to the community that the vulnerability will eventually be public knowledge, which motivates the company to act.
Be reasonable: complex infrastructure vulnerabilities take longer to patch than a simple XSS. You can extend the timeline if the company is making clear progress and communicating transparently.
Legal considerations
In most jurisdictions, accessing a system without authorization is illegal even if your intent is to help. The legal landscape varies:
- In the US, the CFAA is broad and has been applied aggressively against researchers
- In the EU, Directive 2013/40/EU criminalizes unauthorized system access
- In France, Articles 323-1 to 323-8 of the Penal Code cover this
If you found the vulnerability while using the service normally (as an authenticated user) and didn't access other users' data, your legal exposure is generally lower. If you went beyond what was needed to confirm the vulnerability exists, that's a different story. When in doubt, consult a lawyer before disclosing.
What to do if you're ignored
Wait. Follow up once after a week, then again at the 45-day mark. If you're still getting no response at 90 days, publish with full technical details. This is the correct outcome — the incentive structure only works if researchers follow through on their timeline commitments.
Consider notifying CERT/CC, national CERTs, or relevant industry bodies (like CISA in the US) if the vulnerability is critical and the vendor is unresponsive. They can sometimes facilitate contact.
Found a vulnerability in IntrudR? See our Responsible Disclosure Policy for how to report it.