How to Write an Effective Security Policy for GitHub Repositories

A repository security policy is one of those documents that people often add because GitHub expects it, not because they have thought through what it needs to do. That is how you end up with policies that say little more than "email us if you find a problem" or, worse, tell researchers to open a public issue for a vulnerability report. An effective security policy is not a box-checking exercise. It is an operational document: it tells security researchers how to contact you safely, tells users which versions you still support, and tells everyone what kind of response process they can reasonably expect.

This post walks through how to write a security policy that is actually useful on GitHub: what sections it needs, how specific to be, which mistakes to avoid, and a practical SECURITY.md template you can adapt for your own repositories. The goal is not just to help you publish the file, but to help you publish one that will still read as clear, credible, and operational when someone actually needs it.

Useful beyond GitHub, too!

Although this post focuses on GitHub, most of the guidance applies equally well to GitLab and other source hosting platforms. The platform-specific details, such as where the policy is surfaced and how private vulnerability reporting works, may differ, but the core job of a security policy stays the same: define a private reporting path, set expectations, and make support boundaries clear.

What the Security Policy Does on GitHub

GitHub recognizes SECURITY.md as a community health file. When present, it surfaces the security policy in places where maintainers, users, and researchers expect to find security reporting guidance:

  • The repository's Security and quality tab, under the Policy section
  • The repository's community profile / community standards view

That placement matters. A researcher who believes they have found a vulnerability should not need to guess whether you want a GitHub issue, an email, a security advisory, or a carrier pigeon. Your security policy should answer that question immediately.

If you want the platform mechanics straight from the source, GitHub documents security policies as part of its vulnerability reporting configuration docs.

As of this writing, adding the file through the web interface is also straightforward: open the repository and click the Security and quality tab (if that tab is not visible, open the overflow dropdown and select Security and quality there). In the left sidebar, under Reporting, choose Policy, then click Start setup to create SECURITY.md.

Where GitHub Looks for It

GitHub checks three locations for a repository-level security policy:

  1. SECURITY.md at the repository root
  2. docs/SECURITY.md
  3. .github/SECURITY.md

If your account or organization uses a dedicated .github repository for default community health files, GitHub can also fall back to the SECURITY.md defined there for repositories that do not have their own policy. That is useful for consistency, but it also means the security policy needs to be general enough to apply across multiple repositories.

Tip

Default community health files from a .github repository only apply when that special .github repository is public. If it is private, GitHub will not use its SECURITY.md as the default policy for other repositories.

Tip

A default SECURITY.md in a .github repository is a good baseline for many projects, but repositories with materially different support windows, report channels, or response processes should still have their own repository-specific policy.

What an Effective Security Policy Needs to Do

At a minimum, a good security policy should answer five questions:

  1. Which versions are still supported with security fixes?
  2. How should someone report a vulnerability privately?
  3. What information should they include in the report?
  4. What should they expect from you after reporting it?
  5. What reporting paths are explicitly not appropriate?

If the document does not answer those five questions clearly, it is probably not ready.

Just as importantly, the security policy is an intake and expectation-setting document. It is not your full incident response plan, not your complete disclosure policy, and not a substitute for an internal security process if your project needs one.

The Sections That Matter Most

If you want a practical way to evaluate your draft, ask whether each major section does a distinct job:

  • The supported versions section defines your maintenance boundary
  • The reporting section gives people a private intake path
  • The report contents section improves triage quality
  • The response expectations section tells people what happens next
  • The disclosure section explains how validated issues are communicated

That structure keeps the document focused. Each section answers a different operational question, and together they create a reporting path that is clear enough to use under pressure.
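
If you review drafts regularly, that section-by-section check can be automated in a rough way. This is a minimal linter sketch; the required heading names are assumptions based on the sections discussed above, so adapt them to your own titles.

```python
import re

# Headings a draft should contain; these names are illustrative,
# matching the template used later in this post.
REQUIRED_SECTIONS = [
    "Supported Versions",
    "Reporting a Vulnerability",
]

def missing_sections(markdown: str) -> list[str]:
    """Return required section headings absent from a SECURITY.md draft."""
    headings = {
        m.group(1).strip()
        for m in re.finditer(r"^#{1,6}\s+(.+)$", markdown, re.MULTILINE)
    }
    return [s for s in REQUIRED_SECTIONS if s not in headings]
```

A check like this cannot tell you whether the sections are any good, only whether they exist; the five-question test above still has to be applied by a human.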

Start with Supported Versions

GitHub's generated SECURITY.md examples usually start with a supported versions table, and that is a good default. A security policy is not only about how to report a problem; it is also about setting expectations about where you will spend maintenance time.

A simple version table is often enough:

## Supported Versions

| Version | Supported          |
|---------|--------------------|
| 3.x     | :white_check_mark: |
| 2.x     | :white_check_mark: |
| 1.x     | :x:                |

This section matters for two reasons.

First, it prevents ambiguity. If you stopped patching 1.x a year ago, say so directly. Users running unsupported versions may still report issues, but they should not expect a security fix on that branch.

Second, it helps you make consistent triage decisions. A vulnerability affecting only an unsupported branch is still important, but the remediation path may be "upgrade to a supported release" rather than "we will backport a patch."

Note

If your repository does not use semantic versioning or formal release lines, adapt the section to your actual support model. A statement like "only the latest release is supported" is better than pretending you have a versioning policy that you do not.
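
The triage decision above can also be made mechanical, which helps keep it consistent across reports. This sketch assumes the example table from this section; the support map and function name are mine.

```python
# Support status per major release line, mirroring the example table above.
SUPPORTED_LINES = {"3": True, "2": True, "1": False}

def triage_path(version: str) -> str:
    """Suggest a remediation path based on the affected version's support status."""
    major = version.split(".", 1)[0]
    if SUPPORTED_LINES.get(major, False):
        return "fix on supported branch"
    return "advise upgrade to a supported release"
```

The point is not the code itself but the consistency: the answer for a 1.x-only report should not depend on who happens to triage it that day.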

Define a Private Reporting Path

The single most important thing a security policy does is keep vulnerability reports out of public issues and pull requests.

A strong reporting section should say this explicitly:

## Reporting a Vulnerability

Please do not report security vulnerabilities through public GitHub issues, discussions, or pull requests.

Then give the correct private path.

On GitHub, the best option is usually one of these:

  • Private vulnerability reporting, if enabled for the repository
  • A dedicated security email alias, such as a monitored `security@` address at your project's domain
  • A formal disclosure intake system if your organization uses one

GitHub's private vulnerability reporting flow is usually the cleanest option for a repository that actively uses GitHub's security tooling, because it keeps initial disclosure private inside the platform and aligns well with GitHub Security Advisories.

If you offer email reporting, prefer a role-based address over a personal inbox. A personal inbox creates an operational dependency on one person being available, attentive, and still involved with the project months from now.

Pick one primary reporting path and make it the default in the document. Only list a secondary fallback path if you actually monitor and operate both.

Verify the Reporting Channel

If your policy tells reporters to use GitHub private vulnerability reporting, make sure the feature is enabled for the repository before publishing the policy. GitHub documents the setup and behavior here: Privately reporting a security vulnerability.
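
If you would rather script that check than click through settings, the status can be queried over GitHub's REST API. This is a sketch against the `GET /repos/{owner}/{repo}/private-vulnerability-reporting` endpoint and its `enabled` response field as I understand them from GitHub's REST docs; verify the path and response shape against the current API reference before relying on it.

```python
import json
import urllib.request

API = "https://api.github.com"

def pvr_enabled(owner: str, repo: str, token: str) -> bool:
    """Ask GitHub whether private vulnerability reporting is enabled
    for the repository. Requires a token with repo read access."""
    req = urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/private-vulnerability-reporting",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return parse_pvr_response(resp.read().decode())

def parse_pvr_response(body: str) -> bool:
    """Extract the assumed 'enabled' flag from the API response body."""
    return bool(json.loads(body).get("enabled", False))
```

Running a check like this in CI against every repository that points reporters at private vulnerability reporting catches the embarrassing case where the policy and the settings disagree.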

Tell Reporters What to Include

Security reports are dramatically easier to triage when you ask for the right information upfront. You do not need a giant questionnaire, but you do need enough detail to reproduce and assess the issue.

I like to ask for:

  • A clear description of the vulnerability
  • The affected version, tag, or commit
  • Steps to reproduce
  • Proof-of-concept code or payloads, if applicable
  • The expected security impact
  • Any proposed mitigations or workarounds already identified

In Markdown, that can look like this:

When reporting a vulnerability, please include:

- The affected project, version, tag, or commit SHA
- A description of the issue and why you believe it is security-sensitive
- Reproduction steps or a proof of concept
- Any relevant logs, screenshots, or payloads
- The potential impact
- Suggested mitigations or fixes, if known

That list does not need to be exhaustive. It only needs to reduce the number of "can you clarify what you mean?" exchanges after the report lands.
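
If intake arrives in a structured form, the same checklist can gate it automatically. This is an illustrative sketch; the field names are mine, chosen to mirror the bullet list above.

```python
# Fields the reporting section above asks for; names are illustrative.
EXPECTED_FIELDS = [
    "affected_version",
    "description",
    "reproduction_steps",
    "impact",
]

def missing_fields(report: dict) -> list[str]:
    """Return expected report fields that are absent or empty."""
    return [f for f in EXPECTED_FIELDS if not report.get(f)]
```

Anything this flags becomes the first follow-up question, asked once, rather than a multi-round clarification exchange.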

Set Response Expectations

Many security policies stop after telling people where to send a report. That is not enough. Researchers also need to know what kind of response window to expect.

A good security policy should describe:

  • When you expect to acknowledge receipt
  • When you expect to begin triage
  • How you will communicate follow-up questions
  • What happens if the report is confirmed

Here is a simple, credible example:

You can expect an acknowledgment within 3 business days.

After acknowledgment, we will assess the report and follow up with next steps. If the issue is
confirmed, we will work on a fix and coordinate disclosure timing with the reporter when
appropriate.

The important word there is credible. Do not promise a 24-hour response time if this is a personal open-source project that you maintain on nights and weekends. A realistic response window that you consistently meet is better than an aggressive one you constantly miss.
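
A stated window like "3 business days" is also easy to track mechanically, which is how you find out whether you actually meet it. A minimal sketch, assuming a Monday-to-Friday calendar with no holidays:

```python
from datetime import date, timedelta

def ack_deadline(received: date, business_days: int = 3) -> date:
    """Compute the acknowledgment deadline, counting Mon-Fri only."""
    day = received
    remaining = business_days
    while remaining > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return day
```

A report received on a Friday is due the following Wednesday under this rule, which is exactly the kind of concrete answer a reporter appreciates when they are deciding whether to wait or nudge.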

Explain Your Disclosure Approach

You do not need a full vulnerability management programme for a useful security policy, but you should say a little about how disclosure works after validation.

For example:

  • Will you publish a GitHub Security Advisory for confirmed vulnerabilities?
  • Will you coordinate disclosure timing with the reporter?
  • Will you patch first and disclose second?
  • Will unsupported versions receive fixes?

A short paragraph is enough:

If a report is validated, we will work on a fix and release it as soon as practical. We may publish
a GitHub Security Advisory once remediation details are ready to share publicly.

That gives reporters a basic sense of what happens next without overcommitting to a detailed incident response playbook.

If you do use GitHub Security Advisories, it helps to say so plainly. Reporters then know there is an established path for coordinated disclosure and public publication once a fix is ready.

Be Explicit About What Not to Do

This is where many security policies are too polite to be useful.

Spell out the paths that are not appropriate for vulnerability reporting:

  • Public issues
  • Public pull requests
  • Public discussions
  • Social media
  • Direct messages to maintainers' personal accounts

That is not being unfriendly. It is protecting users and maintainers from accidental public disclosure.

You can also state behavioral boundaries if needed:

  • Do not include production secrets or private customer data in reports
  • Do not perform destructive testing against shared hosted environments
  • Do not use public exploit releases before coordinated disclosure if the issue is unpatched

Keep the Scope Honest

Some repositories try to make a security policy sound like a corporate bug bounty policy. That is usually a mistake unless there really is a formal programme behind it.

If you do not have:

  • a bug bounty,
  • a staffed security team,
  • a 24/7 response process,
  • or formal safe-harbor language reviewed by counsel,

do not imply that you do.

Be straightforward instead. For many open-source repositories, a perfectly respectable policy is:

  • private reports only,
  • best-effort triage,
  • supported versions listed clearly,
  • coordinated disclosure when practical.

That is enough.

A Short Pointer File Can Also Be Valid

Not every repository needs a fully self-contained security policy. If your project follows a central company-wide or organization-wide vulnerability disclosure process, a short repository policy that points to the canonical external policy can be perfectly valid.

This works especially well when:

  • Multiple repositories share one disclosure process
  • A company security team, rather than individual maintainers, handles intake
  • The authoritative process lives on a company website rather than in a single repository

In that case, keep the repository file short but explicit:

# Security Policy

Please do not report security vulnerabilities through public GitHub issues, discussions, or pull
requests.

For vulnerability reporting instructions, response expectations, and disclosure guidance, refer to
the organization's security policy:

https://example.com/security

The key requirement is that the linked page must be current, monitored, and clear. A short pointer file is only useful if it takes the reader somewhere operationally better than the repository file itself.

If the repository has exceptions to the central policy, such as a different support window or a different reporting contact, say so in the repository file before sending the reader elsewhere. The pointer approach works best when it reduces duplication without hiding repository-specific reality.

Common Mistakes

These are the patterns I see most often in weak security policies.

Telling people to open a public issue

This is the big one. If a vulnerability report can land in a public tracker before you have even looked at it, your policy is not doing its job.

No supported versions section

Without a support statement, every report implicitly looks like it applies to everything. That creates confusion for users and maintainers alike.

Using an unmonitored contact address

A dedicated `security@` alias looks professional, but only if someone actually reads it. A monitored alias with clear routing is better than a polished dead mailbox.

Promising unrealistic response times

"We respond within 24 hours" sounds reassuring until you fail to do it repeatedly. Promise what you can sustain.

Being too vague about the reporting process

"Contact us privately" is not enough. Privately how? Through which channel? With what information? Who responds? Ambiguity creates delay.

Asking for too much formalism

Do not require encrypted mail with rotating keys, highly structured CVSS scoring, or complex legal language unless your project truly has the process maturity to support that. Friction at intake time often means good-faith reporters give up.

A Practical SECURITY.md Template

Here is a practical baseline template for a GitHub repository:

# Security Policy

## Supported Versions

| Version | Supported          |
|---------|--------------------|
| 3.x     | :white_check_mark: |
| 2.x     | :white_check_mark: |
| 1.x     | :x:                |

## Reporting a Vulnerability

Please do not report security vulnerabilities through public GitHub issues, discussions, or pull
requests.

Instead, report vulnerabilities through GitHub private vulnerability reporting for this repository.

If private vulnerability reporting is unavailable or unusable for your report, email the
maintainers at `security@example.com` (replace this placeholder with your monitored address).

When reporting a vulnerability, please include:

- The affected version, tag, or commit SHA
- A description of the issue and why you believe it is security-sensitive
- Steps to reproduce or a proof of concept
- Any relevant logs, payloads, or screenshots
- The potential impact
- Any suggested mitigations or fixes, if known

You can expect an acknowledgment within 3 business days.

After acknowledgment, we will assess the report and follow up with next steps. If the issue is
confirmed, we will work on a fix and coordinate disclosure timing with the reporter when
appropriate.

If a report is validated, we may publish a GitHub Security Advisory once remediation details are
ready to share publicly.

This template is intentionally modest. It does not try to be a bug bounty programme, a full incident response plan, or a legal safe-harbor statement. It is a practical starting point for a normal GitHub project.

When to Go Beyond the Basic Template

Some projects need more than the baseline.

You may want additional sections if:

  • You maintain a hosted service and need testing boundaries
  • You have multiple products or repositories with different security contacts
  • You publish CVEs or GitHub Security Advisories regularly
  • You have legal safe-harbor language reviewed by counsel
  • You operate a formal bug bounty or disclosure programme

In those cases, consider adding sections for:

  • Scope and out-of-scope targets
  • Safe-harbor language
  • Disclosure timing expectations
  • Severity classification
  • Encryption preferences
  • Advisory or CVE publication process

The key is still the same: only document the process you can actually operate.


An effective security policy is short, specific, and operational. It tells people how to contact you without putting users at risk, sets realistic expectations, and makes clear where your support commitments begin and end.

That is the real standard to aim for on GitHub. Not elegance for its own sake, not corporate security theatre, just a clear process that works when somebody finds a real problem.