Guidelines

This section contains various guidelines for contributing to Authelia. Some guidelines are enforced via automated processes that provide feedback in the PR, but this does not cover every situation. You will find both the automated guidelines and the manual ones in this section.

While contributors are expected to aim to follow all of these guidelines, we understand there are logical exceptions to every guideline, and if an exception makes sense we’re likely to agree with you. So if you find a situation where it doesn’t make sense to follow one, just let us know your reasoning when you make a PR if it’s not obvious.

General Guidelines

Some general guidelines include:

  • It’s recommended that people wishing to contribute discuss their intended changes before contributing
    • This helps avoid people doubling up on contributions
    • This helps avoid conflicts between contributions
    • This helps avoid contributors wasting their limited time on a contribution that may not be accepted

Generative AI Guidelines

We welcome the use of generative AI by our contributors in a general sense. However, we have several rules which dictate the way in which it is used. It’s an amazing tool that can save countless hours of work, but we want to ensure that it is used responsibly.

These rules form part of, and augment, our Code of Conduct. As such, these rules may be enforced using the remediation process described in the Code of Conduct.

  1. The human author is 100% responsible for the generated content and every element of the proposed change.
  2. The content of the proposed change must be reviewed by a human prior to making a pull request, and:
    1. The relevant linters and tests must pass.
    2. If you used AI tools in the creation of the content you must explicitly disclose this fact in the first line of the description of the pull request.
    3. You must fully understand the content of the proposed change. Inability to explain any given change may result in the pull request being rejected summarily, especially if the reasons for the change cannot be articulated in a clear manner.
  3. Neither the reviewers nor the author of the pull request may use generative AI in the formal review process itself, i.e. when questions are asked, changes are requested, or responses to the reviewers are made. Any use of AI tools within the review process must be explicit and assistive in nature.
  4. Large changes must not solely be produced by generative AI.
  5. The generative AI tools or their companies must not be listed as participants in the change via a commit trailer, e.g. in the Co-authored-by, Signed-off-by, Reviewed-by, Reported-by, Assisted-by, Co-developed-by, or similar trailers.

Where assistive tools are used in the review process (e.g. we currently use CodeRabbit), we suggest not blindly accepting the suggested changes. Instead, either wait for a reviewer to agree with the changes after they perform an assessment, perform an assessment yourself, or ask whether the maintainers believe the changes are acceptable.

It’s important to note that this is not a comprehensive list of rules. Users of the technology should be aware of its limitations and the limitations of the tools used to generate the content, and should use these tools responsibly.

Guidelines similar to these are very common in the open source community, and while that is not a rational argument for these guidelines, they are a good starting point for this fairly new phenomenon. We expect these guidelines and ideas will evolve over time. Regardless of your personal view of generative AI, we expect community members to abide by these guidelines as a matter of professionalism.

There are a few reasons for these rules. In no particular order:

  1. Several studies have clearly indicated that while these tools are getting better in their general outputs, they are not getting better at generating secure code. In fact, many studies indicate that more than 40% of all code generated by AI has significant security vulnerabilities. It is imperative that in a project like this we are fully aware of any additional considerations we must make in the review process.
  2. There is not a lot of clarity around the liability and legal elements of these contributions. In particular, very few countries recognize the ability to legally license or copyright content unless it is made by human input, and some countries outright reject this. This is probably highly dependent on the jurisdiction.
  3. It’s also unclear if the code generated by AI can be claimed as being copyrighted by the author of the content used to train the AI model, or the owners of the AI model themselves. This is probably highly dependent on the jurisdiction.
  4. We want to know we’re interacting with an actual human when we’re resolving concerns about a change.