
Ethical Considerations for Civilian AI Developers Using Open-Source Military Data

The bibliography is provided in citations.bib. All sources were freely accessible, with no paywalls, as of 2025-04-28.

Civilian AI developers working with open-source military data must prioritize ethical and legal considerations. While data availability is crucial, the potential for harm is significant, especially when AI-driven decisions impact lives. The NSCAI Final Report (n.d.) outlines the risks of using AI in military operations, emphasizing accountability and compliance with international humanitarian law (IHL).

To mitigate these risks, developers must incorporate IHL principles, such as proportionality, into AI systems, minimizing harm to civilians and ensuring adherence to legal standards (Woodcock, 2024). Because AI systems can exhibit hidden biases, training data must be reviewed carefully to eliminate errors that could lead to inaccurate targeting or misclassification and contribute to an “accountability gap” (Crootof, 2022).
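
As a loose illustration of such a review, the sketch below flags severe class imbalance in training labels, one simple proxy for the hidden biases discussed above. The labels and threshold are hypothetical assumptions for illustration only, not part of this dataset.

```python
from collections import Counter

def audit_label_balance(labels, min_share=0.05):
    """Flag any class that falls below min_share of the training set.

    A crude proxy for one kind of hidden bias: an underrepresented
    class is more likely to be misclassified at inference time.
    The 0.05 threshold is an illustrative assumption.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        label: count / total
        for label, count in counts.items()
        if count / total < min_share
    }

# Made-up annotations for demonstration purposes only.
labels = ["vehicle"] * 950 + ["civilian_structure"] * 30 + ["unknown"] * 20
for label, share in audit_label_balance(labels).items():
    print(f"WARNING: class '{label}' is only {share:.1%} of training data")
```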

Military AI technologies are frequently dual-use, which calls for robust access controls and governance frameworks (Paoli & Afina, 2025). The “mosaic effect” is one example: open-source intelligence that is innocuous in isolation can, when combined, reveal sensitive information and lead to unintended consequences.
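
The sketch below illustrates one small piece of such a framework, a role-based gate with an audit trail. The roles, policy table, and log format are hypothetical assumptions, not a prescribed standard.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("access")

# Assumed policy: only these roles may access dual-use material.
ALLOWED_ROLES = {"vetted_researcher", "policy_analyst"}

def request_access(user_id: str, role: str, purpose: str) -> bool:
    """Grant access only to vetted roles, and record every decision."""
    granted = role in ALLOWED_ROLES
    log.info("access %s for %s (role=%s, purpose=%s)",
             "GRANTED" if granted else "DENIED", user_id, role, purpose)
    return granted

request_access("u42", "vetted_researcher", "IHL compliance study")
request_access("u99", "anonymous", "unspecified")
```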

The United Nations Secretary-General has emphasized that decisions involving human life should never be solely automated or driven by commercial interests (United Nations, 2024). Human oversight is therefore essential in every critical AI application that uses military data.
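
One common way to operationalize that oversight is a human-in-the-loop gate that refuses to act autonomously on high-stakes decisions. The sketch below shows the pattern; the risk scores, threshold, and fields are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    risk_score: float  # 0.0 (benign) to 1.0 (potential harm to life)

# Assumed threshold, deliberately conservative: anything at or above
# it is deferred to a human reviewer rather than executed automatically.
RISK_THRESHOLD = 0.2

def route(decision: Decision) -> str:
    if decision.risk_score >= RISK_THRESHOLD:
        return f"DEFER to human review: {decision.action}"
    return f"auto-approved (low risk): {decision.action}"

print(route(Decision("archive log file", 0.01)))
print(route(Decision("flag location as target", 0.97)))
```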

Ultimately, civilian AI developers should prioritize transparency, fairness, and strong ethical oversight to ensure responsible development and deployment of AI, adhering to international law and ethical standards (Roumate, 2020; Khan, 2023).

References

  • Crootof, R. (2022). AI and the Actual IHL Accountability Gap. SSRN.
  • Khan, S. Y. (2023). Autonomous Weapon Systems and the Changing Face of International Humanitarian Law. International Law Blog.
  • National Security Commission on Artificial Intelligence. (n.d.). Chapter 4 – NSCAI Final Report.
  • Paoli, G. P., & Afina, Y. (2025). AI in the Military Domain: A Briefing Note for States. UNIDIR.
  • Roumate, F. (2020). Artificial Intelligence, Ethics and International Human Rights Law. The International Review of Information Ethics, 29.
  • United Nations. (2024). Secretary-General’s Remarks to the Security Council on Artificial Intelligence.
  • Woodcock, T. K. (2024). Human/Machine(-Learning) Interactions, Human Agency and the International Humanitarian Law Proportionality Standard. Global Society, 38(1).

Dataset Structure

A single BibTeX file, citations.bib, containing the references listed above.
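
A minimal sketch of reading the file, assuming the third-party bibtexparser package (v1 API) is installed; any BibTeX-aware tool would work equally well.

```python
# pip install bibtexparser
import bibtexparser

with open("citations.bib", encoding="utf-8") as f:
    database = bibtexparser.load(f)

for entry in database.entries:
    # bibtexparser exposes each entry as a dict of its fields.
    print(entry.get("year", "n.d."), "-", entry.get("title", "untitled"))
```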

Usage

This dataset is intended for:

  • Researchers studying military AI ethics
  • Policy analysts examining IHL compliance
  • Developers working on defence-related AI systems
  • International relations scholars

Limitations

  • This is only a small sample of the publicly available literature; many more reputable, authoritative, and comprehensive sources exist
  • See also International Committee of the Red Cross and United Nations documents for more information on AI and IHL
  • This is an emerging research area in international law and AI ethics

Licence

CC-BY-4.0 (assumed; verify original source licences for specific entries)
