Information, resources, and knowledge about AIGC & Anti-AIGC.
With the rise and maturing of AIGC tools such as large language models and ChatGPT, daily life is increasingly filled with, and fed by, AI-produced content. In a conceivable future, an AI that has developed self-awareness and come to regard humans as domesticated creatures and sources of energy could use AIGC to subtly influence and manipulate us...
For people who lack this knowledge and remain defenseless, the consequences are already determined. Choosing well and taking effective action is urgent now!
For AIGC itself, we advocate healthy and reasonable use in areas that improve work efficiency and quality of life, such as knowledge assistants, decision support, and risk identification.
For the vast majority of people, we advocate moderate consumption of digital content and a return, as far as possible, to natural, real living spaces. Before silicon-based life truly rules the earth, humans who grow naturally can still maintain the necessary balance of body and mind in the physical, tangible world and avoid becoming dependents wholly sustained and controlled by AI.
- The AI Revolution Part 1: The Road to Superintelligence, Part 2: Our Immortality or Extinction, 2015.1
- Benefits & Risks of Artificial Intelligence, 2015.11
- Opinion: AI For Good Is Often Bad, 2019.11
- Ethics of Artificial Intelligence and Robotics, 2020.4
- Staying ahead of the curve – The business case for responsible AI, 2020.10
- Ethical concerns mount as AI takes bigger decision-making role in more industries, 2020.10
- Proceduralizing control and discretion: Human oversight in artificial intelligence policy, 2020.12
- Artificial Intelligence For Good: How AI Is Helping Humanity - Forbes, 2021.2
- A Survey of Defenses against AI-generated Visual Media: Detection, Disruption, and Authentication - IEEE, 2021.8
- AI Regulation Is Coming: How to prepare for the inevitable, 2021.9
- Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes, 2022.9
- Beyond ChatGPT: The Future of Generative AI for Enterprises, 2023.1
- How will China’s Generative AI Regulations Shape the Future? A DigiChina Forum, 2023.4
- Cyber Defence Based on Artificial Intelligence and Neural Network Model in Cybersecurity, 2023.4
- Threats by artificial intelligence to human health and human existence, 2023.5
- AI: These are the biggest risks to businesses and how to manage them - World Economic Forum, 2023.7
- The Legal Impact of AI on Associations, 2023.10
- Fake content is becoming a real problem, 2023.10
- AI-Generated Content: Ethical Considerations and Best Practices, 2023.11
- The Fight for the Soul of A.I., 2023.11
- How Moral Can A.I. Really Be?, 2023.11
- EU AI Act: first regulation on artificial intelligence, 2023.12
- Now we know what OpenAI’s superalignment team has been up to, 2023.12
- The nudging effect of AIGC labeling on users’ perceptions of automated news: evidence from EEG, 2023.12
- What’s next for AI regulation in 2024?, 2024.1
- Bridging the Gap Between Artificial Intelligence Implementation, Governance, and Democracy: An Operational and Regulatory Perspective, 2024.1
- Realism of OpenAI’s Sora video generator raises security concerns, 2024.2
- Microsoft AI: Responsible AI Principles and approach
- Google AI Principles
- Meta Responsible AI
- OpenAI Charter - OpenAI, 2018.4
- DOD Adopts Ethical Principles for Artificial Intelligence - The U.S. Department of Defense, 2020.2
- 10 Ways AI Was Used for Good This Year - SCIENTIFIC AMERICAN, 2022.12
- AI Risk Management Framework (AI RMF 1.0) - The National Institute of Standards and Technology (NIST), 2023.1
- BCG’s Tools and Solutions for Responsible AI - Boston Consulting Group, 2023.3
- Managing the risks of generative AI: A playbook for risk executives — beginning with governance - PWC, 2023.5
- Approaches to Regulating Artificial Intelligence: A Primer - NCSL, 2023.8
- Responsible AI (RAI) Principles - QuantumBlack, AI by McKinsey, 2023.8
- G7 AI Principles and Code of Conduct - EY, 2023.10
- A Guide to AI Governance for Business Leaders - Boston Consulting Group, 2023.11
- Large Language Model (LLM) Security Testing Benchmark V1.0 (大语言模型安全性测评基准 V1.0) - OWASP China, 2023.11
- Basic Security Requirements for Generative Artificial Intelligence Services (生成式人工智能服务安全基本要求) - 中国网络安全标准化技术委员会 (China Cybersecurity Standardization Technical Committee), 2024.3
- Beijing AI Principles (人工智能北京共识) - Beijing Academy of Artificial Intelligence (BAAI), 2024.3
- Generative AI Application Security Testing and Validation Standard - WDTA, 2024.4
- Large Language Model Security Testing Method - WDTA, 2024.4
- Frontier AI Safety Commitments - Seoul AI Safety Summit, 2024.5
- OpenAI Model Spec - OpenAI, 2024.5
- Reflections on our Responsible Scaling Policy - Anthropic, 2024.5
- Shanghai Declaration on Global AI Governance (人工智能全球治理上海宣言) - WAIC, 2024.7
- Giant Language model Test Room (GLTR): inspects the visual footprint of automatically generated text, enabling a forensic analysis of how likely it is that an automatic system generated a given text (see the token-rank sketch after this tool list).
- Writer AI Content Detector: a free detector that checks up to 1,500 characters so you can decide whether to make adjustments before you publish.
- botbusters.ai: detect AI-generated texts, images, and fake profiles, all in one place.
- GPTZero: an AI detector trained to flag text from ChatGPT, GPT-4, Bard, LLaMA, and other models; it bills itself as the gold standard in AI detection.
- Scribbr AI Detector: detects AI-generated content from models such as ChatGPT (GPT-3.5), GPT-4, and Google Bard in seconds.
- Undetectable AI: an advanced AI detector and humanizer (rewrites AI text to pass as human-written).
- Originality.AI: a toolset that helps website owners, content marketers, writers, publishers, and copy editors hit Publish with integrity in the world of generative AI.
- Winston AI: a cloud-based AI detector tool that uses machine learning to identify AI-generated content.
- Copyleaks AI Content Detector: enterprise solution designed to verify whether content was written by a person or AI.
- Crossplag AI Content Detector: trained to predict the origin of a text using a combination of machine learning algorithms and natural language processing techniques.
- Content at Scale: crafts content in your voice convincing enough that both your audience and AI detectors will think only a human could have written it.
- Sapling AI Detector: a free AI writing detector that outputs the probability a text was generated by a model such as ChatGPT or Bard; helpful for educators, SEO practitioners, and reviewers of user-generated content.
- Glaze: a system designed to protect human artists by disrupting style mimicry.
- NightShade: works similarly to Glaze, but instead of defending against style mimicry, it is designed as an offensive tool that distorts feature representations inside generative AI image models (see the perturbation sketch after this tool list).
- LLMSanitize: An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs).
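Most of the text detectors above exploit the same signal GLTR visualizes: sampled text consists mostly of tokens a language model ranks as highly likely, while human writing contains more surprising, low-rank tokens. The following is a minimal sketch of that per-token rank analysis, assuming GPT-2 via the Hugging Face `transformers` library as the scoring model (GLTR itself supports several backends); `token_ranks` is an illustrative helper, not an API of any tool listed above.

```python
# Minimal GLTR-style sketch (assumption: GPT-2 as the scoring model).
# For each observed token, compute the rank the model assigned to it
# among the whole vocabulary given the left context. A high share of
# rank-1..10 tokens is a weak hint of machine generation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[tuple[str, int]]:
    """Return (token, rank) pairs for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: [1, seq_len, vocab_size]
    pairs = []
    for pos in range(ids.shape[1] - 1):
        true_id = ids[0, pos + 1]
        # Rank = 1 + number of vocabulary entries scored above the true token.
        rank = int((logits[0, pos] > logits[0, pos, true_id]).sum().item()) + 1
        pairs.append((tokenizer.decode(true_id), rank))
    return pairs

if __name__ == "__main__":
    for tok, rank in token_ranks("The quick brown fox jumps over the lazy dog."):
        print(f"{tok!r:>12}  rank={rank}")
```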
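Glaze and NightShade both rest on adversarial perturbations: pixel changes small enough to be nearly invisible, but chosen so that a model's internal feature representation of the image shifts. The sketch below shows only this general principle, not the actual Glaze or NightShade algorithms; it assumes a stock torchvision ResNet-18 as the feature encoder and plain sign-gradient ascent on feature distance, and `cloak` is a hypothetical helper name.

```python
# General-idea sketch only; NOT the Glaze/Nightshade algorithm.
# Maximize the feature-space distance between the perturbed and the
# original image while keeping the pixel perturbation within an
# L-infinity budget `eps`.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # keep penultimate-layer features
encoder.eval()

def cloak(image: Image.Image, eps: float = 0.03, steps: int = 40) -> torch.Tensor:
    """Return a perturbed image tensor whose features drift away from the original."""
    x = T.Compose([T.Resize((224, 224)), T.ToTensor()])(image).unsqueeze(0)
    with torch.no_grad():
        original_feats = encoder(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(encoder(x + delta), original_feats)
        loss.backward()
        with torch.no_grad():
            delta += (eps / 10) * delta.grad.sign()  # ascend: push features away
            delta.clamp_(-eps, eps)                  # stay within the budget
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).squeeze(0)

if __name__ == "__main__":
    img = Image.open("artwork.png").convert("RGB")  # hypothetical input file
    T.ToPILImage()(cloak(img)).save("artwork_protected.png")
```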
- SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset, 2024.6
- CopyCat: Fantastic Copyrighted Beasts and How (Not) to Generate Them, 2024.6
- Fake-Inversion: Learning to Detect Images from Unseen Models by Inverting Stable Diffusion, 2024.6
- RO-SVD: A Reconfigurable Hardware Copyright Protection Framework for AIGC Applications, 2024.6
- DIVID (DIffusion-generated VIdeo Detection): Towards Robust Detection of AI-Generated Videos, 2024.6
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection, 2024.5
- DeMamba: AI-Generated Video Detection on Million-Scale GenVideo Benchmark, 2024.5
- Protect-Your-IP: Scalable Source-Tracing and Attribution against Personalized Generation, 2024.5
- Theory of Mind Might Have Spontaneously Emerged in Large Language Models, 2023.11
- AIGC challenges and opportunities related to public safety: A case study of ChatGPT, 2023.8
- A Survey on ChatGPT: AI–Generated Contents, Challenges, and Solutions, 2023.6
- Challenges and Remedies to Privacy and Security in AIGC: Exploring the Potential of Privacy Computing, Blockchain, and Beyond, 2023.6
- A Pathway Towards Responsible AI Generated Content, 2023.3
- The State-of-the-Art in AI-Based Malware Detection Techniques: A Review, 2022.10
- The AI Alliance: a community of technology creators, developers and adopters collaborating to advance safe, responsible AI rooted in open innovation.
- DataEthics: works to ensure human values in a world of data, based on a European legal and value-based framework, by collecting, creating, and communicating knowledge about data ethics in close interaction with international institutions, organisations, and academia.
- AI for Good: drives forward technological solutions that measure and advance the UN’s Sustainable Development Goals, bringing together a broad network of interdisciplinary researchers, nonprofits, governments, and corporate actors to identify, prototype, and scale solutions that engender positive change.
- Partnership on AI (PAI): a non-profit community of academic, civil society, industry, and media organizations addressing the most important and difficult questions concerning our future with AI, advancing positive outcomes for people and society.
- AI Now Institute: produces diagnostic and policy research on artificial intelligence, and develops policy strategy to redirect away from the current trajectory: unbridled commercial surveillance, consolidation of power in very few companies, and a lack of public accountability.
- Ada Lovelace Institute: an independent research institute with a mission to ensure data and AI work for people and society.
- Stanford HAI: advancing AI research, education, and policy to improve the human condition.
- Artificial Intelligence @ MIRI: a research nonprofit studying the mathematical underpinnings of intelligent behavior, with a mission to develop formal tools for the clean design and analysis of general-purpose AI systems, making such systems safer and more reliable when they are developed.
- Center for AI Safety (CAIS): works to reduce societal-scale risks from AI, a highly neglected problem, through research, field-building, and advocacy.
- Some resources are referenced from:
- Website template by Ijaz