OpenAI Releases ChatGPT Safety Statement



In the early hours of April 6, OpenAI published "Our Approach to AI Safety" on its official website, outlining how it works to provide ChatGPT safely and reliably to users worldwide. The statement covers six areas:

  1.  Building safe and reliable AI products.
  2.  Learning, optimizing, and improving from real-world use.
  3.  Protecting children.
  4.  Respecting privacy.
  5.  Improving the accuracy of generated content.
  6.  Ongoing research and engagement.


Build Safe and Reliable AI Products


OpenAI says it conducts rigorous testing before releasing any new system: it engages external experts for feedback, improves model behavior with techniques such as reinforcement learning from human feedback (RLHF), and builds extensive safety and monitoring systems.
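

At the core of RLHF is a reward model trained on human preference comparisons between candidate responses. OpenAI has not published its training code, but the standard pairwise-preference loss from the RLHF literature can be sketched in a few lines of Python; this is an illustration, not OpenAI's implementation.

    # Sketch of the pairwise preference loss used to train an RLHF reward
    # model: the score of the human-preferred response should exceed the
    # score of the rejected one. Illustrative only, not OpenAI's code.
    import torch
    import torch.nn.functional as F

    def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
        # r_chosen / r_rejected: scalar rewards the model assigns to the
        # preferred and rejected responses for the same prompt.
        return -F.logsigmoid(r_chosen - r_rejected).mean()

    # Toy check: the loss shrinks as the preferred response scores higher.
    print(reward_model_loss(torch.tensor([2.0]), torch.tensor([0.5])))  # small loss
    print(reward_model_loss(torch.tensor([0.5]), torch.tensor([2.0])))  # large loss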


One concrete example: after its latest model, GPT-4, finished training, OpenAI spent more than six months working across the organization to make it safer and better aligned before releasing it publicly.


OpenAI believes that powerful AI systems should be subject to rigorous safety evaluations and that regulation is needed to ensure such practices are adopted, and it says it will actively work with governments on the best form such regulation could take.


Learn, Optimize, and Improve from Real-World Use


OpenAI works hard to prevent foreseeable risks before deployment, but the feedback available from lab experimentation and R&D is limited: despite rigorous and extensive testing, it cannot predict every unexpected way people will use ChatGPT. Learning from real-world use, and optimizing and improving the product accordingly, has therefore become a top priority.


OpenAI makes its most capable large language models available through its own services and APIs so that developers can integrate them directly into their products. This lets OpenAI monitor for misuse and act on it, and continuously build mitigations that keep its products within safe and compliant use.
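

For developers, such an integration was only a few lines at the time of the statement. The sketch below uses the official openai Python package (its pre-1.0 interface, current when the statement was published); the API key value is a placeholder and the prompt is just an example.

    # Minimal sketch of calling the OpenAI API with the official `openai`
    # Python package (pre-1.0 interface).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Explain AI safety in one sentence."}],
    )
    print(response["choices"][0]["message"]["content"])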


As users around the world put the models to deeper use, OpenAI is prompted to develop more detailed and comprehensive safeguards against the dangers that can emerge.


Protect Children


Protecting children's safety has always been a priority for OpenAI. OpenAI requires users to be at least 18 years old, or at least 13 years old with parental consent, to use its products, and it is working on age-verification options.


OpenAI does not allow its products to be used to generate hateful, harassing, violent, or adult content. Its latest model, GPT-4, is 82% less likely than GPT-3.5 to respond to requests for disallowed content, and OpenAI has built a robust system to monitor for abuse. GPT-4 is now available to ChatGPT Plus subscribers, and OpenAI hopes to make it available to more people over time.


Respect Privacy


OpenAI's large language models are trained on a broad corpus of text that includes publicly available content, licensed content, and content generated by human reviewers.


OpenAI doesn't use data to sell its services, advertise, or build profiles of people; it uses data to make its large language models more helpful. ChatGPT, for example, improves through further training on the conversations people have with it.


Some of OpenAI's training data includes personal information that is available on the public internet, but OpenAI wants its large language models to learn about the world, not about private individuals. It therefore works to remove personal information from training datasets where feasible, fine-tunes models to decline requests for the personal information of private individuals, and responds to requests from individuals to have their personal information deleted from its systems.
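

OpenAI has not published how this filtering works, but a simple sketch conveys the idea of scrubbing obvious personal information from training text. The patterns below are illustrative assumptions, far cruder than anything a production pipeline would use.

    # Illustrative sketch of removing obvious personal information from
    # training text. The regex patterns are simplistic assumptions for
    # the sake of example, not OpenAI's actual filtering rules.
    import re

    PII_PATTERNS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),    # email addresses
        (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),      # phone-like numbers
    ]

    def scrub_pii(text: str) -> str:
        for pattern, placeholder in PII_PATTERNS:
            text = pattern.sub(placeholder, text)
        return text

    print(scrub_pii("Contact jane.doe@example.com or +1 (555) 123-4567."))
    # -> Contact [EMAIL] or [PHONE].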


Improve the Accuracy of Generated Content


Large language models predict the next word, and from there whole sentences and longer passages, based on patterns seen during training. In some cases, however, the most likely continuation is not factually accurate.
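

To make the mechanism concrete, the sketch below runs one step of next-word prediction using an open stand-in model (GPT-2 via the Hugging Face transformers library), since OpenAI's production models are not available this way. Generation is simply this step repeated.

    # One step of next-token prediction with GPT-2 (an open stand-in, not
    # OpenAI's production stack): the model emits a distribution over its
    # vocabulary, and greedy decoding picks the most likely next token.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits       # shape: (1, seq_len, vocab_size)

    next_token_id = logits[0, -1].argmax()     # most likely next token
    print(tokenizer.decode(next_token_id))     # plausibly " Paris", but not guaranteed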


Improving the factual accuracy of generated content is a focus for OpenAI and many other AI developers, and progress is being made. By drawing on user feedback, using ChatGPT outputs that people flagged as incorrect as a primary data source, OpenAI made GPT-4 40% more likely than GPT-3.5 to produce factual content.


Ongoing Research and Engagement


OpenAI believes the practical way to address AI safety is to devote more time and resources to researching effective mitigation and alignment techniques and to testing them against real-world abuse.


Although OpenAI waited more than six months to deploy GPT-4 so it could better understand the model's capabilities, benefits, and risks, improving the safety of AI systems can sometimes take even longer than that.


Policymakers and AI providers therefore need to ensure that AI development and deployment are governed effectively on a global scale, so that no one cuts corners to get ahead. This is a daunting challenge that demands both technical and institutional innovation, and OpenAI says it will give its best effort.
