OpenAI API: More control and security for enterprise customers

Customers who use OpenAI's API are getting new features, above all for cost control.

A stylized head surrounded by lines, connections and glowing dots as a symbol for AI

New features for enterprise customers are intended to provide greater security, compliance and control at OpenAI.

(Image: metamorworks/Shutterstock.com)

This article was originally published in German and has been automatically translated.

OpenAI is introducing new features for enterprise customers, intended to improve security, compliance and control. The updates include a projects tool as well as finer control over how employees use the services. Not least, this is meant to keep costs in check, since heavy use of AI tools quickly gets expensive.

The new security features include native multi-factor authentication and Private Link, which routes traffic between Azure and OpenAI directly and is intended to "minimize exposure to the open Internet". Enterprise customers already have access to a wide range of security features for data encryption, access control and more.

Projects can now be created and managed individually. This includes assigning roles and API keys to specific projects and people, as well as setting usage- and rate-based limits. Individual users or a project can be given access to one or more selected models, along with a budget that the person or group is allowed to spend; once it is exhausted, access stops. This should prevent unwelcome surprises on the bill at the end of the month.
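OpenAI enforces these project budgets server-side through its dashboard; purely as an illustration of the idea, a hypothetical client-side guard that tracks spending against a cap could look like this (all names here are made up, not part of OpenAI's API):

```python
from dataclasses import dataclass


class BudgetExceeded(Exception):
    """Raised when a project's spending limit would be exceeded."""


@dataclass
class ProjectBudget:
    """Hypothetical sketch of a per-project spending cap."""
    limit_usd: float
    spent_usd: float = 0.0

    def charge(self, cost_usd: float) -> None:
        """Record a request's cost, refusing it if the cap would be exceeded."""
        if self.spent_usd + cost_usd > self.limit_usd:
            raise BudgetExceeded(f"budget of ${self.limit_usd:.2f} exhausted")
        self.spent_usd += cost_usd


# Example: a project capped at $10
budget = ProjectBudget(limit_usd=10.0)
budget.charge(7.50)       # fine, $2.50 remains
try:
    budget.charge(3.00)   # would exceed the cap
except BudgetExceeded as e:
    print(e)              # budget of $10.00 exhausted
```

In OpenAI's actual product, the cap is configured per project or per user in the dashboard rather than in code.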

There are also two new ways to reduce costs. For committed throughput, there are discounts of 10 to 50 percent, depending on the size of the commitment, i.e. the agreed purchase volume. And there is a new Batch API, through which non-urgent workloads can be executed asynchronously. These requests cost 50 percent less than regular synchronous requests, with results delivered within 24 hours. According to OpenAI's blog post, this is ideal for "use cases such as model evaluation, offline classification, summarization and synthetic data generation".
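The Batch API consumes a JSONL file in which each line is one self-contained request. A minimal sketch of preparing such a file, assuming the documented line format (`custom_id`, `method`, `url`, `body`) and using a model name purely as an example:

```python
import json


def make_batch_line(custom_id: str, model: str, prompt: str) -> str:
    """Build one JSONL line for a batch input file (Chat Completions endpoint)."""
    request = {
        "custom_id": custom_id,          # used to match results back to requests
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    return json.dumps(request)


# Write a small batch input file with two non-urgent requests
lines = [
    make_batch_line("req-1", "gpt-3.5-turbo", "Summarize this report: ..."),
    make_batch_line("req-2", "gpt-3.5-turbo", "Classify this ticket: ..."),
]
with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(lines))
```

The file is then uploaded via the Files API and referenced when creating the batch job with its 24-hour completion window.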

The Assistants API has also been enhanced. Developers use it to integrate OpenAI's services into their own applications. Here, too, there is improved cost control through token limits, along with a new tool_choice parameter for selecting tools such as file_search, code_interpreter or function, and support for fine-tuned GPT-3.5 Turbo models.

(emw)