Earlier this year, the National Telecommunications and Information Administration (NTIA) released a request for comment on a specific class of AI models called “Dual Use Foundation AI Models with Widely Available Model Weights.” Unlike “closed” models, whose parameters are kept private, these models make their parameters (i.e., “weights”) openly available, allowing anyone to inspect how they work and to run them. Such models are called “open foundation models.”
At Unlearn, we develop generative AI models that forecast individual health outcomes. While our models don’t exactly fit the type NTIA described, our demonstrated leadership in carving a regulatory pathway for AI in drug development led us to share our thoughts. On March 25th, we responded to NTIA’s request, emphasizing the importance of sector-specific regulation of AI models based on the risk associated with their specific context of use. We also addressed the following key points:
- In the world of AI, there’s historical precedent for closed models gradually opening up over time. OpenAI’s GPT-2, for example, was initially withheld and then released in stages, while Google’s BERT weights were made openly available soon after publication. The shift from closed to open can take time, depending on factors like community engagement and data availability. Some AI models open up quickly, while others stay closed because of commercial interests, privacy concerns, or control over usage.
- It’s challenging to define what makes a model “widely available” because even a single public link can spread far and wide. This makes it difficult to control who can access these models and highlights the need for thoughtful, nuanced approaches to governance.
- People can access AI models in different ways, such as through hosted online services (APIs) or by downloading the weights and running them on their own hardware (local hosting). Each method has its own benefits and risks: APIs offer ease of use and let providers monitor and restrict usage, but they centralize control and raise privacy concerns, while local hosting gives users freedom but can make accountability harder (a minimal sketch of the two access modes follows this list).
- Because these AI models are used in so many different ways across industries, the approach to governing them needs to be both risk-based and sector-specific. Tailored regulatory frameworks can account for the diverse applications and potential risks of AI technologies in each industry.
- During model development, metrics tied to computational resources are complex but important for understanding and mitigating a model’s potential long-term risks. While such metrics aim to gauge AI capabilities and associated risks, their enforceability and accuracy pose challenges, requiring careful consideration in regulatory efforts (a back-of-the-envelope compute estimate is shown below).
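To make the two access modes concrete, here is a minimal Python sketch, not taken from our NTIA comment, contrasting a hosted API call with locally hosted open weights. The API endpoint and key are hypothetical placeholders; the local path uses the openly released GPT-2 weights via the Hugging Face transformers library.

```python
import os

import requests  # for the hosted-API access mode


# --- Access mode 1: hosted API ---
# The provider runs the model and can log, rate-limit, or revoke access.
# NOTE: the endpoint and API key below are hypothetical placeholders.
def generate_via_api(prompt: str) -> str:
    response = requests.post(
        "https://api.example-provider.com/v1/generate",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {os.environ['EXAMPLE_API_KEY']}"},
        json={"prompt": prompt, "max_tokens": 50},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]


# --- Access mode 2: local hosting of open weights ---
# Anyone who downloads the weights can run the model offline,
# with no provider in the loop to monitor or restrict usage.
def generate_locally(prompt: str) -> str:
    from transformers import pipeline  # pip install transformers torch

    generator = pipeline("text-generation", model="gpt2")  # openly released weights
    result = generator(prompt, max_new_tokens=50)
    return result[0]["generated_text"]
```

The structural difference drives the governance difference: the API path has a provider acting as a chokepoint that can enforce usage policies, while the local path has no such chokepoint once the weights are downloaded.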
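Similarly, to illustrate what a compute-based metric can look like, here is a back-of-the-envelope estimate using the common 6 × N × D approximation for transformer training compute (N parameters, D training tokens). The approximation, the model size, and the threshold are illustrative assumptions, not figures from our filing.

```python
# Rough training-compute estimate using the common 6 * N * D approximation
# (an illustrative rule of thumb, not a figure from the NTIA filing).
def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens


# Hypothetical example: a 7-billion-parameter model trained on 2 trillion tokens.
flops = training_flops(7e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~8.40e+22 FLOPs

# Compare against a compute-based reporting threshold, e.g. the 1e26-operation
# level used in the 2023 US Executive Order on AI (cited here as an assumption).
THRESHOLD_FLOPS = 1e26
print("Above reporting threshold?", flops > THRESHOLD_FLOPS)  # False
```

Even this simple estimate shows why enforceability is hard: parameter and token counts are typically self-reported, and the same compute budget can yield very different capabilities depending on data quality and architecture.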