Yesterday, Lodestone Co-Founder & CSO and Interparliamentary Forum on Emerging Technologies Co-Founder Fran O'Leary spoke alongside Emma Wright at Harbottle & Lewis' AI Breakfast. She covered the latest developments in international AI and broader tech regulation, including:

📑 Labour Party manifesto commitments

📊 10 key developments on AI and tech during Labour's first 100 days:
1. The political context;
2. the King's Speech;
3. the AI Opportunities Action Plan;
4. the latest steers on AI legislation and an AI strategy;
5. Private Members' Bills that may influence Government thinking;
6. Peter Kyle's vision;
7. the Regulatory Innovation Office;
8. the Digital Centre Panel;
9. plans to make the sharing of intimate images without consent a priority offence under the Online Safety Act; and
10. actions on cyber security.

→ A forward look, with 10 developments to look out for:
1. the autumn Budget;
2. the publication of the #AIOpportunitiesActionPlan;
3. the #AISafetyInstitute hosting a conference with developers ahead of the AI Action Summit;
4. the announcement of the Chair of the #RegulatoryInnovationOffice;
5. the introduction of the Cyber Security and Resilience Bill in early 2025;
6. the introduction of the Digital Information and Smart Data Bill;
7. the progress of the Employment Rights Bill;
8. the Planning and Infrastructure Bill and its implications for datacentres;
9. the Skills England Bill; and
10. the Industrial Strategy and the potential for other developments.

Well done to all involved in such an insightful event and in the debate and discussions afterwards, including Anna Thomas MBE, Suki Fuller, James Aylett, Victoria Collett, Richard Grove, Bryan Turner, Dr Sally Leivesley, Sophie de Schwarzburg-Gunther, Irene Manautou, Shraddha Kaul, Laura Wright, Chris Francis, Ella Dunham and many others. And thanks to Dea Dragashi and Jessica Young for supporting this event. 👏

If you'd like to learn more about the forces shaping the UK's AI agenda, and how your organisation can play its part, get in touch at info@lodestonecommunications.com.
-
There's a challenge for smaller organisations to maintain a minimum level of profitability when providing services. This is particularly true with artificial intelligence and machine learning, where we need enough data points for machine learning to understand the business and recognise normal versus abnormal behaviour.

This situation has led to more organisations setting minimum thresholds for the number of services, licences, or employee licences they offer, whatever their commercial models are. Some thresholds can be relatively low, at 100 or even 200 users within an organisation.

As a business leader, I fully respect that we need to ensure a service is commercially viable. But we're seeing more organisations implement these thresholds where they didn't have them before, which poses a real challenge. The addressable market below, say, 200 employees is absolutely huge globally. Are we disadvantaging these businesses by denying them access to industry-leading software and security solutions because of these thresholds?

By collaborating and considering the broader implications of our policies, we can ensure that all organisations, regardless of size, benefit from the latest advancements in AI and ML. This inclusive approach fosters a healthier, more competitive market where innovation thrives and businesses of all sizes can succeed.

If you're contemplating setting a minimum threshold, I urge you to engage with your partners and end-users to explore alternatives that support inclusivity. We can work together to create solutions that are commercially viable and accessible to all.
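For illustration only (this sketch is not from the original post): a minimal example of why anomaly-detection products tend to need a minimum volume of data before "normal versus abnormal" becomes meaningful. It assumes Python with scikit-learn installed, uses IsolationForest purely as a stand-in for whatever model a vendor actually ships, and generates synthetic "user activity" data.

```python
# Minimal, self-contained sketch (assumptions: scikit-learn is available;
# IsolationForest stands in for a vendor's behavioural model; the "user
# activity" data is synthetic, not from any real product).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def score_gap(n_users: int) -> tuple[float, float]:
    """Fit on n_users of 'normal' activity; score the normal points and one outlier."""
    normal = rng.normal(loc=0.0, scale=1.0, size=(n_users, 2))  # typical behaviour
    abnormal = np.array([[6.0, 6.0]])                           # clearly unusual behaviour
    model = IsolationForest(random_state=0).fit(normal)
    # decision_function: lower (more negative) means "looks more anomalous"
    return (float(model.decision_function(normal).mean()),
            float(model.decision_function(abnormal)[0]))

for n in (10, 50, 200, 1000):
    normal_avg, outlier = score_gap(n)
    print(f"{n:>5} users -> avg normal score {normal_avg:+.3f}, outlier score {outlier:+.3f}")
```

The only point of the sketch is that these products rely on the gap between "normal" and "abnormal" scores, and that gap tends to be noisier for very small populations. That is the technical half of the argument; the commercial half, as the post argues, is a separate choice.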
-
We are pleased that WOPLLI has joined the United Nations Global Digital Compact. Following is the charter of the Compact adopted by 193 countries, which closely matches WOPLLI's vision for safe, fair, trusted experiences.

---

Negotiated by 193 Member States and informed by global consultations, the Compact commits governments to upholding international law and human rights online and to taking concrete steps to make the digital space safe and secure. The Compact recognizes the critical contributions of the private sector, technical communities, researchers and civil society to digital cooperation. It calls on all stakeholders to engage in realizing an open, safe and secure digital future for all.

--> Close all digital divides and deliver an inclusive digital economy
- Connect all people, schools and hospitals to the Internet
- Make digital technologies more accessible and affordable to everyone, including in diverse languages and formats
- Increase investment in digital public goods and digital public infrastructure
- Support women and youth innovators and small and medium enterprises

--> Build an inclusive, open, safe, and secure digital space
- Strengthen legal and policy frameworks to protect children online
- Ensure that the Internet remains open, global, stable and secure
- Promote and facilitate access to independent, fact-based and timely information to counter mis- and disinformation

--> Strengthen international data governance and govern AI for humanity
- Support the development of interoperable national data governance frameworks
- Establish an international scientific panel on AI and a global AI policy dialogue
- Develop AI capacity-building partnerships and consider options for a Global Fund on AI
-
Happy Memorial Day! As we honor those who have served, let's discuss SB 1047, a bill with significant implications for AI and open-source software.

Key Points:
- Safety and Transparency: SB 1047 mandates stricter standards and risk assessments for AI development.
- Reporting Requirements: Developers must report AI usage and safety incidents extensively.

Bottlenecks:
- Broad Definitions: The bill's wide-reaching definitions could inadvertently affect benign open-source projects.
- Compliance Burden: Heavy reporting and compliance requirements strain small businesses and individual developers.
- Innovation Barriers: High costs and legal risks may stifle innovation and limit competition, favoring large corporations.

Impact on Open Source:
- Reduced Collaboration: Restrictions may hinder the transparency and collaborative spirit essential to open-source innovation.
- Innovation Stifled: Legal fears and bureaucratic hurdles could discourage contributions, reducing diversity in AI development.

Balancing safety with fostering innovation is crucial. Let's ensure policies support both goals.

#MemorialDay #AI #OpenSource #Innovation #TechRegulation #SB1047 #AIDevelopment #SmallBusiness #TechCommunity

https://lnkd.in/gjSRrE6E
-
Can Technology Be the Key to Fixing Broken Systems? ⚖️

Imagine a world where the gears of bureaucracy turn smoothly across the globe, outdated and unfair laws crumble before common-sense solutions, and decisions – from local councils to international summits – are based on unbiased data, not ingrained biases. This vision might not be a utopia, but a future fueled by technology, including powerful AI tools, that tackles the systemic imbalances plaguing our societies.

Technology can be a powerful tool for untangling the knots of bureaucracy in all corners of the world. AI, for instance, can analyze vast troves of data, identifying inefficiencies in government processes across nations and suggesting ways to streamline them ⏱️✅. This could translate to faster permit approvals in developing countries, quicker access to social services everywhere, and governments that are more responsive to the needs of their citizens, regardless of location (everyone wins!).

But technology's impact goes beyond mere efficiency. It can be used to identify and close the loopholes in existing laws that allow some to exploit the system, creating a more level playing field. By analyzing legal precedents across geographical boundaries, it can pinpoint areas where laws are unclear or easily manipulated. This information can then be used to draft clearer, more concise legislation on a global scale, making it harder for bad actors to game the system no matter where they operate.

Perhaps most importantly, technology can help remove bias from decision-making on a global level. By analyzing data objectively, AI can identify patterns of discrimination or unfairness embedded within existing policies across societies. This information can then be used to create more equitable systems that serve all citizens fairly, dismantling historical imbalances and ensuring a future based on true justice.

Technology is simply a tool, and its effectiveness relies on the collective will of the global community. However, the potential benefits are undeniable. By harnessing the power of technology, we can create a future with more efficient, transparent, and just societies around the world ✨.

Spread the word, like and share! Together we can use technology to build a better tomorrow for all! 🌐🗺🤝

#TechForGood #FixingTheSystem #EqualityForAll #DataDrivenChange #BreakthroughTech #FutureofGovernance #CitizenEmpowerment #UnbiasedDecisions #StreamlinedProcesses #ClosingLoopholes #BuildingABetterWorld #TechForSocialChange #LikeAndShare
-
🌍🤖 I recently had the opportunity to participate in a thought-provoking panel at #VivaTech titled "Sovereignty and the Role of Public Sector in AI", hosted by PwC. Our discussion centered on the critical intersection of digital sovereignty and AI, especially as AI continues to drive digital transformation globally amidst rising geopolitical tensions. Here are a few key takeaways:

1 - Regional Approaches to Digital Sovereignty: We explored how different jurisdictions are tackling the challenge of digital sovereignty. From Europe's stringent data protection laws and its newly enacted AI Act to the more open approaches in other regions, it's clear that there's no one-size-fits-all solution. Time will tell whether the EU's approach succeeds in protecting citizens without unnecessarily stifling innovation.

2 - Startups Innovating Government: Startups have a major role to play in innovating government, especially in AI. However, they need a dedicated B2G approach to engaging public authorities to drive change effectively and ensure their solutions align with public sector needs. At the same time, governments must rethink public procurement rules to acquire innovation faster while ensuring public money is spent wisely.

3 - Government's Role in AI: The panel underscored the significant role that governments and the public sector play in de-risking the AI agenda. By driving innovation policies and initiatives, including supporting ecosystems such as the #GovTechCampus, and ensuring robust regulatory frameworks, governments can foster a safe yet dynamic AI playing field.

Thank you Agnieszka Gajewska and Aga Sala for inviting me, and Laurent Daudet for the interesting exchange.

How do you see the role of the #publicsector evolving in the AI landscape? What steps should governments take to balance innovation with regulation? Comment below! 💬👇

#AI #DigitalSovereignty #PublicSector #AIRegulation #TechPolicy #GovTech

Nils Hoffmann Viktoria Grzymek Mathias Keller Dr.-Ing. Denis Krechting Jean-François Marti Jamal Basrire Laurence Baba Aissa Mélissa Valentin
-
Bredec Group | Bredec Ecosystem

CCIA Submits Comments on the United Nations Global Digital Compact Consultation: ... digital ecosystems and hinder cross-border commerce. This is particularly true for burgeoning AI technologies and systems, which are key to ...

inquiry@bredec.com
-
Europe's digital leader, Margrethe Vestager, executive vice president of the European Commission, has been at the forefront of shaping policies related to technology, AI, and democracy. Today, April 16, at Politico's Tech and AI Summit in Brussels, Vestager discussed the enormous potential of AI and the risks. Vestager believes in holding powerful companies accountable to ensure fair competition, and her actions demonstrate a commitment to maintaining a level playing field and protecting consumers' interests.

In a speech delivered on April 9, Vestager stressed that "Digital technologies change the world as we know it. And I see three ways this is happening, in particular:
- First – with the dominance of large digital platforms, technology is challenging democracy.
- Second – with the rise of General Purpose Artificial Intelligence, technology is challenging humanity.
- And third – with the global race for the technologies we need the most, technology is challenging our economic security. And shaping a new geopolitical world order.
Europe is on its way to answering all three of these challenges."

Danske Medier and DPCMO hope and trust that a new EU Commission will continue Vestager's ambitious and persistent efforts, which are ultimately about our core human values and democracy. These values extend beyond borders, aiming to promote peace, security, sustainability, solidarity, and mutual respect globally.

As Věra Jourová, vice president for values and transparency, mentioned, tech platforms are attacking young people's brains, and if we are defensive, we will not win. That is why enforcement is absolutely crucial. Jean-Noël Barrot, minister for EU affairs, France, emphasised the need for a democracy shield, as we must not underestimate the disinformation problem. MEPs Dita Charanzová, Miapetra Kumpula-Natri, and Kim van Sparrentak also discussed disinformation and social media's role and responsibility: how we must protect children online, keep our democracy safe and empower people to act.

Disinformation poses significant threats to democracy. Tech and AI companies share responsibility for addressing these challenges. It will require collective efforts to protect citizens and democracy. We look forward to seeing tech companies comply with European legislation and cooperate.

Dicle Duran Nielsen
-
Day 2 of the LGA Annual Conference is in full swing, and the discussions are more crucial than ever. This morning, Cllr Sullivan spoke at a session titled "Steering Clear of the Cliff Edge - Navigating Financial Uncertainty," highlighting the £2.3bn funding gap that English councils are set to face next year. She also referenced an LGA survey published yesterday, predicting that 1 in 4 councils will find themselves in financial difficulty within the next two years—stark statistics that underline the urgency for sustainable solutions. One of the most exciting innovations discussed today was North Yorkshire Council’s AI tool, designed to support social workers in delivering better care for children. It’s a brilliant example of how councils are leveraging technology to address complex challenges, and we can’t wait to see its continued impact. Stay tuned for more updates as we navigate these key conversations around policy, innovation, and the future of public services. 🚀 #LGA2024 #LocalGovernment #AI #PublicSectorInnovation #SocialCare #FinancialSustainability #SEND
-
I am often surprised - by the surprise - at the lack of governance of tech companies. Their power to influence regulation is immense - see the #EUAIAct, which requires assessment of a system's #risk based on what it does (not as effective as other approaches), or, more recently and as a great example, California's #AIsafetybill, explained clearly by Gary Marcus. https://lnkd.in/gkTerjHB

'SB 1047 seems heavily skewed toward addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement.' #existentialrisks #longtermism

Boards were established during the Hanseatic period to extend the reach of Government's ability to do business and control it (for a whole range of reasons, which is a whole other post). #Techcompanies are now powers and sources of wealth more powerful than many governments. They fund governments, universities and research, and have immense reach and power. I have worked at universities where publishing anything critical of tech was not permitted, or was silenced. One example of how this works is here: https://lnkd.in/gwteziGK

#Australia is not immune. Always follow the money.
Why California’s AI Safety Bill should (still) be signed into law - and why that won’t be nearly enough
garymarcus.substack.com
-
Amidst a turbulent election year in the US, it's crucial to remember that the federal government allocates approximately 80% of its IT budget to maintaining outdated systems. Citizens rightfully expect top-tier digital services that they can trust. Are we giving this topic sufficient attention? #government #innovation #AI #leadership #digitaltransformation
A simpler life for citizens
fastcompany.com