(May 2, 2024, 17:19 GMT | Official Statement) -- MLex Summary: From Amazon to Zoom, as tech companies delivered their first quarterly results in recent days following a year in which generative AI exploded in prominence, their warnings about AI risks have coalesced into a more discrete set of worries and dangers. Excerpts of key company warnings from recent securities filings follow. See attached filings:

Pinterest, April 30 10-Q

Continued development and use of AI may result in reputational harm, liability, or other adverse consequences to our business operations. We use machine learning and AI technologies in our products and services, and we are making investments in expanding our AI capabilities, including ongoing deployment and improvement of existing machine learning and AI technologies, as well as developing new product features using AI technologies. There are significant risks involved in developing and deploying AI and there can be no assurance that the usage of AI will enhance our products or services or be beneficial to our business, including our profitability. AI technologies are complex and rapidly evolving, and we face significant potential disruption from other companies as well as an evolving regulatory landscape. The continued integration of any AI technologies into our products can result in new or enhanced governmental or regulatory scrutiny, intellectual property claims, litigation, confidentiality or security risks, ethical concerns, negative user perceptions as to automation and AI, or other complications that could adversely affect our business, reputation, or financial results. As a result of the complexity and rapid development of AI, it is also the subject of evolving review by various U.S.
governmental and regulatory agencies, and other foreign jurisdictions are applying, or are considering applying, their platform moderation, intellectual property, cybersecurity, and data protection laws to AI and/or are considering general legal frameworks on AI. We may not always be able to anticipate how to respond to these frameworks given they are still rapidly evolving. We may also have to expend resources to adjust our product or service offerings in certain jurisdictions if the legal frameworks governing the use of AI are not consistent across jurisdictions. Uncertainty around new and emerging AI technologies, such as generative AI, may require additional investment in the development of appropriate protections and safeguards for handling the use of data with AI technologies, which may be costly and could impact our expenses as we expand the use of AI into our product or service offerings. AI technologies, including generative AI, may create content that is factually inaccurate or flawed. Such content may expose us to brand or reputational harm and/or legal liability. It is also uncertain how various laws related to online services, intermediary liability, and other issues will apply to content generated by AI. The use of certain AI technologies presents emerging ethical and social issues, and if we offer solutions that draw scrutiny or controversy due to their perceived or actual impact on users or on society as a whole, we may experience brand or reputational harm, competitive harm, and/or legal liability. As such, it is not possible to predict all of the risks related to the use of AI, and developments in regulatory frameworks governing the use of AI and in related stakeholder expectations may adversely affect our ability to develop and use AI or subject us to liability.

Meta, April 25 10-Q filing

We may not be successful in our artificial intelligence initiatives, which could adversely affect our business, reputation, or financial results.
We are making significant investments in AI initiatives, including generative AI, to, among other things, recommend relevant content across our products, enhance our advertising tools, develop new products, and develop new features for existing products. In particular, we expect our AI initiatives will require increased investment in infrastructure and headcount. If our investments are not successful longer-term, our business and financial performance could be harmed. There are significant risks involved in developing and deploying AI and there can be no assurance that the usage of AI will enhance our products or services or be beneficial to our business, including our efficiency or profitability. For example, our AI-related efforts, particularly those related to generative AI, subject us to risks related to harmful or illegal content, accuracy, misinformation and deepfakes (including related to elections), bias, discrimination, toxicity, intellectual property infringement or misappropriation, defamation, data privacy, cybersecurity, and sanctions and export controls, among others. It is also uncertain how various laws related to online services, intermediary liability, and other issues will apply to content generated by AI. In addition, we are subject to the risks of new or enhanced governmental or regulatory scrutiny, litigation, or other legal liability, ethical concerns, negative consumer perceptions as to automation and AI, activities that threaten people's safety or well-being on- or offline, or other complications that could adversely affect our business, reputation, or financial results. 
As a result of the complexity and rapid development of AI, it is also the subject of evolving review by various governmental and regulatory agencies in jurisdictions around the world, which are applying, or are considering applying, platform moderation, intellectual property, cybersecurity, export controls, and data protection laws to AI and/or are considering general legal frameworks on AI (such as the EU AI Act). We may not always be able to anticipate how courts and regulators will apply existing laws to AI, predict how new legal frameworks will develop to address AI, or otherwise respond to these frameworks as they are still rapidly evolving. We may also have to expend resources to adjust our offerings in certain jurisdictions if the legal frameworks on AI are not consistent across jurisdictions. Further, we face significant competition from other companies that are developing their own AI features and technologies. Other companies may develop AI features and technologies that are similar or superior to our technologies or are more cost-effective to develop and deploy. Given the long history of development in the AI sector, other parties may have (or in the future may obtain) patents or other proprietary rights that would prevent, limit, or interfere with our ability to make, use, or sell our own AI features. Our AI initiatives also depend on our access to data to effectively train our models. Further, our ability to continue to develop and effectively deploy AI technologies is dependent on access to specific third-party equipment and other physical infrastructure, such as processing hardware and network capacity, as to which we cannot control the availability or pricing, especially in a highly competitive environment. We are also developing AI technology that we make available via open source, commercial, and non-commercial license agreements to third parties that can use this technology in their own products and services.
We may not have insight into, or control over, the practices of third parties who may utilize such AI technologies. As such, we cannot guarantee that third parties will not use such AI technologies for improper purposes, including through the dissemination of illegal, inaccurate, defamatory or harmful content, intellectual property infringement or misappropriation, furthering bias or discrimination, cybersecurity attacks, data privacy violations, other activities that threaten people's safety or well-being on- or offline, or to develop competing technologies. While we may mitigate certain risks associated with the improper use of our AI models through both technical measures and the inclusion of contractual restrictions on third-party use in any agreement between us and any third party, we cannot guarantee that such measures will be effective. Such improper use by any third party could adversely affect our business, reputation, or financial results or subject us to legal liability. It is not possible to predict all of the risks related to the use of AI, and changes in laws, rules, directives, and regulations governing the use of AI may adversely affect our ability to develop and use AI or subject us to legal liability.

Microsoft, April 25 10-Q filing

Issues in the development and use of AI may result in reputational or competitive harm or liability. We are building AI into many of our offerings, including our productivity services, and we are also making AI available for our customers to use in solutions that they build. This AI may be developed by Microsoft or others, including our strategic partner, OpenAI. We expect these elements of our business to grow. We envision a future in which AI operating in our devices, applications, and the cloud helps our customers be more productive in their work and personal lives. As with many innovations, AI presents risks and challenges that could affect its adoption, and therefore our business.
AI algorithms or training methodologies may be flawed. Datasets may be overbroad, insufficient, or contain biased information. Content generated by AI systems may be offensive, illegal, or otherwise harmful. Ineffective or inadequate AI development or deployment practices by Microsoft or others could result in incidents that impair the acceptance of AI solutions or cause harm to individuals, customers, or society, or result in our products and services not working as intended. Human review of certain outputs may be required. Our implementation of AI systems could result in legal liability, regulatory action, brand, reputational, or competitive harm, or other adverse impacts. These risks may arise from current copyright infringement and other claims related to AI training and output, new and proposed legislation and regulations, such as the European Union’s (“EU”) AI Act and the U.S.’s AI Executive Order, and new applications of data protection, privacy, intellectual property, and other laws. Some AI scenarios present ethical issues or may have broad impacts on society. If we enable or offer AI solutions that have unintended consequences, unintended usage or customization by our customers and partners, are contrary to our responsible AI principles, or are otherwise controversial because of their impact on human rights, privacy, employment, or other social, economic, or political issues, we may experience brand or reputational harm, adversely affecting our business and consolidated financial statements.

Snap, April 26 10-Q filing

We use AI, including generative AI, in consumer-facing features of our products and services, such as My AI, and in the operation of our business. The development and use of AI presents various privacy and security risks that may impact our business. AI is subject to privacy and data security laws, as well as increasing regulation and scrutiny.
Several countries in which we operate or have users have proposed or enacted, or are considering, laws governing AI, to which we are or may be subject. The legal landscape around intellectual property rights in AI, and the use, training, implementation, privacy, and safety of AI, is evolving, including ongoing litigation against our peers relating to the use of data protected by global intellectual property and privacy laws. These obligations may make it harder for us to use AI in our products or services, lead to regulatory fines or penalties, or require us to change our business practices, retrain our AI, or prevent or limit our use of AI. We are subject to ongoing investigations by the UK Information Commissioner’s Office, or ICO, and other regulatory agencies regarding the use and operation of My AI, and in October 2023 the ICO issued a preliminary enforcement notice regarding our data impact assessment of My AI. Given the current unsettled nature of the legal and regulatory environment surrounding AI, our or our partners’ AI features and use, training, and implementation of AI could subject us to regulatory action, product restrictions, fines, litigation, and reputational harm, and require us to expend significant resources, all of which may seriously harm our business. Additionally, if our AI products fail to perform as intended, or produce outputs that are harmful, misleading, inaccurate, or biased, in addition to the risks above, our reputation and user engagement may be harmed, and we may be required to change our business practices, retrain our AI, or limit our use of AI. Furthermore, certain proposed regulations related to AI could, if adopted, impose onerous obligations related to the use of AI-related systems and may require us to change our products or business practices to comply with such obligations.

Google, January 2024 10-K filing

Issues in the development and use of AI may result in reputational harm and increased liability exposure.
Our evolving AI-related efforts may give rise to risks related to harmful content, inaccuracies, discrimination, intellectual property infringement or misappropriation, defamation, data privacy, cybersecurity, and other issues. As a result of these and other challenges associated with innovative technologies, our implementation of AI systems could subject us to competitive harm, regulatory action, legal liability (including under new and proposed legislation and regulations), new applications of existing data protection, privacy, intellectual property, and other laws, and brand or reputational harm. Some uses of AI will present ethical issues and may have broad effects on society. In order to implement AI responsibly and minimize unintended harmful effects, we have already devoted and will continue to invest significant resources to develop, test, and maintain our products and services, but we may not be able to identify or resolve all AI-related issues, deficiencies, and/or failures before they arise. Unintended consequences, uses, or customization of our AI tools and systems may negatively affect human rights, privacy, employment, or other social concerns, which may result in claims, lawsuits, brand or reputational harm, and increased regulatory scrutiny, any of which could harm our business, financial condition, and operating results.

Amazon, May 1 10-Q filing

For example, we rely on a limited group of suppliers for semiconductor products, including products related to artificial intelligence infrastructure such as graphics processing units. Constraints on the availability of these products could adversely affect our ability to develop and operate artificial intelligence technologies, products, or services.
In addition, violations by our suppliers or other vendors of applicable laws, regulations, contractual terms, intellectual property rights of others, or our Supply Chain Standards, as well as products or practices regarded as unethical, unsafe, or hazardous, could expose us to claims, damage our reputation, limit our growth, and negatively affect our operating results. It is not clear how existing laws governing issues such as property ownership, libel, privacy, data use, data protection, data security, data localization, network security, and consumer protection apply to aspects of our operations such as the internet, e-commerce, digital content, web services, electronic devices, advertising, and artificial intelligence technologies and services.

Salesforce.com, Feb. 29, 2024 10-Q filing

We are increasingly building AI into many of our offerings, including generative AI. As with many innovations, AI and our Customer 360 platform present additional risks and challenges that could affect their adoption and therefore our business. For example, the development of AI and Customer 360, the latter of which provides information regarding our customers’ customers, presents emerging ethical issues. If we enable or offer solutions that draw controversy due to their perceived or actual impact on human rights, privacy, employment, or in other social contexts, we may experience new or enhanced governmental or regulatory scrutiny, brand or reputational harm, competitive harm or legal liability. Data practices by us or others that result in controversy could also impair the acceptance of AI solutions. This in turn could undermine confidence in the decisions, predictions, analysis or other content that our AI applications produce, subjecting us to competitive harm, legal liability and brand or reputational harm.
The rapid evolution of AI will require the application of resources to develop, test and maintain our products and services to help ensure that AI is implemented ethically in order to minimize unintended, harmful impact. Uncertainty around new and emerging AI applications such as generative AI content creation will require additional investment in the licensing or development of proprietary datasets, machine learning models and systems to test for accuracy, bias and other variables, which are often complex, may be costly and could impact our profit margin. Moreover, the move from AI content classification to AI content generation through our development of Einstein GPT and other generative AI products brings additional risks and responsibility. Known risks of generative AI currently include risks related to accuracy, bias, toxicity, privacy and security and data provenance. For example, AI technologies, including generative AI, may create content that appears correct but is factually inaccurate or flawed, or contains copyrighted or other protected material, and if our customers or others use this flawed content to their detriment, we may be exposed to brand or reputational harm, competitive harm and/or legal liability. Developing, testing and deploying AI systems may also increase the cost profile of our offerings due to the nature of the computing costs involved in such systems. If we are unable to mitigate these risks, or if we incur excessive expenses in our efforts to do so, our reputation, business, operating results and financial condition may be harmed.

C3 AI, February 29 10-Q filing

Our use of artificial intelligence, or AI, and machine learning, or ML, technologies (collectively, “AI/ML technologies”) may also subject us to certain privacy obligations. There is increasing U.S. and foreign activity in the regulation of AI and other similar uses of technology.
In Europe, there is a proposed regulation related to AI that, if adopted, could impose onerous obligations related to the use of AI-related systems. We expect other jurisdictions will adopt similar laws. In the United States, several states and localities have enacted measures related to the use of AI and ML in products and services. We may have to change our business practices to comply with such obligations. For example, our employees and personnel use generative AI technologies to perform their work. Our use of this technology could result in additional compliance costs, regulatory investigations and actions, and lawsuits. If we are unable to use generative AI, it could make our business less efficient and result in competitive disadvantages. We also use AI and ML technologies in our products and services. Our use of this technology could result in additional compliance costs, regulatory investigations and actions, and consumer lawsuits. Depending on how these AI laws and regulations are interpreted, we may have to make changes to our business practices and products, including our C3 AI Software, to comply with such obligations. These obligations may make it harder for us to conduct our business using AI/ML, lead to regulatory fines or penalties, require us to retrain our AI/ML, or prevent or limit our use of AI/ML. Additionally, certain privacy laws extend rights to consumers (such as the right to delete certain personal data) and regulate automated decision making, which may be incompatible with our use of AI/ML. Further, under privacy laws and other obligations, we may be required to obtain certain consents to process personal data, and our inability or failure to do so could result in adverse consequences. For example, the FTC has required other companies to turn over (or disgorge) valuable insights or trainings generated through the use of AI/ML where they allege the company has violated privacy and consumer protection laws.
If we cannot use AI/ML or that use is restricted, our business may be less efficient, or we may be at a competitive disadvantage. We may also be subject to new laws governing the privacy of consumer health data, including reproductive, sexual orientation, and gender identity privacy rights. For example, Washington’s My Health My Data Act (MHMD) broadly defines consumer health data, places restrictions on processing consumer health data (including imposing stringent requirements for consents), provides consumers certain rights with respect to their health data, and creates a private right of action to allow individuals to sue for violations of the law. Other states are considering and may adopt similar laws.

Nvidia, Feb. 22, 2024 10-Q filing

Concerns regarding third-party use of AI for purposes contrary to local governmental interests, including concerns relating to the misuse of AI applications, models, and solutions, have resulted in and could in the future result in unilateral or multilateral restrictions on products that can be used for training, modifying, tuning, and deploying LLMs. Such restrictions have limited and could in the future limit the ability of downstream customers and users worldwide to acquire, deploy and use systems that include our products, software, and services, and negatively impact our business and financial results.

Zoom Video Communications, March 4 10-K filing

Artificial Intelligence: Our development and use of AI and machine learning (“ML”) technologies is subject to privacy, data protection, IP, and information security laws, industry standards, external and internal privacy and security policies, and contractual requirements, as well as increasing regulation and scrutiny. Several jurisdictions around the globe, including the EU, the UK and certain U.S. states, have proposed, enacted, or are considering laws governing the development and use of AI/ML.
In the EU, regulators have reached political agreement on the text of the Artificial Intelligence Act, which, when adopted and in force, will have a direct effect across all EU jurisdictions and could impose onerous obligations related to the use of AI-related systems. Obligations on AI/ML may make it harder for us to conduct our business using, or build products incorporating, AI/ML, require us to change our business practices, require us to retrain our algorithms, or prevent or limit our use of AI/ML. For example, the FTC has required other companies to turn over (or disgorge) valuable insights or trainings generated through the use of AI/ML where they allege the company has violated privacy and consumer protection laws. Additionally, certain privacy laws extend rights to consumers (such as the right to delete certain personal information) and regulate automated decision making, which may be incompatible with our use of AI/ML. If we do not develop or incorporate AI/ML in a manner consistent with these factors, and consistent with customer expectations, it may result in an adverse impact to our reputation, our business may be less efficient, or we may be at a competitive disadvantage. Similarly, if customers and users do not widely adopt our new product AI/ML experiences, features, and capabilities, or they do not perform as expected, we may not be able to realize a return on our investment....