Category: Artificial Intelligence

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. AI encompasses various techniques such as machine learning, natural language processing, and computer vision to analyze vast amounts of data, recognize patterns, and make decisions. In financial markets, AI is utilized for a range of applications including algorithmic trading, where AI models analyze market data and execute trades at high speed and accuracy. AI also aids in fraud detection by identifying unusual patterns in transactions, enhances customer service through chatbots that provide personalized responses, and improves risk management by predicting market trends and assessing portfolio risks in real-time. Moreover, AI-driven tools assist in credit scoring, loan approval processes, and optimizing investment strategies by leveraging predictive analytics and data-driven insights. The versatility of AI continues to expand its role in financial markets, transforming how decisions are made and operations are managed in the industry.
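The fraud-detection application mentioned above (flagging unusual patterns in transactions) reduces, at its simplest, to scoring how far a new transaction deviates from an account's history. A minimal sketch, assuming a hypothetical `flag_unusual` helper and made-up data rather than any regulator's or firm's actual method:

```python
from statistics import mean, stdev

def flag_unusual(amounts, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates more than z_threshold
    sample standard deviations from the account's historical mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        # No variation in history: anything different is unusual.
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > z_threshold

history = [42.0, 55.0, 48.0, 61.0, 50.0, 45.0]  # hypothetical card spend
print(flag_unusual(history, 52.0))    # in-pattern -> False
print(flag_unusual(history, 2500.0))  # clear outlier -> True
```

Production systems replace this z-score with learned models over many features, but the principle is the same: quantify deviation from an established pattern, then flag outliers for human review.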

CFTC-PR-8905-24

CFTC NEWS - CFTC-PR-8905-24 (MAY 2, 2024)

PRESS RELEASE | CFTC-PR-8905-24

CFTC Technology Advisory Committee Advances Report and Recommendations to the CFTC on Responsible Artificial Intelligence in Financial Markets. Commissioner Christy Goldsmith Romero Heralds the AI Expertise of the Committee and the Committee’s Foundational, Iterative Approach to Recommendations

Washington, D.C. — The Commodity Futures Trading Commission’s Technology Advisory Committee (TAC),[1] sponsored by Commissioner Christy Goldsmith Romero, released a Report on Responsible AI in Financial Markets (Report).  The TAC, which has as its members many well-respected experts in AI, issued a Report that facilitates an understanding of the impact and implications of the evolution of AI on financial markets.  The Committee made five recommendations to the Commission as to how the CFTC should approach this AI evolution in order to safeguard financial markets.  The Committee urges the CFTC to leverage its role as a market regulator to support the current efforts on AI coming from the White House and Congress.

Commissioner Goldsmith Romero said, “I herald the foundational, iterative approach of the Committee to recognize both that AI has been used in financial markets for decades, and that the evolution of generative AI introduces new issues and concerns, as well as opportunities.  Given the collective decades of AI experience of Committee members, their findings regarding the need for responsible AI practices, as well as the importance of the role of humans, governance, data quality, data privacy, and risk-management frameworks targeting AI-specific risks, should be taken seriously by the financial services industry and regulators.  These expert recommendations are geared towards more responsible AI systems with greater transparency and oversight to safeguard financial markets.  I am tremendously grateful for the subcommittee who drafted this report.  I hope that this report will drive future work by the CFTC and other financial regulators as we navigate this evolving technology together.”

Findings of the Committee

Without appropriate industry engagement and relevant guardrails (some of which have been outlined in existing national policies), potential vulnerabilities from using AI applications and tools within and outside the CFTC could erode public trust in financial markets, services, and products.

AI has been used in impactful ways in the financial industry for more than two decades.  In theory, AI represents a potentially valuable tool to improve automated processes governing core functions and to improve efficiency.  This includes, for example, risk management, surveillance, fraud detection, back-testing of trading strategies, predictive analytics, credit risk management, customer service, and the analysis of information about customers and counterparties.  As AI continues to learn from trusted datasets, it can adapt and optimize its algorithms to new market conditions.  As automation and digitization proliferate in financial markets, it is crucial that markets simultaneously prioritize operational resilience, such as cybersecurity measures that are robust to AI-enabled cyberattacks.  AI can monitor transactional data in real time, identifying and flagging any unusual activities.  Advanced machine learning algorithms can aid in the prediction of future attack vectors based on existing patterns, providing an additional layer of cybersecurity.

AI systems can be less beneficial and, in some instances, more dangerous if potential challenges and embedded biases in AI models erode financial gains or, in worst-case scenarios, prompt significant market instability through the introduction of misguided training data or the actions of bad actors seeking to disrupt markets.

Other well-known AI risks include:

  • Lack of transparency or explainability of AI models’ decision-making process (the “black box”);
  • Risks related to data relied on by AI systems, including overfitting of AI models to their training data or “poisoning” real world data sources encountered by the AI model;
  • Mishandling of sensitive data; 
  • Fairness concerns, including the AI system reproducing or compounding biases;
  • Concentration risks that arise from the most widely deployed AI foundation models relying on a small number of deep learning architectures, as well as the relatively small number of firms developing and deploying AI foundation models at scale; and
  • Potential to produce false or invalid outputs, whether because of AI reliance on inaccurate “synthetic” data to fill gaps or because of unknown reasons (hallucinations).

Additionally, where AI resembles more conventional forms of algorithmic decision-making, the risks likely include a heightened threat to market stability and, especially when combined with high-frequency trading, potential institutional and wider market instability.

Where firms use their versions of generative AI, there could be increased risks of institutional and market instability if registered entities do not have a complete understanding of, or control over, the design and execution of their trading strategies and/or risk management programs.  The Financial Stability Oversight Council discussed this in its 2023 Annual Report, stating, “With some generative AI models, users may not know the sources used to produce output or how such sources were weighted, and a financial institution may not have a full understanding or control over the data set being used, meaning employment of proper data governance may not be possible.”  If proper data governance is not currently possible for some registered entities employing generative AI models, the CFTC may have to propose some rules or guidance to enable firms to access the AI models in a semi-autonomous way that will balance the intellectual property protection of the AI model provider with the imperative of the firm using the model to properly manage and report its trading and clearing data.

Additionally, new issues and concerns emerge with the use of generative AI particularly if the generated content is deemed offensive, the AI system hallucinates, and/or humans use AI to produce fake content which is not distinguishable from reality (deep fakes).  Generative AI also raises a host of legal implications for civil and criminal liability.  It also may be unclear who, if anyone, is legally liable for misinformation generated by AI.

More specific areas of risk can be identified within the context of specific use cases in CFTC-regulated markets; therefore, it should be a central focus for the CFTC and registered entities to identify the specific risks that have high saliency for CFTC-regulated markets, and measure the potential harm if risks are insufficiently managed.  To aid in these efforts, the Committee identified a partial list of use cases for AI and likely-relevant risks.  These use cases fall in the areas of trading and investment, customer advice and service, risk management, regulatory compliance, and back office and operations.

These use cases may introduce novel risks to companies, markets, and investors, especially in high-impact, autonomous decision-making scenarios (for example, business continuity risks posed by dependence on a small number of AI firms; procyclicality risks or other risks caused by multiple firms deploying similar AI models in the same market; erroneous AI output or errors caused by sudden, substantial losses to a particular firm, asset class or market such as a flash crash; data privacy risks; and the potential inability to provide a rationale demonstrating fiduciary duties; etc.).

The Commission (including through the Technology Advisory Committee) should start to develop a framework that fosters safe, trustworthy, and responsible AI systems.

Responsible AI is defined as five typical properties which speak to how AI models are designed and deployed: (1) Fairness refers to the processes and practices that ensure that AI does not make discriminatory decisions or recommendations; (2) Robustness ensures that AI is not vulnerable to attacks to its performance; (3) Transparency refers to sharing information that was collected during development and that describes how the AI system has been designed and built, and what tests have been done to check its performance and other properties; (4) Explainability is the ability of the AI system to provide an explanation to users and other interested parties who inquire about what led to certain outputs in the AI’s modeling (which is essential to generate trust in AI from users, auditors, and regulators); and (5) Privacy ensures that AI is developed and used in a way that protects users’ personal information.

The use of AI by CFTC-registered entities will require further exploration and discussion, particularly raising awareness as to the function of automated decision-making models and the necessary governance.  Even where firms disclose their use of technologies, it is not always clear what type of AI they are using (e.g., generative or predictive).  Additionally, financial institutions should take care to respect the privacy of customers’ financial data and behaviors, particularly in the collection and surveillance of financial information.  They should be encouraged to follow proper procedures and compliance with disclosures to the federal government, especially in the face of concerns about national security and financial risk management.  Also, the responsible and trustworthy use of AI will require the creation of a talent pipeline of professionals trained in the development and use of AI products.

The typical properties and deployment practices of AI are beginning to require governance structures, which enable relevant guardrails that protect both consumers and the contexts in which the technology is deployed.  Governance can set guiding principles for standards and practices, such as the Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights” and the NIST AI Risk Management Framework.

Governance also applies to various stages of the technology.  The first type of AI governance is focused on a series of checks, consultations, reporting, and testing at every phase of the lifecycle to make sure that the resulting AI is trustworthy and responsible.  This includes “value alignment”—restricting an AI model to pursue only goals aligned with human values, such as operating ethically.  This also includes determinations of where humans fit in the design, deployment, and oversight of models.  In particular, the roles of human-in-the-loop and human-out-of-the-loop arrangements will impact governance strategies.  Second, AI governance can refer to corporate or public policies/regulations, including the concept of responsible AI.  Third is the AI governance that companies put in place internally.  Fourth are guardrails set by governments that can influence or set requirements for AI governance.

The Committee’s Recommendations

  1. The CFTC should host a public roundtable discussion and CFTC staff should directly engage in outreach with CFTC-registered entities to seek guidance and gain additional insights into the business functions and types of AI technologies most prevalent within the sector.

    The intended purpose of these roundtables and supervisory discussions and consultations is to inform the CFTC about key technical and policy considerations for AI in financial markets, develop common understanding and frameworks, build upon the Committee’s Report, and establish relationships.  Discussion topics should include but not be limited to: humans-in-or-around-the-loop of the technology; acceptable training data use cases; and the development of best practices and standards as it relates to the role of AI.  This will aid the CFTC in ascertaining how AI systems are used in markets and how future AI developments may impact markets.

  2. The CFTC should consider the definition and adoption of an AI Risk Management Framework (RMF) for the sector, in accordance with the guidelines and governance aspects of the framework developed by the National Institute of Standards and Technology (NIST), to assess the efficiency of AI models and potential consumer harms as they apply to regulated entities, including but not limited to governance issues.

    The intended purpose of this recommendation is to ensure some certainty, understanding and integration of some of the norms and standards being developed by NIST, and to introduce these practices to regulated industries and firms.  A potential outcome is a proposed CFTC rule implementing the NIST framework, thus ensuring financial markets and a regulatory system that is more resilient to emerging AI technologies and associated risks.

    The Committee is not recommending that the CFTC add enumerated AI-related risks to existing risk management requirements (at least initially), but rather that it develop appropriate firm-level governance standards over AI systems.  This would accord with the NIST Framework.

  3. The CFTC should create an inventory of existing regulations related to AI in the sector and use it to develop a gap analysis of the potential risks associated with AI systems, in order to determine compliance coverage and identify further opportunities for dialogue on the regulations’ relevancy, potential clarifying staff guidance, or potential rulemaking.

    The Committee recognizes that existing regulations require registrants to manage risks—regulations that likely already reach many AI-associated risks.  In other areas, regulations may need to be clarified through staff guidance or amended through rulemaking.  The intended purpose of this recommendation is to confirm the CFTC’s oversight and jurisdiction over increasingly autonomous models and to make compliance levers more explicit.

  4. The CFTC should establish a process to align its AI policies and practices with those of other federal agencies, including the SEC, Treasury, and other agencies interested in the financial stability of markets.

    The intended purpose of this recommendation is to leverage and utilize best practices across agencies, and potentially drive more interagency cooperation (including through interagency meetings) and enforcement.  The Committee notes the strong nexus between the remits of the SEC and CFTC and the presence of many dual registrants.

  5. The CFTC should work toward engaging staff as both observers and potential participants in ongoing domestic and international dialogues around AI, and where possible, establish budget supplements to build the internal capacity of agency professionals around necessary technical expertise to support the agency’s endeavors in emerging and evolving technologies.

    The intended purpose is to build the pipeline of AI experts and to ensure necessary resources for responsible engagement by internal and external stakeholders.

About the TAC

The Technology Advisory Committee (TAC) was created in 1999 to advise the Commission on complex issues at the intersection of technology, law, policy, and finance.  The TAC’s objectives and scope of activities shall be to conduct public meetings, to submit reports and recommendations to the Commission, and to otherwise assist the Commission in identifying and understanding the impact and implications of technological innovation in the financial services, derivatives, and commodity markets. The TAC will provide advice on the application and utilization of new technologies in financial services, derivatives, and commodity markets, as well as by market professionals and market users.  The TAC may further provide advice to the Commission on the appropriate level of investment in technology at the Commission to meet its surveillance and enforcement responsibilities, and inform the Commission’s consideration of technology-related issues to support the Commission’s mission of ensuring the integrity of the markets and achievement of other public interest objectives.

In September 2022, Commissioner Goldsmith Romero reconstituted the TAC after it had been dormant.  She included well-known AI experts as members of the TAC.  Commissioner Goldsmith Romero included the study of AI in financial services in every TAC meeting.  Additionally, Commissioner Goldsmith Romero created the TAC Subcommittee on Emerging and Evolving Technologies, which drafted this Report.  Tony Biagioli serves as the TAC Designated Federal Officer.  Ben Rankin serves as the TAC Assistant Designated Federal Officer for the Subcommittee on Emerging and Evolving Technologies.  Scott Lee serves as the Commissioner’s senior counsel advising on TAC.

There are five active Advisory Committees[2] overseen by the CFTC.  They were created to provide advice and recommendations to the Commission on a variety of regulatory and market issues that affect the integrity and competitiveness of U.S. markets.  These Advisory Committees facilitate communication between the Commission and market participants, other regulators, and academics.  The views, opinions, and information expressed by the Advisory Committees are solely those of the respective Advisory Committee and do not necessarily reflect the views of the Commission, its staff, or the U.S. government.


[1] A complete list of Members of the Technology Advisory Committee is available at https://www.cftc.gov/About/AdvisoryCommittees/TAC.

[2] A list of the active Advisory Committees is available at https://www.cftc.gov/About/AdvisoryCommittees/index.htm.


RELATED INFORMATION:

CFTC-PR-8903-24

CFTC NEWS - CFTC-PR-8903-24 (MAY 1, 2024)

PRESS RELEASE | CFTC-PR-8903-24

Chairman Behnam Designates Ted Kaouk as the CFTC’s First Chief Artificial Intelligence Officer

Washington, D.C. — Commodity Futures Trading Commission Chairman Rostin Behnam today announced the designation of Dr. Ted Kaouk as the agency’s first Chief Artificial Intelligence Officer.  Dr. Kaouk currently serves as the CFTC’s Chief Data Officer and Director of the Division of Data. In this newly expanded role as the CFTC’s Chief Data & Artificial Intelligence Officer, Dr. Kaouk will be responsible for leading the development of the CFTC’s enterprise data and artificial intelligence strategy to further integrate the CFTC’s ongoing efforts to advance its data-driven capabilities.

“Enhanced data analytics and artificial intelligence have the potential to transform the CFTC’s long-term capabilities for oversight, surveillance, and enforcement in the derivatives markets. As one of my top priorities, the CFTC has been deeply engaged in efforts to deploy an enterprise data and artificial intelligence strategy to modernize staff skillsets, instill a data-driven culture, and begin to leverage the efficiencies of AI as an innovative financial markets regulator,” said Chairman Rostin Behnam. “Ted has the requisite technical and leadership experience needed to lead and implement the CFTC’s data and AI roadmap at this critical stage to achieve the best outcomes for the CFTC and those it serves.”

Before joining the CFTC in December of 2023, Dr. Kaouk served as the Chief Data Officer and Responsible Official for AI at the Office of Personnel Management (OPM), where he was responsible for developing the agency’s first federal government-wide human capital data strategy and data products. Prior to joining OPM, Dr. Kaouk was the Chief Data Officer at the U.S. Department of Agriculture, where he was responsible for establishing the agency’s first enterprise data analytics and AI platform and a data strategy to improve organizational decision-making and outcomes for citizens. Dr. Kaouk served as the first Chair of the Federal Chief Data Officers Council from its inception in 2020 until January 2024.

Dr. Kaouk began his career as a surface warfare officer in the United States Navy. Dr. Kaouk earned a bachelor’s degree from the U.S. Naval Academy, a master’s degree from the University of Virginia, and a PhD from the University of Maryland, College Park.


RELATED INFORMATION:
  • n/a

SEC-PR-2024-70

SEC NEWS - SEC-PR-2024-70 (JUN. 11, 2024)

PRESS RELEASE | 2024-70

Securities and Exchange Commission Charges Founder of AI Hiring Startup Joonko with Fraud

Washington D.C., June 11, 2024 — The Securities and Exchange Commission today charged Ilit Raz, CEO and founder of the now-shuttered artificial intelligence recruitment startup Joonko, with defrauding investors of at least $21 million by making false and misleading statements about the quantity and quality of Joonko’s customers, the number of candidates on its platform, and the company’s revenue.

According to the SEC’s complaint, Joonko claimed to use artificial intelligence to help clients find diverse and underrepresented candidates to fulfill their diversity, equity, and inclusion hiring goals. To raise money for Joonko, the complaint alleges that Raz falsely told investors that Joonko had more than 100 customers, including Fortune 500 companies, and provided investors with fabricated testimonials from several companies expressing their appreciation for Joonko and praising its effectiveness. Raz also allegedly lied to investors that Joonko had earned more than $1 million in revenue and was working with more than 100,000 active job candidates. When an investor grew suspicious of Raz’s claims, Raz allegedly provided the investor with falsified bank statements and forged contracts in an effort to conceal the fraud. According to the complaint, the scheme unraveled in mid-2023 when the investor confronted Raz, who admitted to forging bank statements and contracts and lying about Joonko’s revenue and number of customers.

“We allege that Raz engaged in an old school fraud using new school buzzwords like ‘artificial intelligence’ and ‘automation,’” said Gurbir S. Grewal, Director of the SEC’s Division of Enforcement. “As more and more people seek out AI-related investment opportunities, we will continue to police the markets against AI-washing and the type of misconduct alleged in today’s complaint. But at the same time, it is critical for investors to beware of companies exploiting the fanfare around artificial intelligence to raise funds.”

The SEC’s complaint, filed in the U.S. District Court for the Southern District of New York, charges Raz with violating the antifraud provisions of the federal securities laws and seeks a permanent injunction, civil money penalties, disgorgement with prejudgment interest, and an officer-and-director bar against Raz.

In a parallel action, the U.S. Attorney’s Office for the Southern District of New York today announced criminal charges against Raz.

The SEC’s investigation was conducted by Alicia Guo, Ariel Atlas, Neil Hendelman, and Lindsay S. Moilanen and was supervised by Sheldon L. Pollock of the New York Regional Office. The litigation will be led by Ms. Guo and Ms. Atlas, and supervised by Daniel Loss and Mr. Pollock. The SEC appreciates the assistance of the U.S. Attorney’s Office for the Southern District of New York and the FBI.


RELATED INFORMATION:
  • SEC Complaint (PDF)

CFTC-PR-8897-24

“AI Day” Meeting To Be Held on May 2, 2024 by the Commodity Futures Trading Commission Technology Advisory Committee

CFTC News Release - CFTC-PR-8897-24

Date: Apr. 24, 2024

Date Accessed: Aug. 1, 2024

Source URL: https://www.cftc.gov/PressRoom/PressReleases/8897-24

SEC-PR-2024-36

SEC NEWS - SEC-PR-2024-36 (MAR. 18, 2024)

PRESS RELEASE | 2024-36

SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence

Washington D.C., March 18, 2024 — The Securities and Exchange Commission today announced settled charges against two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., for making false and misleading statements about their purported use of artificial intelligence (AI). The firms agreed to settle the SEC’s charges and pay $400,000 in total civil penalties.

“We find that Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not,” said SEC Chair Gary Gensler. “We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies. Investment advisers should not mislead the public by saying they are using an AI model when they are not. Such AI washing hurts investors.”

“As more and more investors consider using AI tools in making their investment decisions or deciding to invest in companies claiming to harness its transformational power, we are committed to protecting them against those engaged in ‘AI washing,’” said Gurbir S. Grewal, Director of the SEC’s Division of Enforcement. “As today’s enforcement actions make clear to the investment industry – if you claim to use AI in your investment processes, you need to ensure that your representations are not false or misleading. And public issuers making claims about their AI adoption must also remain vigilant about similar misstatements that may be material to individuals’ investing decisions.”

According to the SEC’s order against Delphia, from 2019 to 2023, the Toronto-based firm made false and misleading statements in its SEC filings, in a press release, and on its website regarding its purported use of AI and machine learning that incorporated client data in its investment process. For example, according to the order, Delphia claimed that it “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else.” The order finds that these statements were false and misleading because Delphia did not in fact have the AI and machine learning capabilities that it claimed. The firm was also charged with violating the Marketing Rule, which, among other things, prohibits a registered investment adviser from disseminating any advertisement that includes any untrue statement of material fact.

In the SEC’s order against Global Predictions, the SEC found that the San Francisco-based firm made false and misleading claims in 2023 on its website and on social media about its purported use of AI. For example, the firm falsely claimed to be the “first regulated AI financial advisor” and misrepresented that its platform provided “[e]xpert AI-driven forecasts.” Global Predictions also violated the Marketing Rule, falsely claiming that it offered tax-loss harvesting services, and included an impermissible liability hedge clause in its advisory contract, among other securities law violations.

Without admitting or denying the SEC’s findings, Delphia and Global Predictions consented to the entry of orders finding that they violated the Advisers Act and ordering them to be censured and to cease and desist from violating the charged provisions. Delphia agreed to pay a civil penalty of $225,000, and Global Predictions agreed to pay a civil penalty of $175,000.

The SEC’s Office of Investor Education and Advocacy has issued an Investor Alert about artificial intelligence and investment fraud.

The SEC’s investigations were conducted by Anne Hancock, HelenAnne Listerman, and John Mulhern under the supervision of Kimberly Frederick, Brent Wilner, Corey Schuster, and Andrew Dean with the Division of Enforcement’s Asset Management Unit. Ragni Walker, Thomas Grignol, and Peter J. Haggerty of the Division of Examinations and Roberto Grasso of the Division’s Office of Risk and Strategy assisted with the investigations.


RELATED INFORMATION:

FTC-PR-240227-2

FTC-PR-240227-2 (Feb. 27, 2024)


FTC Action Leads to Ban for Owners of Automators AI E-Commerce Money-Making Scheme – Settlement requires scheme owners and operators to turn over millions in assets for refunds to consumers harmed by bogus earnings promises

The owners of a money-making scheme that claimed to use artificial intelligence to boost earnings for consumers’ e-commerce storefronts have agreed to surrender millions in assets to settle the FTC’s case against them. In addition, all the businesses and two of their owners face a lifetime ban on selling business opportunities or coaching programs involving e-commerce stores.


FTC News Release - FTC-PR-240227-2

Date: Feb. 27, 2024

Date Accessed: Sep. 6, 2024

Source URL: https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-action-leads-ban-owners-automators-ai-e-commerce-money-making-scheme

SEC-PR-2024-13

SEC NEWS - SEC-PR-2024-13 (FEB. 2, 2024)

PRESS RELEASE | 2024-13

SEC Charges Founder of American Bitcoin Academy Online Crypto Course with Fraud Targeting Students. The defendant claimed his hedge fund would use sophisticated tools like artificial intelligence to generate returns.

Washington D.C., Feb. 2, 2024 — The Securities and Exchange Commission today announced that Brian Sewell and his company, Rockwell Capital Management, agreed to settle fraud charges in connection with a scheme that targeted students taking Sewell’s online crypto trading course known as the American Bitcoin Academy. The SEC alleges that the fraudulent scheme cost 15 students $1.2 million.

According to the SEC’s complaint, from at least early 2018 to mid-2019, Sewell encouraged hundreds of his online students to invest in the Rockwell Fund, a hedge fund that he claimed he would launch, and which would use cutting-edge technologies like artificial intelligence and trading strategies involving crypto assets to generate returns for investors. The complaint alleges that Sewell, who resided in Hurricane, Utah, before relocating to Puerto Rico, received approximately $1.2 million from 15 students but never launched the fund nor executed the trading strategies he advertised to investors, instead holding on to the invested money in bitcoin. The complaint further alleges that the bitcoin was eventually stolen when Sewell’s digital wallet was hacked and looted.

“We allege that Sewell defrauded students in his online American Bitcoin Academy of over a million dollars through a series of lies about investment opportunities in his purported crypto hedge fund. Among other things, he falsely claimed that his investment strategies would be guided by his own ‘artificial intelligence’ and ‘machine learning’ technology which, like the fund itself, never existed,” said Gurbir S. Grewal, Director of the SEC’s Division of Enforcement. “Whether it’s AI, crypto, DeFi or some other buzzword, the SEC will continue to hold accountable those who claim to use attention-grabbing technologies to attract and defraud investors.”

The SEC’s complaint, filed in U.S. District Court for the District of Delaware, charges the defendants with violating antifraud provisions of the federal securities laws. The defendants have agreed to settle the charges. Without admitting or denying the allegations in the complaint, the defendants have consented to injunctive relief. Defendant Rockwell Capital Management also agreed to pay disgorgement and prejudgment interest totaling $1,602,089 and Defendant Sewell agreed to a civil penalty of $223,229. The settlement is subject to court approval. 

The SEC’s investigation was conducted by Matthew S. Raalf and Jacquelyn D. King with assistance from Gregory Bockin and Karen M. Klotz, all of the Philadelphia Regional Office. It was supervised by Assunta Vivolo, Scott A. Thompson, and Nicholas P. Grippo.

The SEC’s Office of Investor Education and Advocacy cautions investors to check the background of anyone selling them an investment and to always independently research investment opportunities, and it has issued Investor Alerts on investment frauds touting new technologies.  Additional information is available at https://www.investor.gov/ and https://www.sec.gov/.


RELATED INFORMATION:

CFTC-PR-8854-24

CFTC Customer Advisory Cautions the Public to Beware of Artificial Intelligence Scams

CFTC News Release - CFTC-PR-8854-24

Date: Jan. 25, 2024

Date Accessed: Aug. 1, 2024

Source URL: https://www.cftc.gov/PressRoom/PressReleases/8854-24

CFTC-PR-8853-24

CFTC Staff Releases Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets

CFTC News Release - CFTC-PR-8853-24

Date: Jan. 25, 2024

Date Accessed: Aug. 1, 2024

Source URL: https://www.cftc.gov/PressRoom/PressReleases/8853-24

CFTC-PR-8846-24

Commissioner Goldsmith Romero Announces Agenda for January 8 Technology Advisory Committee Meeting on Artificial Intelligence, Cybersecurity, Decentralized Finance

CFTC News Release - CFTC-PR-8846-24

Date: Jan. 5, 2024

Date Accessed: Aug. 1, 2024

Source URL: https://www.cftc.gov/PressRoom/PressReleases/8846-24