The Interim Commerce and Economic Development Committee met to discuss artificial intelligence (AI) last week. Specifically, the goal was to learn more about what it is, the current state of federal and state regulation, the risks and challenges facing Hoosiers, and opportunities for AI utilization in existing industries, jobs and the provision of government services.

The committee, chaired by Sen. Scott Baldwin (R-Noblesville), asked every witness who testified some version of the following question: “What role should the General Assembly have in the regulation of AI?”

Generally, the business perspective is that lawmakers should not interfere with innovation and scientific progress through new laws and, potentially, subsequent administrative oversight or regulations. Chamber board member and serial entrepreneur John McDonald answered Sen. Baldwin differently, however, and in a way the committee found quite compelling. McDonald encouraged lawmakers to focus any legislation they contemplate on encouraging competition within the AI marketplace.

The AI example McDonald used in his response was mobile driving/map applications. Currently, one must agree to an application’s terms of service (TOS) before using it. Users have two choices: either agree to the TOS or don’t use the app.

McDonald suggested improving transparency between app manufacturers/distributors and the customer. His argument is that if app makers were required to do a better job of disclosing to consumers how their data will be used or monetized, consumers would be better equipped to decide whether they want to use Google Maps, Apple Maps, Waze, etc. Consumers do not understand how valuable the data they provide to AI companies truly is, and AI companies prefer to keep that information hidden from them.

McDonald – and others who testified – touted AI as a revolutionary technology that will change the lives of everyone and every company. The lingering question is: What statutory and/or regulatory actions are necessary to protect Hoosier residents and businesses from the nefarious use of AI while avoiding stifling entrepreneurship, innovation and research and development?

The Chamber accepted an invitation to testify at Wednesday’s hearing and summarized for the committee other jurisdictions’ legislative approaches to governing AI.

According to the National Conference of State Legislatures, approximately half of states have either proposed or enacted legislation pertaining to AI, with many of the measures seeking to create task forces or fund studies. Other states are launching initiatives outside of the legislature. Utah, for example, created a “deep technology talent initiative” within higher education – its goal is to leverage existing technology to develop new products based on scientific discovery or meaningful engineering innovations, including those related to AI.

On behalf of the Chamber, I provided three areas of potential legislation lawmakers may consider in the future: (i) anti-discrimination, (ii) deceptive online communication/solicitation and (iii) transparency in the provision of government services.

Anti-Discrimination: In 2019, Illinois enacted the Artificial Intelligence Video Interview Act to help curb potential discrimination in the course of hiring. The law requires employers to notify applicants before a videotaped interview if AI will be used to analyze their body movements, hand gestures, breathing patterns or other physical characteristics to help determine their fitness for the position. Employers also must provide each applicant with information before the interview explaining how AI will be applied and what information it uses to evaluate every applicant.

One can envision other fields in which AI might be used to make key decisions; e.g., education, housing, health care, criminal justice, etc. A study from Brookings suggests that states could require businesses to register the AI tools they use along with “results from a system evaluation and bias assessment” to serve as a check against any potential discriminatory practices.

Deceptive Online Communication/Solicitation: Enacted in 2018, California’s Bolstering Online Transparency Act went into effect in 2019. The law makes it unlawful for any person to use a bot to communicate or interact online with another person in California with the intent to mislead that person about the bot’s artificial identity in order to incentivize a purchase or sale of goods or services or to influence a vote in an election.

Lawmakers on the committee reported first-hand experience with bots that have interacted with their campaigns’ social media pages. In fact, the hearing concluded with members deliberating whether the 2024 election created an urgent need to pass legislation prohibiting the use of AI to mislead voters.

Transparency in the Provision of Government Services: Several states (Texas, North Dakota and West Virginia) have appropriated funds to one or more agencies or established a new agency (or council) to inventory and monitor AI systems developed, employed or procured by state agencies.

If the goal is to ensure transparency of the use of AI in government, then lawmakers could require agencies to issue public notices to constituents (including businesses) any time AI is used in the provision of their services.

My testimony immediately followed Ted Cotterill, Indiana’s chief privacy officer at the state’s Management Performance Hub (MPH). Cotterill detailed MPH’s efforts to promulgate an administrative rule that will require MPH to disclose publicly how and why it uses AI in the course of managing the state’s data. Cotterill suggested that MPH’s “partner agencies” will pursue similar rules once MPH’s AI rule becomes final.

In advance of Wednesday’s hearing, I contacted a member of the Chamber’s executive committee whose company is at the forefront of AI technology. In lieu of testifying, he offered, in part, the following commentary that he permitted me to share with committee members:

“[AI] is very nuanced and complicated. AI will destroy and create many jobs. It’s unlike any tech innovation that has come before it. When people compare the impact on humanity to that of discovering fire, I don’t think it’s too much of an exaggeration. Is that next year or 10 years from now? Don’t know. All that said, its near-term applications will touch every company in the state.”

The Chamber is not advocating for immediate legislative action or regulation of AI. However, we believe government will eventually have a role to play.

The Chamber argues that the most important consideration for lawmakers is to ensure Indiana remains hospitable to technology entrepreneurs while simultaneously maximizing transparency in the government’s use of AI and safeguarding Hoosier residents and companies from the nefarious use of AI by bad actors.

At a minimum, state leaders must continue investing resources – time and money – to educate themselves about AI and ensure Indiana is prepared to manage any potential disruption caused by AI to our safety, education, economy, workforce or public services.

Adam H. Berry is vice president of economic development and technology at the Indiana Chamber of Commerce. He joined the organization in 2019.