# Meta Partners with Stanford on Forum Around Responsible AI Development

Amid ongoing debate about the parameters that should be set around generative AI, and how it’s used, Meta recently partnered with Stanford’s Deliberative Democracy Lab to conduct a community forum on generative AI, in order to glean feedback from actual users on their expectations and concerns around responsible AI development.

The forum incorporated responses from over 1,500 people from Brazil, Germany, Spain and the United States, and focused on the key issues and challenges that people see in AI development.

And there are some interesting notes around the public perception of AI, and its benefits.

The topline results, as highlighted by Meta, show that:

  • The majority of participants from each country believe that AI has had a positive impact
  • The majority believe that AI chatbots should be able to use past conversations to improve responses, as long as people are informed
  • The majority of participants believe that AI chatbots can be human-like, so long as people are informed

The specific details, though, are interesting.

*Chart: Stanford AI report*

As you can see in this example, the statements that saw the most positive and negative responses varied by region. Many participants changed their opinions on these elements throughout the process, but it is interesting to consider where people see the benefits and risks of AI at present.

The report also looked at consumer attitudes towards AI disclosure, and where AI tools should source their information:

*Chart: Stanford AI report*

It’s interesting to note the relatively low approval for these sources in the U.S.

There are also insights on whether people think that users should be able to have romantic relationships with AI chatbots.

*Chart: Stanford AI report*

A bit weird, but it’s a logical progression, and something that will need to be considered.

Another interesting consideration in AI development, not specifically highlighted in the study, is the controls and weightings that each provider implements within its AI tools.

Google was recently forced to apologize for the misleading and non-representative results produced by its Gemini system, which leaned too heavily towards diverse representation, while Meta’s Llama model has also been criticized for producing more sanitized, politically correct depictions based on certain prompts.

*Image: Meta AI example*

Examples like this highlight the influence that the models themselves can have on the outputs, which is another key concern in AI development. Should corporations have such control over these tools? Does there need to be broader regulation to ensure equal representation and balance in each tool?

Most of these questions are impossible to answer as yet, as we don’t fully understand the scope of such tools, or how they might influence broader public response. But it is becoming clear that we do need some universal guardrails in place to protect users against misinformation and misleading responses.

As such, this is an interesting debate, and it’s worth considering what the results mean for broader AI development.

You can read the full forum report here.
