Politics : Formerly About Advanced Micro Devices


To: Tenchusatsu who wrote (1459333) | 5/29/2024 4:30:57 PM
From: Broken_Clock | 1 Recommendation

Recommended By
longz

TenQ

Why am I not surprised?

hbr.org

"There is another gulf, however, that ought to be given equal, if not higher, priority when thinking about these new tools and systems: the AI trust gap. This gap is closed when a person is willing to entrust a machine to do a job that otherwise would have been entrusted to qualified humans. It is essential to invest in analyzing this second, under-appreciated gap — and in what can be done about it — if AI is to be adopted widely.

The AI trust gap can be understood as the sum of the persistent risks (both real and perceived) associated with AI; depending on the application, some risks are more critical than others. These cover both predictive machine learning and generative AI. According to the Federal Trade Commission, consumers are voicing concerns about AI, while businesses are worried about several near- to long-term issues. Consider 12 AI risks that are among the most commonly cited across both groups:

  • Disinformation
  • Safety and security
  • The black box problem
  • Ethical concerns
  • Bias
  • Instability
  • Hallucinations in LLMs
  • Unknown unknowns
  • Job loss and social inequalities
  • Environmental impact
  • Industry concentration
  • State overreach"