Strategies & Market Trends : 2026 TeoTwawKi ... 2032 Darkest Interregnum

From: marcher8/29/2023 3:37:51 PM
 
teotwawki...

increasingly, human finance employees are not allowed to override decisions made by ai and algorithms...
indicating that humans are 'risky' and ai/algorithms must protect institutions from humans...
rhymes with turn of the screw...
extends to humans-not-needed...
at all???

not sure where to start with this exploration.
maybe here:

Compliance and Risk Management
In conversations with FINRA staff, industry participants noted that they are spending significant
time and resources in developing AI-based applications to enhance their compliance and risk
management functions. This is consistent with FINRA’s 2018 research on the use of regulatory
technology (RegTech), where we observed that “market participants are increasingly looking
to use RegTech tools to help them develop more effective, efficient, and risk-based compliance
programs.”20 According to an April 2018 research study conducted by Chartis Research and IBM,
which surveyed more than 100 relevant risk and technology professionals, 70% of respondents noted using AI in risk and compliance functions.

Broker-dealers have to keep pace with complex and evolving domestic and international
regulations, as well as a rapidly changing risk landscape (e.g., cybersecurity, internal threats, and
financial risks). At the same time, they now have access to vast amounts of data, inexpensive
computing power, and innovative technologies that present opportunities for them to develop
automated compliance and risk-management tools. Below are some examples that firms shared of how they are incorporating AI in their compliance and risk management tools.

Surveillance and monitoring – AI technology offers firms the ability to capture and surveil
large amounts of structured and unstructured data in various forms (e.g., text, speech, voice,
image, and video) from both internal and external sources in order to identify patterns and
anomalies. This enables firms to holistically surveil and monitor various functions across
the enterprise, as well as monitor conduct across various individuals (e.g., traders, registered
representatives, employees, and customers), in a more efficient, effective, and risk-based
manner. Market participants noted that these tools could significantly reduce the number of
false positives, which, in turn, frees up compliance and supervisory staff time to conduct more
thorough reviews of the remaining alerts, resulting in higher escalation rates. Firms indicate
that these tools offer the ability to move beyond “traditional rule-based systems to a predictive,
risk-based surveillance model that identifies and exploits patterns in data to inform decision-
making.”23 For example, some firms noted the use of AI-based surveillance tools to monitor
communications with customers across various channels, such as emails, social media, and text
messaging. Firms noted that these tools gave them the ability to move beyond a traditional
lexicon-based review to a more risk-based review, such that they could decipher tone, slang, or code words, which may be indicative of potentially risky or non-compliant behavior.
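The shift the report describes, from pure lexicon matching to a risk score that also weighs contextual cues, can be sketched in a few lines of Python. This is a toy illustration only, not any firm's actual surveillance system; every term, weight, and threshold below is invented for the example.

```python
# Toy sketch: a lexicon-only reviewer flags any message containing a
# red-flag term. A risk-based scorer also weighs contextual cues
# (hypothetical code words, urgency phrases) before flagging.
# All terms and weights here are invented for illustration.

LEXICON = {"guarantee", "insider", "off the books"}       # direct red-flag terms
CODE_WORDS = {"special sauce", "the usual arrangement"}   # hypothetical slang
URGENCY = {"before the announcement", "delete this"}      # contextual cues

def risk_score(message: str) -> float:
    """Return a 0-1 risk score combining lexicon hits and contextual cues."""
    text = message.lower()
    hits = sum(term in text for term in LEXICON)
    cues = sum(term in text for term in CODE_WORDS | URGENCY)
    # A lexicon hit alone scores moderately; combined with cues, higher.
    return min(1.0, 0.4 * hits + 0.3 * cues)

def flag(message: str, threshold: float = 0.5) -> bool:
    """Flag only messages whose combined risk score clears the threshold."""
    return risk_score(message) >= threshold
```

A production system would replace the hand-written sets with learned models over tone and phrasing, but the structure, scoring multiple weak signals instead of matching one list, is the same idea.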

Customer identification and financial crime monitoring – AI-based tools are also being developed
for customer identification (also referred to as “know-your-customer” (KYC)) and financial crime
monitoring programs, for example, to detect potential money laundering, terrorist financing,
bribery, tax evasion, insider trading, market manipulation, and other fraudulent or illegal
activities.24 Market participants noted that many traditional KYC and financial crime monitoring
methods are cumbersome and not as effective as desired, often resulting in high rates of false
positives. Consequently, firms have started incorporating AI technologies, such as ML, NLP, and
biometrics, to make their programs more effective and risk based. Firms indicated that these
tools enable them to identify and track customer activity with greater accuracy and efficiency,
and to conduct more holistic and detailed analysis of customer transactions.25
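The per-customer baseline idea behind these monitoring tools can be sketched very simply: learn what is normal for a customer, then flag sharp deviations. This is a minimal illustration with invented data; real AML systems use far richer features (counterparties, geography, timing) and learned models rather than a fixed z-score.

```python
# Toy sketch of transaction anomaly screening: flag a transaction that
# deviates sharply from the customer's own historical pattern.
from statistics import mean, stdev

def anomalous(history: list[float], amount: float, z: float = 3.0) -> bool:
    """Flag `amount` if it lies more than `z` standard deviations from
    the customer's historical mean transaction size."""
    if len(history) < 2:
        return False  # not enough data to learn a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z
```

The point of the learned baseline is exactly the false-positive reduction the report mentions: a $5,000 wire is routine for one customer and an outlier for another, so a single global threshold cannot separate them.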

Regulatory intelligence management – Broker-dealers use a variety of regulatory intelligence
management programs and processes to identify, interpret, and comply with new and changing
rules and regulations across jurisdictions. While this has traditionally been a manual process,
firms are now exploring the use of AI tools to digitize, review, and interpret new and existing
regulatory intelligence (including rules, regulations, enforcement actions, and no-action
letters) and to incorporate appropriate changes into their compliance programs. Some
industry participants noted that automated regulatory intelligence management programs
have the potential to increase overall compliance, while reducing both costs and time spent
implementing regulatory change. According to a research report that explores the use of AI by
financial institutions for risk and compliance functions, “[a]utomating the process of regulatory
change management is something of a ‘holy grail’ in the use of AI.”26 Some regulators are also
exploring and adopting the concept of “machine-readable” rulebooks, which could potentially
enable firms to automate the process of identifying, categorizing, and mapping the rules to
relevant regulatory obligations within their internal workflows.27
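The mapping step described above, matching new rule text to existing internal obligations, can be sketched with simple token overlap. This is a toy illustration; the obligation names and descriptions are invented, and production tools would use NLP embeddings rather than raw word overlap.

```python
# Toy sketch of rule-to-obligation mapping: score each internal
# obligation by Jaccard similarity (token overlap) with the new rule
# text, and return the best match.

def tokens(text: str) -> set[str]:
    """Lowercased word set; a real system would normalize and embed."""
    return set(text.lower().split())

def best_obligation(rule_text: str, obligations: dict[str, str]) -> str:
    """Return the name of the obligation whose description best
    overlaps the rule text."""
    rule = tokens(rule_text)
    def jaccard(desc: str) -> float:
        other = tokens(desc)
        return len(rule & other) / len(rule | other)
    return max(obligations, key=lambda name: jaccard(obligations[name]))
```

A machine-readable rulebook would make the left-hand side of this mapping structured data rather than free text, which is what would let firms automate the categorization step end to end.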

Liquidity and cash management – In our discussions, some firms noted that they are employing
ML applications to optimize their financial liquidity and cash management. Such applications
analyze substantial historical data along with current market data to identify trends, note
anomalies, and make predictions, for example, related to intra-day liquidity needs, peak liquidity demands, working capital requirements, and securities lending demand.
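The prediction piece can be illustrated with the simplest possible model: a recency-weighted average of recent daily peaks. This is a sketch only; the ML applications the report refers to would use richer models and live market features, and the numbers here are invented.

```python
# Toy sketch: forecast the next day's peak liquidity demand as a
# weighted average of the last `window` daily peaks, weighting more
# recent days more heavily.

def predict_peak(daily_peaks: list[float], window: int = 3) -> float:
    """Recency-weighted moving average of the last `window` peaks."""
    recent = daily_peaks[-window:]
    weights = list(range(1, len(recent) + 1))  # most recent day weighted highest
    return sum(w * p for w, p in zip(weights, recent)) / sum(weights)
```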

Credit risk management – Broker-dealer firms are also employing AI-based models to assess
creditworthiness of their counterparties, which both speeds up the credit review process and
allows the incorporation of non-traditional criteria (e.g., information available through social
media). However, some AI-based credit-scoring systems have faced criticism for being opaque
and potentially biased and discriminatory. These models not only analyze traditional credit-
evaluation criteria, such as current financial standing and historical credit history, but may also
identify other demographic factors as deterministic criteria, which could lead to unfair and
discriminatory credit scoring based on biases present in the underlying historical data. (Refer to
Section III for additional discussion on the topic.)
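One concrete form of the bias risk described above is a proxy variable: a seemingly neutral input that correlates strongly with a protected attribute and so reintroduces discrimination the model was supposed to avoid. A minimal proxy check can be sketched as a correlation screen; the data and threshold below are invented, and real fairness testing goes well beyond pairwise correlation.

```python
# Toy sketch of a proxy check: before admitting a feature into a credit
# model, measure its Pearson correlation with a protected attribute.
# A high absolute correlation suggests the feature may act as a proxy.
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def is_proxy(feature: list[float], protected: list[float],
             limit: float = 0.8) -> bool:
    """Flag the feature if it correlates too strongly with the
    protected attribute (threshold invented for illustration)."""
    return abs(pearson(feature, protected)) > limit
```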

Cybersecurity – Cybersecurity continues to be a top challenge for the financial services industry.
Perpetrators are continuously evolving and using sophisticated technology, including AI, to
conduct their attacks. In addition, regulators are requiring financial institutions to develop
comprehensive cybersecurity controls. In response, broker-dealers are starting to incorporate AI
as an essential component of their cybersecurity programs. A recent research report noted that
“sixty-nine percent of organizations believe AI will be necessary to respond to cyberattacks.”28
Incorporating AI into cybersecurity programs may help overwhelmed cybersecurity staff predict
potential attacks, detect threats in real time, and respond to them faster and at lower cost.
Use of AI in cybersecurity programs often begins within insider
risk programs where normal behavior can be learned and then deviations or anomalies can be
flagged as a risk and reviewed.
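The insider-risk pattern described here, learn normal behavior, then flag deviations, can be sketched with one simple signal: a user's typical login hours. This is a toy illustration with invented data; real systems baseline many signals (hosts, data volumes, commands) with learned models.

```python
# Toy sketch of an insider-risk baseline: learn which hours account for
# a meaningful share of a user's past logins, then flag a session at an
# hour outside that learned window.
from collections import Counter

def typical_hours(login_hours: list[int], min_share: float = 0.1) -> set[int]:
    """Hours that account for at least `min_share` of past logins."""
    counts = Counter(login_hours)
    total = len(login_hours)
    return {h for h, c in counts.items() if c / total >= min_share}

def deviates(login_hours: list[int], hour: int) -> bool:
    """True if `hour` falls outside the user's learned baseline."""
    return hour not in typical_hours(login_hours)
```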

finra.org