Evolution, Application And Auditing Of Artificial Intelligence

George Ng | Forbes | 2024-04-26

As artificial intelligence (AI) progresses at a breakneck pace, the complexity of its applications necessitates rigorous oversight. This exploration traces AI's development, highlights its diverse implementations and underscores the importance of reviewing and managing emerging risks effectively.

Modeling History

1990s

Empirical analysis relied heavily on parametric estimation due to its computational efficiency and the small datasets of the era. Despite the existence of decision trees and neural networks, their use was limited by technological constraints.

2000s

The new millennium ushered in significant advances in computing power and data handling, shifting focus toward non-parametric methods. The rise of open-source tools like Scikit-learn and high-profile competitions like the Netflix Prize showcased the effectiveness of new algorithms. Ensembles in particular became popular, especially methods that aggregate many decision trees, such as Random Forests and Gradient Boosting Machines.

2010s

The 2010s were marked by the mainstream adoption of neural networks and deep learning, fueled by GPU-accelerated computation and a surge in data from sensors and smartphones. Architectures such as RNNs, CNNs and, most notably, Transformers advanced rapidly thanks to algorithmic improvements like dropout, better activation functions and the attention mechanism, which boosted training efficiency and sharpened model focus.

Today

Transformers, leveraging the attention mechanism, have become central to AI, enabling unprecedented understanding and content generation. Backed by vast computational resources and sophisticated algorithms, these models showcase superior accuracy, adaptability and human-like output.

Example Applications

AI’s impact spans various sectors, each facing unique risks associated with model inaccuracies.

Defense And Security: Errors in threat detection could compromise national and global security.

Healthcare: Diagnostic inaccuracies can lead to grave health consequences.

Government: Predictive policing and public assistance errors may severely impact marginalized communities.

Financial Technology: Fraud detection and credit scoring inaccuracies can significantly affect financial outcomes.

Entertainment: Minor inconveniences arise from errors in content recommendation algorithms.

Applicable Regulation

Existing consumer protection and individual rights laws already cover many AI applications, and they continue to evolve, in part due to concerns surrounding new technologies like AI.

Safety

Autonomous Vehicles: Bodies like the National Highway Traffic Safety Administration (NHTSA) oversee autonomous vehicle programs by Tesla, Uber and Waymo.

Healthcare Diagnostics: Practice Fusion paid $145 million to resolve allegations that it accepted kickbacks to embed software alerts that encouraged over-prescription of opioids. Systems such as IBM Watson Health have recommended unsafe and incorrect cancer treatment options.

Online Safety: COPPA enforcement, including fines against platforms like YouTube and Epic Games, showcases efforts to bolster content moderation and age verification to protect underage users in digital environments. Recent initiatives such as the EU's Digital Services Act (DSA) aim to hold platform operators more accountable.

Anti-Discriminatory

Financial Loans: The Consumer Financial Protection Bureau (CFPB) has issued guidance on credit denials by lenders using AI.

Insurance Underwriting: Regulatory oversight ensures fairness in AI applications within the insurance industry, with state-specific regulations aiming to prevent discriminatory underwriting based on proxies like ZIP code.

Hiring Practices: Amazon self-regulated by discontinuing the use of an AI recruiting tool after discovering it favored male over female candidates.

Privacy

The proliferation of AI and machine learning models has been significantly fueled by access to vast amounts of data. However, concerns arise when this data is collected or utilized without explicit consent.

Clearview AI was fined €20 million by Italy's data protection authority and £7.5 million by the U.K.'s ICO, and in 2021 received orders from France's CNIL, all for scraping images from the web without consent.

Facebook was fined $5 billion by the FTC in 2019 for its role in the Cambridge Analytica scandal, where user data was shared without proper authorization.

TikTok settled with the FTC for $5.7 million in 2019 over the illegal collection of children's data, highlighting the critical need for protecting minors online. Additionally, TikTok faced a €750,000 fine from the Netherlands' DDPA in 2021 for similar violations.

Model Review

Effective internal and external review mechanisms are crucial for managing AI risks and include:

Documentation: Comprehensive documentation of data sources, training processes, model iterations and the model's purpose and intended users is foundational for transparency.
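
One lightweight way to make this documentation auditable is to store it as a structured model card alongside the model artifact. The Python sketch below is illustrative only; the field names and values are assumptions, not an established standard.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal structured documentation stored next to the model artifact."""
    name: str
    version: str
    purpose: str                 # the decision the model supports
    intended_users: list[str]
    data_sources: list[str]      # provenance of training data
    training_process: str        # algorithm, validation scheme, train date
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical card for an illustrative loan-approval model.
card = ModelCard(
    name="loan-approval",
    version="2.3.1",
    purpose="Rank consumer loan applications by predicted default risk",
    intended_users=["credit underwriting team"],
    data_sources=["internal loan applications, 2018-2023"],
    training_process="Gradient boosting, 5-fold cross-validation",
    known_limitations=["Sparse coverage of thin-file applicants"],
)

# Persist alongside the model so reviewers can reconstruct provenance.
with open("model_card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```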

Bias And Fairness Evaluation: Clearly define the model's objective, acknowledging that optimization for business metrics can unintentionally promote harmful behaviors.

Regularly assess training data representativeness, especially when the application context diverges from the original dataset.

Employ unrelated models for cross-evaluation and maintain ongoing evaluation against a diversified corpus with clear, objective fairness metrics (e.g., balancing accuracy across demographics for loan approvals), as illustrated below.
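
A minimal sketch of such a fairness metric, assuming toy data and a hypothetical demographic attribute, might compare accuracy across groups and report the worst-case gap:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy plus the worst-case gap between groups.

    A large gap on, say, a loan-approval model flags a fairness
    issue before deployment. Group labels here are hypothetical.
    """
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        per_group[g] = float(np.mean(y_true[mask] == y_pred[mask]))
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Toy data with a hypothetical demographic attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

per_group, gap = accuracy_by_group(y_true, y_pred, groups)
print(per_group, "gap:", round(gap, 3))  # {'A': 0.5, 'B': 0.75} gap: 0.25
```

In practice, the choice of metric (accuracy parity, equalized odds, demographic parity) should follow from the application's fairness objectives.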

Bias Adjustments: Implement algorithmic adjustments such as re-weighting, adversarial debiasing or incorporating fairness constraints to mitigate bias; a re-weighting sketch appears below.

Use supplementary models for additional oversight and adjustment.
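
Of the adjustments above, re-weighting is the simplest to illustrate. The sketch below follows the Kamiran-Calders style of reweighing, in which each (group, label) cell is weighted so that group membership becomes independent of the outcome in the weighted training data; the dataset is synthetic and the protected attribute is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(groups, labels):
    """Kamiran-Calders style re-weighting: weight each (group, label)
    cell by P(group) * P(label) / P(group, label), so that group
    membership becomes independent of the outcome in the weighted data."""
    weights = np.zeros(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            observed = np.mean(cell)
            if observed > 0:
                expected = np.mean(groups == g) * np.mean(labels == y)
                weights[cell] = expected / observed
    return weights

# Synthetic training data: X features, y outcomes, g a hypothetical
# protected attribute with an imbalanced 70/30 split.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)
g = rng.choice(["A", "B"], size=200, p=[0.7, 0.3])

w = reweighing_weights(g, y)
model = LogisticRegression().fit(X, y, sample_weight=w)
```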

Performance And Ongoing Monitoring: Understanding model performance and detecting deviations are pivotal for maintaining confidence in AI applications, and the required rigor varies significantly across domains.

Establish robust mechanisms for continuous monitoring and feedback, adapting to changes in rules, technology and societal expectations; a minimal drift check is sketched below.

Performance expectations should be aligned with the application's criticality (e.g., higher for healthcare diagnostics compared to entertainment recommendations).
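
One concrete way to implement the continuous monitoring described above is a drift statistic over the model's score distribution. The sketch below uses the Population Stability Index, a common choice in credit-risk monitoring; the thresholds in the comment are conventional rules of thumb, not prescriptions from this article, and the scores are synthetic.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI) between the score distribution
    seen at validation time ('expected') and live traffic ('actual').
    Conventional rules of thumb (assumptions, tune per application):
    < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero or log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic scores: live traffic drawn from a slightly shifted distribution.
rng = np.random.default_rng(1)
validation_scores = rng.beta(2, 5, size=5000)
live_scores = rng.beta(2, 4, size=5000)

print(f"PSI = {population_stability_index(validation_scores, live_scores):.3f}")
```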

Conclusion

Adopting these review practices requires careful evaluation of AI's application and the level of scrutiny needed. Sometimes, simpler models or human oversight might be safer, balancing innovation with risk. As reliance on external AI solutions grows, companies bear the responsibility for their use. Proactively managing these risks is crucial to avoid legal and reputational damage, underscoring the importance of a strategic, ethical approach to AI deployment.