Ethics In Financial Services On-Demand Webcasts
Using Behavioral Science to Achieve Ethical Excellence
Listen in as our team of thought leaders shares insights on behavioral science discoveries, discusses ethics education programs, and offers a look into the Center for Ethics’ strategy on ethics education.
Ethics In Financial Services Insights
Azish Filabi Ethical Risks of AI in Financial Services
This Financial Planning article highlights Filabi’s presentation on the ethical concerns of AI in financial services. The concerns include transparency, data privacy, racial bias, and over-reliance on AI-driven tools. Filabi emphasized the importance of human oversight, long-term thinking, and the need for regulatory accountability in AI use. She also discussed how historical biases in data, such as when big data is used as an input to AI-enabled underwriting, can lead to unfair discrimination, an issue already being addressed by some state regulators and the NAIC through testing and risk management frameworks.
More from The College
Read on to discover additional insights on Filabi’s presentation!
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Ethics In Financial Services Insights
AI Ethics in Financial Services
This panel discussion, led by Kevin Crawford of Northwestern Mutual, highlighted AI’s current impact, its future potential, and the ethical implications tied to its adoption.
Filabi emphasized the importance of managing AI systems ethically and transparently. Drawing from her policy background, she raised concerns about the potential for AI to reinforce faulty data or human biases, warning that without proper oversight, AI could lead to significant harm. Filabi illustrated this by questioning the fairness of using AI for underwriting that relies on private data from individuals without their consent, stressing that the misuse of AI in such ways could have severe consequences. While recognizing AI’s potential to improve fairness and efficiency, she noted these benefits can only be realized through ethical management.
Ethical Concerns Surrounding AI
Filabi also discussed the potential of generative AI to enhance financial education, particularly for underserved populations, by increasing access to critical knowledge and fostering trust in the financial industry. However, she warned that this opportunity comes with ethical responsibilities: AI systems must be competent and transparent, and misinformation or misuse of personal data could erode trust. Bennetts responded that there is a challenge in maintaining privacy in an era where AI systems often access personal data without user awareness. He expressed concern that people have become complacent about these invasions of privacy, accepting them as part of daily life. Bennetts also noted that while AI has the potential to serve as an equalizer, it may also deepen inequality if access to technology remains uneven, particularly for those without the resources to understand or use AI effectively.
The Path Forward: Ethical Management of AI
Ludwig echoed these concerns, emphasizing the need for AI literacy. He highlighted that professionals and consumers alike must understand how AI operates and where their data is being used. Ludwig pointed out that AI's growing complexity could create a divide between those who master the technology and those left behind, reinforcing the need for clear, accessible education on AI systems.
Filabi turned the conversation to regulatory developments, noting that states like Colorado have taken the first steps with comprehensive AI laws aimed at protecting consumers and ensuring fairness in AI-driven decisions. She stressed the importance of setting clear standards on data quality and transparency to help regulate AI’s impact on financial services.
In her final remarks, Filabi emphasized that professionals must remain accountable for their use of AI, even in the absence of robust regulation. While AI holds great promise for advancing fairness and efficiency, its success depends on responsible, transparent management. The panel concluded with the consensus that AI, while a powerful tool, must be approached thoughtfully to ensure its implementation in financial services serves the greater good and enhances trust in the industry.
More from The College
- To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Ethics In Financial Services Philanthropic Planning Insights
Ethics Through the Lens of Philanthropic Planning
Azish Filabi, JD, MA, managing director of the American College Cary M. Maguire Center for Ethics in Financial Services, sits down with Jennifer Lehman, PhD, JD, CFP®, CAP®, director of The College’s Chartered Advisor in Philanthropy® (CAP®) Program and assistant professor of philanthropy, to discuss the ethical considerations advisors and other financial professionals must weigh when offering philanthropic planning services in a new continuing education (CE) opportunity available on Knowledge Hub+.
Filabi kicks off the discussion by reflecting on her professional history, stating that she has always worked to ensure organizations have the right governance structures and tools in place so they can consider ethics in their own decision-making as well as the impact they’re having on society.
She goes on to discuss the work performed at the Center for Ethics in Financial Services, stressing the importance of the group’s research mission and outreach. In reference to this research, Filabi explains its purpose as “learning about the challenges that leaders and individuals are facing with respect to ethics so that [we] can reflect back on the work that we do.” By completing this research, Filabi believes the Center for Ethics will be able to provide the industry with valuable lessons relating to ethical concerns in the field.
Trust in Financial Services
One of these key lessons focuses on the topic of trust in financial services. Filabi shares that “everyone I talk to highlighted trust as being a key factor in effective work that we do because it's essentially the glue that brings it all together. Some people went as far as to say that they're not in the business of selling financial products. They're selling trust because people have to trust us as professionals to be able to have their money in our good hands.” Because trust is so important to financial services, the Center offers a certificate program on Advanced Strategies for Building Trust to help leaders access tools and frameworks to address this challenge.
Lehman ties this back to the mission of The American College of Financial Services as a whole, stating a goal of providing applied financial knowledge and education, promoting lifelong learning, and advocating for ethical standards to benefit society. As Lehman points out, philanthropy is a key part of the profession tied to social impact.
Filabi weighs in on this, providing a description of ethics in the industry. She emphasizes the importance of doing no harm and acting in accordance with legal requirements while navigating opportunities. However, she points out that this is a more simplistic view of ethics. When providing her perspective, she states, “We at the center like to think about ethics, not only about the compliance and legal challenges that people face in their day to day, but about the gap between these minimum standards that are expected of us and the day to day challenges that people face in their work…what is the standard that clients expect from you so that they can trust you that might not already be codified in the law?”
How Do We View Ethics in the Context of Philanthropy?
Filabi continues by tying this to the field of philanthropy, discussing concepts such as conflicts of interest, duty of care, and loyalty. She acknowledges this is a challenging balancing act, one that also requires financial professionals to consider social impact as part of the equation.
Filabi contends that the importance of social impact is especially critical for the philanthropic sector in recent years. She supports this assertion by stating, “Government budgets are really crunched, and so that means that the philanthropic sector is playing a huge role in addressing some of the business and (societal) challenges that we face in the economy, and I think that should be part of an understanding of ethical duties and obligations as we think about social impact.”
Lehman and Filabi go on to discuss several additional topics relating to ethics in the philanthropic sector including the Donor Bill of Rights, what an organization should do if a donor’s values don’t align with the organization’s values, key items to consider when weighing the ethical implications of our choices, and more in this discussion, available exclusively on Knowledge Hub+!
To access this learning opportunity and other valuable CE, visit Knowledge Hub+.
More From The College:
- Gain philanthropic and legacy planning knowledge with our CAP® Program.
- Learn about the American College Center for Philanthropy and Social Impact.
- Join the waitlist to be notified when enrollment opens for the TPCP™ Program.
- Learn about the American College Cary M. Maguire Center for Ethics in Financial Services.
Ethics In Financial Services Insights
Drivers of Trust in Consumer Financial Services
The article uses the Center for Ethics’ Trust in Financial Services Study (2021 Consumer Survey) to explore the drivers of trust in consumer financial services. By contextualizing the Center’s research within existing academic research, the study highlights how both corporate reputation and a consumer’s personal values play a critical role in establishing and maintaining trust in the financial services sector.
The Importance of Building Trust
The research, based on responses from nearly 1,700 U.S. consumers, examines trust levels associated with seven types of financial service providers including national banks, credit unions, and online-only financial institutions. One of the key findings is the stark contrast in how trust is built among "familiar non-customers" and "customers." For familiar non-customers – respondents who don’t have a relationship with a firm but are familiar with the services provided – trust tends to be influenced by external indicators such as reviews, third-party recommendations, and the overall reputation of the institution.
This dynamic is especially important for digital-only providers, which are newer to financial services; for such firms, trust is often built through indirect experiences. In contrast, for customers who already have established relationships with a provider, trust is more deeply rooted in personal interactions. These customers value shared ethics, protection of their interests, and personalized services, particularly from institutions like credit unions, national banks, and investment firms.
Values Associated with Trust
The study underscores the need for financial institutions to differentiate their trust-building strategies for these two groups. For institutions aiming to attract familiar non-customers, focusing on reputation management and enhancing their public image is critical. By prioritizing transparency, aligning operations with core values, and offering tailored customer experiences, financial service providers can strengthen trust with clients. Conversely, when maintaining existing customer relationships, reinforcing trust through personalized, value-aligned services is key. In addition, these customers consider whether firms are actively protecting their interests.
These findings offer valuable insights for financial institutions looking to navigate the competitive and increasingly digital marketplace. Moreover, the research offers practical guidance for building, maintaining, and repairing trust differentiated by the type of financial entity and the type of customer in the relationship.
More From The College
For further details on the research findings, you can access the full report in the Financial Planning Review.
Pattit, J. M., & Pattit, K. G. (2024). An empirical exploration of the drivers of trust in consumer financial services. Financial Planning Review, e1190.
For more information on the Center’s research on trust's role in financial services, get our full report.
Ethics In Financial Services Insights
AI Governance in Life Insurance
In the afternoon panel on unfair discrimination in insurance underwriting, Azish Filabi, JD, MA, managing director of the Center for Ethics in Financial Services, and Sophia Duffy, JD, CPA, AEP®, associate professor of business planning at The American College of Financial Services, presented on the ethical and governance challenges of artificial intelligence (AI) in the life insurance industry.
The panel highlighted the ethical and regulatory challenges of AI in the life insurance industry, drawing insights from a 2022 academic paper with the National Association of Insurance Commissioners (NAIC), "AI-Enabled Underwriting Brings New Challenges for Life Insurance: Policy and Regulatory Considerations," and a 2021 white paper, "AI Ethics and Life Insurance: Balancing Innovation with Access."
The panelists emphasized that AI differs from traditional algorithms because complex machine learning systems can obscure the decision-making rationales in underwriting, which creates new legal and ethical challenges. Moreover, once AI systems are embedded within a process, their operations become difficult to disentangle. The opacity of these systems, often referred to as "black box" systems, poses significant technical challenges, necessitating increased technical literacy and education. The proprietary nature of many AI systems adds another layer of complexity. This opacity and complexity make it difficult to ensure that these systems comply with anti-discrimination laws, particularly those that prohibit discrimination based on legally protected characteristics, like race.
AI systems can inadvertently result in unfair discrimination by using data sources that have a historical bias or serve as proxies for protected characteristics, the panelists shared. This can lead to outcomes that are not just unfair, but also potentially illegal. However, determining who is responsible for these decisions is not straightforward. The chain of data ownership involves big data aggregators, algorithm developers, and insurers/lenders. While insurers are ultimately accountable for their products, they may lack the technical expertise to fully understand the intricacies of the AI systems they use. This creates a disconnect where insurers may not have the ability to shape or even fully comprehend the systems they deploy.
Another issue presented was the difficulty in defining and measuring proxy discrimination when it comes to AI-enabled underwriting. Insurers are permitted to use an underwriting factor if it’s related to actual or reasonably anticipated experience, but there’s no clear-cut standard for how effective that factor needs to be. This ambiguity means each insurer’s justification for using a particular factor can be unique, making regulation even more challenging.
Ensuring insurers' systems align with regulations while integrating various external consumer data points is crucial. A major concern is that consumers may remain unaware of which data is used, such as credit scores, credit history, and social media data, raising questions about fairness and the ability to correct inaccuracies. The use of irrelevant or incorrect data can introduce mistakes that become embedded early in the data chain. Such embedded mistakes can be particularly pernicious in complex AI systems that use proxy factors to render decisions, where a single mistaken input can propagate into a faulty outcome.
To mitigate these risks, researchers at The College recommend a three-part framework: establishing national standards to set boundaries for acceptable design and behavior, implementing a certification system to verify that systems are developed in accordance with these standards, and conducting periodic audits of system outputs to ensure ongoing compliance.
Developing nationally accepted standards would involve the creation of guidelines to ensure AI systems adhere to best practices in system design and actuarial principles. This process requires collaborative research and careful consideration of who should define these standards. Key areas to address include: behavioral validity, or ensuring that data accurately reflects the behavior of interest; actuarial significance, assessing how inputs contribute to risk evaluation; and social welfare outcomes, defining a financially inclusive marketplace.
As the panel discussion ended, the conversation turned to the importance of testing for unfair discrimination in AI-enabled underwriting. Emerging rules suggest both objective and subjective approaches. For instance, an objective method might involve a 5% threshold for evaluating disparate impact on race, while a subjective approach would permit insurers to develop their own AI testing methodologies.
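To make the objective approach concrete, here is a minimal sketch of such a disparate impact check, assuming a hypothetical rule that flags any group whose approval rate falls more than 5 percentage points below the overall rate. The group labels, outcomes, and exact metric are invented for illustration and do not reflect any regulator's actual testing methodology.

```python
# Illustrative disparate-impact check (hypothetical rule and data).
# Flags any group whose approval rate trails the overall approval rate
# by more than a fixed threshold (5 percentage points by default).

def approval_rate(outcomes):
    """Fraction of applications approved (True = approved)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_flags(outcomes_by_group, threshold=0.05):
    """Return {group: gap} for groups whose approval rate falls more than
    `threshold` below the overall rate (0.05 = 5 percentage points)."""
    all_outcomes = [o for group_outcomes in outcomes_by_group.values()
                    for o in group_outcomes]
    overall = approval_rate(all_outcomes)
    return {
        group: round(overall - approval_rate(group_outcomes), 3)
        for group, group_outcomes in outcomes_by_group.items()
        if overall - approval_rate(group_outcomes) > threshold
    }

# Hypothetical underwriting outcomes (True = policy approved)
outcomes = {
    "group_a": [True] * 80 + [False] * 20,  # 80% approved
    "group_b": [True] * 65 + [False] * 35,  # 65% approved
}

flags = disparate_impact_flags(outcomes)  # overall rate is 72.5%
```

In this toy example, only group_b is flagged, since its 65% approval rate trails the 72.5% overall rate by 7.5 points. A subjective approach, by contrast, would leave the choice of metric and threshold to each insurer's own documented methodology.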
Critical questions remain. Should there be a unified approach to testing for unfair discrimination resulting from insurance underwriting? Who should have the authority to determine this approach? And how transparent should insurers be with consumers about data usage and privacy rights? These considerations are essential as we navigate the complexities of AI-enabled underwriting and strive for a fair and equitable system.
The future of insurance underwriting is undoubtedly tied to AI, and regulators and industry can together make sure that future is fair and equitable. We hope our study sparks a necessary conversation within the industry and among regulators.
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Ethics In Financial Services Insights
Unpacking Fairness in Insurance
Panelists included Lisa A. Schilling, FSA, EA, FCA, MAAA, Director of Practice Research, Society of Actuaries Research Institute, and Peggy Tsai, Chief Data Officer, BigID. The session underscored the challenges posed by AI, emphasizing the importance of strong governance, transparency, and ongoing process enhancements to maintain fairness in data practices and ensure equitable outcomes in insurance.
Fairness in insurance products and processes has long been a hallmark of good management for successful insurance companies. Regulations require that companies not be unfairly discriminatory toward consumers in their processes and practices. This issue has come to the forefront in the industry recently amid advances in artificial intelligence (AI). Panelists underscored that AI and advanced analytics have heightened both the positive potential and negative implications of existing insurance practices. The discussion emphasized the need for a nuanced approach to fairness that addresses the complexities introduced by these technologies.
A pivotal theme was the significance of data quality and governance in ensuring fairness. Highlighting the inherent biases that can emerge during data collection, panelists stressed the ongoing recalibration and transparency necessary in model outputs to mitigate these biases effectively. Robust stewardship practices should prioritize data integrity before model building and decision-making. Ensuring accurate risk classification aligned with expected claims values can serve as a fundamental aspect of actuarial fairness.
The panel then examined the challenges posed by data proxies and synthetic data in insurance models. Synthetic data is data produced by machines, sometimes to represent human behaviors. Data proxies similarly rely on machine-driven analysis to represent real-world behavior. Concerns were raised about the accuracy and representativeness of these proxies, particularly in reflecting real-world demographics. The difficulty of removing synthetic data once it is integrated into models underscored the importance of rigorous validation and transparency throughout the modeling process, beginning at the earliest stages of development. A critical aspect of the discussion addressed the use of proxies for race and ethnicity in insurance, highlighting the ethical and regulatory implications. Panelists stressed the necessity of rigorous data management and model validation processes to ensure compliance and fairness in risk assessment practices.
The discussion concluded with a consensus on the imperative for continuous monitoring, recalibration, and transparent communication in insurance practices. Balancing data-driven decision-making with fairness and objectivity remains a paramount challenge, requiring ongoing efforts to align technological advancements with ethical standards.
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Ethics In Financial Services Insights
Insights and Highlights: Self-Regulatory Approaches to AI Governance
The panelists emphasized that good model development practices, irrespective of regulatory requirements, lead to better performance and predictability in tech investments. Companies implementing self-governance ahead of regulations often perform better by integrating risk management with economic considerations. The NIST framework, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” addresses both technical and social impacts of AI, ensuring comprehensive governance.
Following a 2021 congressional mandate, NIST developed a risk-based framework for managing AI models and practices. This flexible resource helps organizations govern, map, measure, and manage bias in AI. By focusing on governance, policies, procedures, and organizational culture, organizations can take a comprehensive approach to this challenge. Through proactive governance, the framework aims to help organizations promote trustworthy AI practices, including model validity, reliability, security, resilience, explainability, accountability, transparency, privacy, fairness, and bias mitigation.
The panel also discussed the relationship between federal and state initiatives and the role of self-regulation in AI governance. One panelist mentioned the AI Executive Order's contribution to defining real risks and the ongoing work on an AI risk management profile for generative AI. Another stressed the need for clear documentation and repeatable practices to provide assurance to partners.
The conversation also covered the challenges of accountability within organizations, highlighting the need for a cultural shift towards responsible AI use. The panel emphasized the importance of integrating AI risk management with broader enterprise risk management frameworks and adopting a shared responsibility model with third-party vendors.
Looking forward, one panelist predicted that AI risk management would become a distinct job category, with an increased focus on the societal impacts of AI. Another anticipated a progressive impact on software quality control driven by AI, leading to more regulated software development practices.
In summary, the panel highlighted that, given the evolving regulatory landscape, there is a need for clear and transparent AI governance practices, as well as for interdisciplinary collaboration and cultural shifts toward responsible AI use.
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Ethics In Financial Services Practice Management Insights
Optimizing Your Practice’s Relationship With AI
With the growth of AI in the workplace, it may be time to consider how you can best implement AI tools to automate busywork and turn your attention to more valuable tasks that directly support your clients. Fortunately, our new practice management workbook is here to assist!
Learn more about common AI topics like the risks associated with AI, how much time you can save, and the best way to get started. We’ll also provide a closer look into more specific topics such as evaluating the prompts you provide to your AI tools and which activities in your workplace are best to delegate to AI.
Increase your knowledge on these trending topics and more as you propel your practice into the modern era of advising with help from our informative practice management workbook.