Bonus Chapter 12: AI, the Future and You
- AI as a tool helps carry out tasks; AI as a compass tries to define direction and values, but only humans can actually set those values.
- To trigger emotional responses that influence behavior (behavioral design, not just math).
- By analyzing nontraditional data (like rent or utility payments), AI can approve more people, but it can also unfairly reject certain groups when the underlying data is biased.
- A robo-advisor automates portfolio management; the trade-off is convenience versus a lack of personal understanding of your situation (see the rebalancing sketch after this list).
- AI may push for maximum efficiency, but this can cause emotional burnout or make financial life feel restrictive.
- It means mistaking algorithmic recommendations for guaranteed safety, even though AI doesn’t know your personal context.
- They save “invisibly,” helping build funds, but users may not learn saving habits themselves (see the round-up sketch after this list).
- Because AI cannot eliminate risk; diversification still protects against market and system failures.
- Algorithmic bias occurs when AI reflects historical inequalities, like penalizing certain ZIP codes or demographics (see the bias-check sketch after this list).
- People without smartphones, internet access, or financial literacy are excluded, which widens inequality.
- Because AI suggestions need human evaluation; only critical thinking ensures alignment with values.
- Values guide whether decisions “make sense” for a person’s life; AI alone cannot account for meaning or cultural context.
- Independence may shift from “doing everything manually” to “choosing when to trust and when to override algorithms.”
- Reflection builds awareness of how the tool affected feelings, habits, and alignment with goals.
- “What kind of human do I want to be in the age of AI?”
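
To make the robo-advisor trade-off above concrete, here is a minimal sketch of the kind of threshold-based rebalancing such a service might automate. The target weights, the 5% drift threshold, and the function name are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of threshold-based rebalancing, the kind of routine a
# robo-advisor might run automatically. All numbers and names are assumptions.

def rebalance(holdings, targets, threshold=0.05):
    """Return the trades (in currency units) needed to pull each asset back to
    its target weight once it drifts more than `threshold` from that target."""
    total = sum(holdings.values())
    trades = {}
    for asset, target_weight in targets.items():
        current_weight = holdings.get(asset, 0.0) / total
        drift = current_weight - target_weight
        if abs(drift) > threshold:
            # Positive amount = sell that much, negative = buy that much.
            trades[asset] = round(drift * total, 2)
    return trades

# Example: a 60/40 stock/bond target after stocks have rallied.
holdings = {"stocks": 7000.0, "bonds": 3000.0}
targets = {"stocks": 0.60, "bonds": 0.40}
print(rebalance(holdings, targets))  # {'stocks': 1000.0, 'bonds': -1000.0}
```

The point of the sketch is the trade-off named above: the rule runs automatically and conveniently, but it applies the same mechanical logic to everyone and knows nothing about the investor's personal situation.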
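
The “invisible” saving mentioned above is often implemented as a round-up rule: each purchase is rounded up to the nearest whole unit and the difference is moved into savings. The rule below is an assumed mechanism used only for illustration, not a description of any specific app.

```python
# An assumed round-up saving rule: save the gap to the next whole dollar.
import math

def round_up_transfers(purchases):
    """Return the amount to move to savings for each purchase."""
    return [round(math.ceil(amount) - amount, 2) for amount in purchases]

purchases = [4.35, 12.80, 7.00]
transfers = round_up_transfers(purchases)
print(transfers)                 # [0.65, 0.2, 0.0]
print(round(sum(transfers), 2))  # 0.85 saved without any conscious decision
```

This is exactly why the habit may never form: the saving happens as a side effect of spending, so the user never practices deciding to save.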
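
Finally, algorithmic bias of the kind described above is often detected by comparing outcomes across groups. The sketch below computes approval rates per group and flags any group whose rate falls far below the best-treated group; the sample decisions, the group labels, and the 0.8 (“four-fifths”) comparison threshold are illustrative assumptions.

```python
# A minimal disparate-impact check: compare approval rates across groups.
# The sample data and the 0.8 threshold are assumptions for this sketch.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> {group: approval rate}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}

def flag_disparate_impact(rates, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the best rate."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                         # group A ~0.67 approved, group B ~0.33
print(flag_disparate_impact(rates))  # ['B'] -- B is approved far less often
```

A check like this does not explain why one group is rejected more often (the cause may be biased historical data, such as ZIP-code proxies), but it illustrates the kind of human evaluation that the answers above say AI outputs still need.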