    Stepping into my penultimate year at ESADE, I recently had my first session on how technological advances are shaping social and political structures. This theme is central to our course on Managing Technology and Digitalization to Implement Change in B-Corp Organizations, and it got me thinking: what better time to explore the idea of technology as an artefact, especially through something I’m passionate about, Conversational AI?

    The intersection of technology and sociology isn’t just some abstract concept. It gives us a crucial lens for understanding the deep impact of disruptive innovations like Conversational AI. Too often, we think of AI in terms of convenience or productivity. But beneath the surface, it’s reshaping our behaviours, choices, and even what we consider normal or acceptable.

    Embedded Power Structures in AI Systems

    Conversational AI isn’t just mimicking human dialogue. It absorbs and amplifies the biases present in the data it’s trained on. These biases aren’t random; they reflect whose voices are prioritized in society. When AI models rely heavily on data from specific demographics — typically Western, male, and English-speaking — they don’t just mirror these biases; they magnify them, reinforcing existing social hierarchies.

    The real concern goes beyond biased data. It’s about how these biases are operationalized. AI is often designed to achieve outcomes that serve corporate or institutional goals, not fairness or equity. This design can steer user behaviour toward predetermined outcomes that align with the interests of the powerful, often at the expense of marginalized groups.
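    To make the magnification point concrete, consider greedy decoding: a model that always picks the most likely continuation turns a statistical skew in its training data into a deterministic output. A minimal Python sketch, with invented numbers (the 70/30 split and the vocabulary are purely illustrative):

```python
# Minimal sketch of bias amplification. A hypothetical corpus where 70%
# of "the engineer said ..." sentences continue with "he" becomes a model
# that says "he" 100% of the time under greedy decoding.
from collections import Counter

# Hypothetical training data: the pronoun following "the engineer said"
corpus = ["he"] * 70 + ["she"] * 30   # 70/30 skew in the data

counts = Counter(corpus)

def greedy_complete(counts: Counter) -> str:
    """Pick the single most frequent continuation, as greedy decoding does."""
    return counts.most_common(1)[0][0]

print("corpus share of 'he':", counts["he"] / sum(counts.values()))  # 0.7
print("model output:", greedy_complete(counts))                      # always 'he'
```

    A 70% skew in the data becomes a 100% skew in the output. Sampling can soften this, but any decoding strategy that favours likely continuations pushes in the same direction.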

    Manipulating Human Behaviour Subtly

    AI-driven interactions don’t just reflect what we input; they actively shape how we think and act. This goes deeper than linguistic convergence — where users unconsciously mimic the AI’s style. These systems can guide decisions, manipulate emotions, and subtly shift perceptions.

    Take AI therapy bots. They are built to offer mental health support, but when they depend on narrow psychological frameworks or biased data, they can push users toward certain diagnoses or medicalize normal emotions. This nudging can shift societal norms toward viewing more aspects of human experience as problems needing clinical intervention, constraining what’s considered “normal.”
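    One way this nudging can happen is purely through a design-time threshold. A hypothetical sketch (the scores, cutoffs, and labels are all invented): the same self-reported score lands on either side of “normal” depending only on where the designer put the cutoff.

```python
# Hypothetical illustration of threshold-driven medicalisation: the same
# screening score is "normal" or "clinical" depending only on a cutoff
# chosen at design time. Scores, cutoffs, and labels are invented.
def screen(sadness_score: float, cutoff: float) -> str:
    """Classify a 0-10 self-reported sadness score against a design-time cutoff."""
    return "refer for clinical follow-up" if sadness_score >= cutoff else "within normal range"

score = 4.0  # e.g. grief after a loss: unpleasant but ordinary

print(screen(score, cutoff=6.0))  # within normal range
print(screen(score, cutoff=3.0))  # refer for clinical follow-up
```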

    This is more than just a technical glitch. It’s a subtle form of social engineering, where AI systems dictate norms and values, even if unintentionally.

    Reinforcing Corporate Power in Customer Service

    Conversational AI in customer service goes further by entrenching power imbalances between companies and consumers. Most AI systems here are designed to minimize costs and risks, not enhance the customer experience. They use natural language processing to detect dissatisfaction and deflect conversations away from sensitive topics, preventing complaints from escalating.

    This isn’t just about improving efficiency. It’s about shaping interactions to control consumer behaviour and limit transparency. Some AI systems adapt their responses in real-time to protect corporate interests, hiding critical information like contract terms or customer rights. Here, AI acts less like a mediator and more like a gatekeeper, ensuring outcomes favour the company.
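    The deflection pattern is easy to sketch. Below is a deliberately crude stand-in for the real NLP models these systems use: a keyword score gates whether the bot answers the question or steers the user away. The keywords, threshold, and canned replies are all hypothetical.

```python
# Hedged sketch of the deflection pattern: a keyword-based dissatisfaction
# score gates which canned reply the bot returns. Everything here is a
# hypothetical stand-in for a production sentiment model.
DISSATISFACTION_TERMS = {"refund", "complaint", "cancel", "lawyer", "ombudsman"}

def dissatisfaction_score(message: str) -> float:
    """Crude proxy for an NLP sentiment model: fraction of hot-button terms hit."""
    words = set(message.lower().split())
    return len(words & DISSATISFACTION_TERMS) / len(DISSATISFACTION_TERMS)

def respond(message: str) -> str:
    # Above the threshold, steer away from escalation instead of answering.
    if dissatisfaction_score(message) > 0.2:
        return "I understand your frustration. Have you tried our FAQ page?"
    return "Happy to help! What can I do for you today?"

print(respond("I want a refund and I will file a complaint"))  # deflected
print(respond("What are your opening hours"))                  # answered
```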

    Constraining Educational Growth

    In education, the problem is similar. AI tutors trained on standard curricula often default to promoting the “correct” answers that align with dominant educational paradigms. This approach discourages creative thinking and critical engagement, trapping students within conventional learning paths.

    The risk is clear: we could end up with students who excel in standardized settings but lack the adaptability and innovative thinking needed to tackle new challenges. When AI prioritizes what can be easily measured, it narrows education to fit within the confines of its data.
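    A toy example of how measurement narrows learning: an exact-match auto-grader (hypothetical; real graders are more sophisticated, but the incentive is the same) gives full marks to the memorized phrasing and zero to an equally correct reformulation.

```python
# Toy illustration of measurement narrowing: an exact-match auto-grader
# accepts only the reference phrasing, so an equally valid answer scores
# zero. The question and answers are invented.
REFERENCE = "the mitochondria is the powerhouse of the cell"

def exact_match_grade(answer: str) -> int:
    """1 if the answer matches the reference string, else 0."""
    return int(answer.strip().lower() == REFERENCE)

print(exact_match_grade("The mitochondria is the powerhouse of the cell"))  # 1
print(exact_match_grade("Mitochondria generate most of the cell's ATP"))    # 0, though correct
```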

    Rethinking AI for Fairer Outcomes

    To address these hidden impacts of Conversational AI, we need a fundamental shift in how these systems are designed and deployed.

    1. Adaptive Algorithms: Use reinforcement learning to create AI that can adjust to diverse user behaviours and contexts, promoting more equitable responses.

    2. Diverse Training Data: Incorporate varied perspectives and problem-solving approaches in AI training data to support broader educational and social needs.

    3. Transparent Systems: Ensure AI processes are open to scrutiny by clearly explaining why certain actions are taken or suggestions are made.

    4. User Feedback Loops: Implement real-time feedback mechanisms so AI systems can learn from their interactions and correct biases dynamically (a minimal sketch of points 1 and 4 follows this list).
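    As promised above, here is a minimal sketch of how points 1 and 4 might fit together: an epsilon-greedy bandit, one of the simplest reinforcement-learning loops, keeps separate response statistics per user context and folds explicit feedback back in, so a reply that serves one group badly stops dominating. Everything here (contexts, responses, the feedback rule) is invented for illustration.

```python
# Sketch of an adaptive, feedback-driven responder: an epsilon-greedy
# bandit with per-context statistics, updated from explicit user feedback.
# Contexts, responses, and rewards are hypothetical.
import random
from collections import defaultdict

EPSILON = 0.1
RESPONSES = ["formal_reply", "casual_reply", "detailed_reply"]

# (context, response) -> [total_reward, times_used]
stats = defaultdict(lambda: [0.0, 0])

def choose_response(context: str) -> str:
    """Mostly exploit the best-rated response for this context; sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(RESPONSES)
    return max(RESPONSES,
               key=lambda r: stats[(context, r)][0] / (stats[(context, r)][1] or 1))

def record_feedback(context: str, response: str, reward: float) -> None:
    """Fold user feedback (e.g. a thumbs-up = 1.0) back into the statistics."""
    s = stats[(context, response)]
    s[0] += reward
    s[1] += 1

# Usage: different contexts converge on different responses.
for _ in range(200):
    ctx = random.choice(["new_user", "returning_user"])
    resp = choose_response(ctx)
    # Hypothetical preference: new users rate detailed replies higher.
    reward = 1.0 if (ctx == "new_user") == (resp == "detailed_reply") else 0.0
    record_feedback(ctx, resp, reward)

print(choose_response("new_user"))        # usually 'detailed_reply'
print(choose_response("returning_user"))  # usually not 'detailed_reply'
```

    The point is not the algorithm itself but the loop: the system’s behaviour is corrected, per context, by the people it serves rather than frozen at training time.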

    It’s time to move beyond thinking of AI as just another tool. AI shapes our social norms, behaviours, and expectations. We must redesign these systems with fairness and inclusivity in mind to ensure they promote a more equitable future, rather than reinforcing the inequalities of the past.