Official Template

AI Ethics: Everyday Impact

by @Admin
Technology & Digital

How does AI impact my daily life ethically and what should I be aware of?

Project Plan

15 Tasks
1.

{{whyLabel}}: Understanding the 'alignment problem' is crucial to seeing why AI often behaves in ways that conflict with human values.

{{howLabel}}:

  • Focus on the chapters regarding 'Bias' and 'Transparency'.
  • Take notes on how training data reflects historical human prejudices.
  • Identify the difference between 'intent' and 'objective functions' in machine learning.

{{doneWhenLabel}}: You have finished the book and can explain the concept of reward hacking.

2.

{{whyLabel}}: The EU AI Act is the global benchmark for AI regulation and defines which AI impacts are considered 'unacceptable' or 'high-risk'.

{{howLabel}}:

  • Learn the four levels: Unacceptable, High, Limited, and Minimal risk.
  • Identify 'Unacceptable' practices like social scoring or real-time biometric surveillance.
  • Understand your rights to an explanation for high-risk AI decisions (e.g., in hiring or credit scoring).

{{doneWhenLabel}}: You can categorize three daily AI interactions into their respective risk levels.
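The four-tier system above can be sketched as a simple lookup. The tier assignments below follow the examples in this task; they are illustrative, not legal advice.

```python
# Illustrative mapping of everyday AI interactions to the EU AI Act's
# four risk tiers. Example classifications only, not legal advice.
RISK_TIERS = {
    "unacceptable": ["social scoring", "real-time biometric surveillance"],
    "high": ["CV screening for hiring", "credit scoring"],
    "limited": ["customer-service chatbot"],
    "minimal": ["spam filter", "video-game AI"],
}

def tier_of(interaction: str) -> str:
    """Return the risk tier of a known example interaction."""
    for tier, examples in RISK_TIERS.items():
        if interaction in examples:
            return tier
    return "unknown"

print(tier_of("credit scoring"))  # → high
```

Trying this with three of your own daily AI interactions is a quick way to complete the "done when" criterion.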

3.

{{whyLabel}}: Having a personal set of rules helps you decide which technologies to adopt and which to reject.

{{howLabel}}:

  • List your top 3 values (e.g., Privacy, Autonomy, Truth).
  • Write down 'red lines' (e.g., 'I will not use apps that sell my biometric data').
  • Revisit this list whenever you install a new AI-powered service.

{{doneWhenLabel}}: A written one-page document outlining your ethical boundaries.

4.

{{whyLabel}}: AI models are trained on the granular data collected by apps, often without your explicit awareness of the scale.

{{howLabel}}:

  • Go to Settings > Privacy > Permission Manager.
  • Revoke 'Location' access for apps that don't strictly need it to function.
  • Disable 'Allow Apps to Request to Track' on iOS or similar 'Ad ID' settings on Android.

{{doneWhenLabel}}: All apps have only the minimum necessary permissions.

5.

{{whyLabel}}: Standard browsers are the primary entry point for trackers that build your digital profile for algorithmic targeting.

{{howLabel}}:

  • Install a browser that blocks cross-site trackers by default (e.g., Firefox or a Chromium-based privacy browser).
  • Enable 'Strict' tracking protection in the settings.
  • Set your default search engine to a non-tracking alternative like DuckDuckGo or Brave Search.

{{doneWhenLabel}}: The new browser is set as default on all your devices.

6.

{{whyLabel}}: Even privacy browsers can benefit from specialized tools that block the invisible scripts used to train behavioral AI.

{{howLabel}}:

  • Add an open-source content blocker like uBlock Origin.
  • Configure it to block 'third-party scripts' and 'remote fonts' for maximum privacy.
  • Use 'Privacy Badger' to automatically learn and block invisible trackers.

{{doneWhenLabel}}: Extensions are active and showing blocked trackers on major news sites.

7.

{{whyLabel}}: Your email address is a 'super-identifier' that AI systems use to link your data across different platforms.

{{howLabel}}:

  • Set up a service like SimpleLogin or Firefox Relay.
  • Create a unique alias for every new service you join.
  • If a service leaks data or spams you, simply disable that specific alias.

{{doneWhenLabel}}: You have created your first 3 aliases for non-critical services.
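Services like SimpleLogin and Firefox Relay generate aliases for you in their own apps; the sketch below only illustrates the underlying one-alias-per-service idea. The domain is a placeholder, not a real alias provider.

```python
import hashlib

def alias_for(service: str, domain: str = "alias.example.com") -> str:
    """Derive a unique, repeatable alias for a given service.
    The domain is a placeholder; real alias services (SimpleLogin,
    Firefox Relay) issue addresses on their own domains."""
    # A short hash makes aliases hard to guess from one another,
    # so a leak at one service doesn't expose your other addresses.
    tag = hashlib.sha256(service.encode()).hexdigest()[:8]
    return f"{service.lower()}.{tag}@{domain}"

print(alias_for("Netflix"))
```

Because the alias is derived deterministically, you always get the same address back for the same service, which mirrors how you would look up an existing alias in a real alias manager.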

8.

{{whyLabel}}: Most major AI platforms use your conversations and files to train future models by default.

{{howLabel}}:

  • In ChatGPT: Go to Settings > Data Controls > Disable 'Chat History & Training'.
  • In Midjourney: Use the '/stealth' command if you have a pro plan, or avoid sharing sensitive prompts.
  • Check Adobe/Google settings to ensure your cloud files aren't being 'analyzed' for model improvement.

{{doneWhenLabel}}: Training is disabled in your most-used AI tools.

9.

{{whyLabel}}: Recommendation algorithms create 'filter bubbles' that can radicalize views and limit intellectual diversity.

{{howLabel}}:

  • Scroll through your feed and count how many posts challenge your current worldview (usually near zero).
  • Intentionally follow 5 high-quality sources with opposing viewpoints to 'confuse' the algorithm.
  • Use 'Not Interested' buttons on repetitive or inflammatory content.

{{doneWhenLabel}}: Your feed shows at least 10% diverse or non-targeted content.

10.

{{whyLabel}}: AI-generated deepfakes and misinformation are increasingly used to manipulate public opinion.

{{howLabel}}:

  • Use a search engine's 'Search by Image' feature for any viral or suspicious photo.
  • Look for the original source and date of publication.
  • Check for 'AI artifacts': distorted hands, asymmetrical glasses, or blurred backgrounds.

{{doneWhenLabel}}: You have successfully verified the origin of one viral image.

11.

{{whyLabel}}: AI interfaces often use subtle design tricks to nudge you into sharing more data or spending more time.

{{howLabel}}:

  • Look for 'confirmshaming' (making the 'no' option sound bad).
  • Watch for 'forced continuity' in AI subscriptions.
  • Recognize when an AI chatbot uses 'anthropomorphism' (acting like a friend) to gain your trust.

{{doneWhenLabel}}: You can name three dark patterns you've encountered this week.

12.

{{whyLabel}}: Sending sensitive personal or professional data to cloud-based AI is a major privacy risk.

{{howLabel}}:

  • Download an open-source tool like 'LM Studio' or 'Ollama'.
  • Download a medium-sized model (e.g., Llama 3 or Mistral).
  • Run queries entirely offline to ensure no data leaves your machine.

{{doneWhenLabel}}: You have successfully processed a private document using a local model.
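Ollama serves models over a local HTTP API (by default on `localhost:11434`), so queries never leave your machine. The sketch below builds a request against its `/api/generate` endpoint; actually sending it assumes you have Ollama running and have pulled a model first.

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "llama3",
                  host: str = "http://localhost:11434"):
    """Build a request for Ollama's local /api/generate endpoint.
    The host is localhost by default, so no data leaves the machine."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a stream
    }).encode()
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_request("Summarize this private note: ...")
print(req.full_url)

# To actually run it (requires a running Ollama and `ollama pull llama3`):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The actual call is commented out so the sketch runs even without Ollama installed; uncomment it once the local server is up.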

13.

{{whyLabel}}: Transparency is a core ethical pillar; others have the right to know if they are interacting with human or synthetic output.

{{howLabel}}:

  • Add a small disclaimer like 'Assisted by AI' or 'AI-generated' to emails or blog posts.
  • Use metadata tags or watermarks for AI-generated images.
  • Be honest about the extent of AI involvement in your creative work.

{{doneWhenLabel}}: Your next 3 AI-assisted outputs are clearly labeled.

14.

{{whyLabel}}: Full automation often leads to 'algorithmic cruelty' where edge cases (real people) are ignored.

{{howLabel}}:

  • When using AI for decision-making, always perform a final human review.
  • Advocate for 'appeal' buttons in automated systems at your workplace.
  • Provide feedback to developers when you spot an AI error or bias.

{{doneWhenLabel}}: You have established a review step for all your AI-assisted tasks.

15.

{{whyLabel}}: AI capabilities and privacy settings change rapidly; a one-time setup is not enough.

{{howLabel}}:

  • Set a recurring calendar event for the 1st of every month.
  • Review new privacy features in your main AI tools.
  • Clear your 'temporary' chat histories and cookies.

{{doneWhenLabel}}: A recurring 15-minute event is in your calendar.
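If you prefer not to click through a calendar app, the recurring event can be written as a plain iCalendar (RFC 5545) file that most calendar apps import. This is a minimal sketch; the UID and PRODID values are placeholders.

```python
def monthly_review_ics(summary: str = "AI privacy check-up",
                       start: str = "20250101T090000") -> str:
    """Build a minimal iCalendar (RFC 5545) event repeating on the
    1st of every month. Times are floating (interpreted as local)."""
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//ai-review//EN",  # placeholder product ID
        "BEGIN:VEVENT",
        "UID:ai-review-1@example.com",       # placeholder unique ID
        f"DTSTART:{start}",
        "DURATION:PT15M",                    # the 15-minute slot
        "RRULE:FREQ=MONTHLY;BYMONTHDAY=1",   # repeat on the 1st
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(monthly_review_ics())
```

Save the output as a `.ics` file and import it once; the `RRULE` line makes the event recur without any further setup.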
