TechHive AI

About

TechHive AI is an innovative learning program that teaches high school students about cybersecurity and the ethics of AI. While AI holds great promise to positively transform many areas of society, ill-conceived deployments pose significant risks to people's rights and safety.


TechHive AI combines STEM and the social sciences to teach students about cybersecurity and AI. Guided by ethical AI principles (e.g., fairness, accountability, transparency, security), students explore technology and governance strategies to mitigate the harms and maximize the benefits of AI in society.


Participants learn and develop techniques that address and mitigate AI cybersecurity threats in ways that adhere to ethical AI principles. Participants also get opportunities to share these techniques with their communities, school districts, and local policymakers. This project is funded through a grant from the National Science Foundation.

Interested in Joining?

Who?

High school juniors and seniors interested in AI and cybersecurity, and in the promises and threats they pose.

No prior computer science or AI technical experience is necessary.

What?

Explore the intersection of AI and cybersecurity, how it can help protect us, and the potential threats it poses.

Work with peers and learn from graduate students at UC Berkeley.

When?

July 19-23 and July 26-30 from 1 to 4pm PT. 

Where?

On Zoom, of course!

Students who need a WiFi hotspot and/or laptop can be provided with one.

Why?

Because you want to explore the benefits and risks of emerging technologies through the lenses of fairness, accountability, transparency, ethics, and security.

Because you want to be a part of designing the future.

Because you want to connect with others who share your interests.

Because it is free!

Apply

TechHive AI is no longer accepting applications for its summer 2021 program.

More Info

Artificial Intelligence (AI) presents amazing possibilities, but also looming threats, for cybersecurity. AI can aid the detection of malicious attacks, but it can also be used to make attacks more damaging. As AI-enabled systems enter our most critical social institutions, such as screening job applications, predicting criminal activity, or determining who receives loans, ill-conceived deployments and efforts to hack their intended uses pose serious threats to many fundamental rights. As AI becomes embedded within the systems and objects of our physical world, mitigating bias, discrimination, and threats to public safety becomes paramount. It is therefore critical that AI systems be built with attention to security, accountability, fairness, ethics, and transparency (SAFE-T) principles. This challenge requires more than a technological solution; responding to it requires attention to each of the physical, social, and technological spheres AI touches, bringing together the fields of AI, cybersecurity, and the social sciences to effectively understand and apply SAFE-T principles in AI.


To support this goal, the project is developing a novel educational approach, TechHive AI, which recruits teens from diverse communities to learn and develop techniques to address AI cybersecurity threats in ways that adhere to SAFE-T principles. Participants will also develop practice with these techniques (technical and non-technical) and have opportunities to share them with their communities, school districts, and policymakers. The project will produce: (1) a transdisciplinary curriculum model for teaching cybersecurity and AI; (2) guidelines for developing effective online and hybrid learning models that integrate STEM and social science curriculum with SAFE-T principles in cybersecurity and AI; and (3) a research report detailing the effectiveness of this transdisciplinary, experiential education model for supporting high schoolers' development of 21st-century workforce skills.


This project is funded through a grant from the National Science Foundation.

Sponsor

National Science Foundation

Organizers