Intrigued by the intersection of AI and media authenticity? This internship at Intel Labs offers a unique opportunity to delve into responsible AI approaches for trusted media. You’ll work on projects related to manipulated content detection, synthetic data generation, and media provenance analysis. Gain real-world experience alongside AI research experts and contribute to cutting-edge research in a dynamic environment.
Qualifications
- Pursuing a Master’s degree or PhD in Computer Science, Electrical Engineering, Artificial Intelligence, or a closely related field (for some positions, a Bachelor’s degree in a relevant field may be sufficient).
- Strong foundation in machine learning, deep learning, and computer vision concepts.
- Experience with programming languages such as Python and familiarity with deep learning frameworks (TensorFlow, PyTorch) are preferred.
- Prior research experience in areas like image processing, signal processing, or media forensics is a plus.
- Excellent analytical and problem-solving skills with a passion for research.
- Effective communication skills to collaborate with a team of researchers and engineers.
Responsibilities
- Conduct research on fundamental concepts for detecting manipulated media and identifying authenticity cues.
- Develop and evaluate machine learning models for tasks like deepfake detection and media provenance analysis.
- Build and test proof-of-concept prototypes to validate your research findings.
- Contribute to the creation of research papers and presentations for internal and external audiences.
- Stay up to date on the latest advancements in responsible AI and related fields.
Benefits
This internship provides an exceptional chance to work on impactful research at the forefront of AI and trusted media. You’ll gain valuable experience in a world-class research lab, collaborate with leading experts, and contribute to Intel’s commitment to responsible AI development.
Location
Fully Remote – United States
Deadline: 3 June 2024