Security Researcher - AI Red Team

Microsoft
United States, Washington, Redmond
Jan 11, 2025
Overview

Security represents the most critical priority for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Do you want to find responsible AI failures in Microsoft's largest AI systems, impacting millions of users? Join Microsoft's AI Red Team, where as a Security Researcher you'll work alongside security experts to emulate adversaries and cause trust and safety failures in Microsoft's big-bet AI systems. We are looking for an AI Safety Researcher who will work alongside experts to push the boundaries of AI red teaming. We are a fast-paced, interdisciplinary group of red teamers, adversarial machine learning (ML) researchers, and Responsible AI experts with the mission of proactively finding failures in Microsoft's big-bet AI systems. Your work will impact Microsoft's AI portfolio, including the Phi series, Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot, and will help keep Microsoft's customers safe and secure.

More about our approach to AI red teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities

The AI Red Team is looking for security researchers who can combine the development of cutting-edge attack techniques with the ability to deliver complex, time-limited operations as part of a diverse team. This includes the ability to manage several priorities at once, manage stakeholders, and communicate clearly with a range of audiences.

- Understand the products and services that the AI Red Team is testing, including the technology involved and the intended users, in order to develop plans to test them.
- Understand the risk landscape of AI safety and security, including cybersecurity threats, Responsible AI policies, and the evolving regulatory landscape, to develop new attack methodologies for these areas.
- Conduct operations against systems as part of a multi-disciplinary team, delivering against multiple priority areas within a set timeline.
- Communicate clearly and concisely with stakeholders before, during, and after operations to ensure everyone is clear on objectives, progress, and the outcomes of your work.
- Coordinate with your team members during operations to ensure that all areas of focus are covered and that stakeholders are clear on the status of your work.
- Partner with and support all elements of the AI Red Team and our partners, including actively contributing to tool development and long-term research efforts.

Other

- Embody our Culture and Values.