We assess digital technologies to make them ethical and responsible

Immanence is a benefit company offering ethical, legal and strategic consultancy. It aims to promote responsible, trustworthy and transparent digital projects, algorithms and artificial intelligence (AI) systems, in compliance with national and European regulations.

Immanence’s approach is characterised by attention to context, sensitivity to the specific case and sector, and care for trust, ethical issues and relevant social impacts.

Read the manifesto
Ensuring that technological development respects human rights is the greatest challenge we will face in the coming years, and we want to start now.

Who we are

Diletta Huyskes

CEO & Co-Founder

Since 2019, Diletta has been working on the ethics and politics of technology. After graduating in Philosophy, first in Milan and then in Leiden, she worked as a data ethicist at Fondazione Bruno Kessler in Trento. She is currently a PhD Researcher in Sociology at the University of Milan, where she focuses on the use of algorithms by public bodies and administrations, the values that guide their design and their social impact. She is writing a book on the relationship between gender and technology.

Diletta has gained expertise in various research groups and as a civil society representative, leading advocacy projects to protect digital rights and providing recommendations on digital governance. She coordinated the first national mapping of algorithms used by public administration and served on several ethics committees.

Luna Bianchi

CEO & Co-Founder

A qualified attorney and manager within a corporate group listed on the NYSE, Luna obtained a master’s degree in intellectual property at the Politecnico di Milano and Tongji University in Shanghai (China) after graduating in Law. A scholar of AI and a human rights activist, in 2020 she enrolled in a master’s programme in Philosophy of Digital to explore the legal and social impact of digital transformation and the link between technology and discrimination.

Luna is a member of the World Economic Forum Working Group for Metaverse Governance and, as Advocacy & Policy Officer, serves on the Ethics Committee established at the Assessorato alla Trasformazione Digitale del Comune di Torino (Department of Digital Transformation of the City of Turin).

To drive the necessary cultural change, Immanence relies on a group of professionals with diverse backgrounds and skills, guaranteeing assessments designed specifically around the needs of private and public sector clients:


  • Digital Governance;
  • Ethics and sociology of technology;
  • Intellectual property and consumer protection;
  • GDPR compliance and privacy-by-design;
  • Administrative and technology law;
  • Data and computer science;
  • Cybersecurity.

What we do

“Ethics, unlike machines, is not binary. It has to be negotiated, adapted to the context, evolved together with social values and put to work for justice: it is immanent to experience.”


Diletta Huyskes, CEO & Co-Founder

Training

on ethics, privacy and intellectual property applied to new technologies, and on the European regulatory framework, to meet internal needs for awareness-raising and continuous updating.

Ethical assessment of risks

aimed at (i) determining ethical risks in terms of social impacts, human rights violations and governance, and (ii) anticipating possible contextual biases, algorithmic errors and other harmful impacts.

Preparation of a risk management system

covering the risks mapped in the ethical assessment phase, in terms of AI Act compliance, privacy-by-design, intellectual property and consumer protection.

Audit

to identify, address and correct algorithmic bias and any other unforeseen, risky or problematic impacts.

Internal governance structures and processes

to ensure accountability, transparency and the achievement of the organisation’s objectives (internal delegations, codes of conduct and guidelines).

Supervision, control and ethical maintenance

of the digital project, algorithm or AI system throughout its life cycle.

“The risks of technology are not rooted in how we use it; rather, they emerge from how, and with which values, we design it.”

Luna Bianchi, CEO & Co-Founder

Today, AI is one of the main components of innovation and a player in a steadily growing market, thanks to which many companies and public bodies have been able to speed up their internal processes.


However, in an environment increasingly characterised by new risk factors and variables, it is difficult to predict the real effects of an algorithm, however sophisticated, whose social impact can lead to unexpected, immediate and large-scale consequences: discrimination, privacy violations, loss of autonomy and social exclusion.

To turn the prevention of such errors into a competitive advantage, innovative organisations can adopt an effective and anticipatory risk management strategy.


Knowing, mapping and measuring one’s potential risks becomes a tool to protect the organisation, one that can help corporate management identify new opportunities for truly innovative growth.

Manifesto

“In designing tools we are designing ways of being”

Winograd and Flores, 1987

Immanence, from the Latin immanere, refers to the condition of being, residing and becoming within. In this case, it describes grounding in reality and experience. This is what technology, the digital and artificial intelligence mean for us: products of the real built with and in society.
Accordingly, in our assessments we reflect on the motivations, purposes and impacts of technologies that have real effects on people’s lives, considering people not only as individuals but also, and above all, as integral parts of a community.

We consider ethics and algorithmic fairness to be central competitive advantages for the private sector and founding principles for the public sector, also in terms of ESG indicators.


We anticipate the rules and requirements imposed by the European (and foreign) regulatory frameworks for AI, betting on ethics and accountability as key elements of a trustworthy technological environment.

We support organisations in consciously designed digitisation processes, for the benefit of all and based on principles of fairness, transparency, accountability and proportionality, from privacy-by-design to impacts on individuals and society.


We believe that the ability to govern digital and automated processes, preventing negative social impacts, privacy violations and unwanted outputs, is a necessary and critical condition for every public entity and company today, also from the perspective of CSR reporting.

We act as an aggregator of best practices and business cases to drive public discourse on responsible technology and on ethics-by-design and ethics-by-context assessment processes.


We work closely with civil society to learn about citizens’ needs and to disseminate expertise on automation and new technologies.