Responsible AI sometimes means no AI
AI is not omnipotent and far from an appropriate solution to every problem
The original blog is in Dutch. This is the English translation.
This is a republication of an article that appeared as an editorial in P&I magazine (T. Wabeke, Verantwoorde KI betekent soms geen KI, P&I 2022, afl. 5, p. 169)
Artificial intelligence (AI) is a hot topic and the focus of great expectations, as reflected in the huge investments being made by governments and corporations around the world. The number of academic publications devoted to AI has also shot up: in July 2021, the Rathenau Institute calculated that Dutch contributions to the literature on the subject had increased by 115 per cent between 2013 and 2018.1 With so much investment and academic interest, AI is an increasing focus of attention for the general public, the business community and policy-makers. News outlets frequently highlight applications of AI with major implications for everyday life, such as self-driving cars and automated medical diagnoses. Unsurprisingly, therefore, the Scientific Council for Government Policy (WRR) published a report in 2021 describing AI as a 'system technology' with the potential to change society fundamentally.2
As the European Commission's draft AI Act makes clear, AI is a broad concept. In this editorial, my focus is mainly on the most popular form of AI, namely machine learning (ML). ML involves the use of algorithms that are able to make predictions or decisions on the basis of patterns or rules that the algorithms themselves define by analysing huge volumes of data. It is their self-teaching capability that distinguishes ML algorithms from more traditional algorithms that apply predefined rules. Because ML is the most popular form of AI, the two terms are often used interchangeably.
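The distinction between the two approaches can be sketched in a few lines of code. This is an illustrative toy example, not taken from the article: the "spam" scenario, the data and the midpoint rule are all invented for the sake of contrast.

```python
# Traditional algorithm: an expert hard-codes the decision rule.
def is_spam_rule_based(word_count: int) -> bool:
    return word_count > 50  # threshold chosen by a human expert


# Machine learning, simplified to the extreme: derive the threshold
# from historical, labelled examples instead of hard-coding it.
def learn_threshold(examples: list[tuple[int, bool]]) -> float:
    spam = [n for n, label in examples if label]
    ham = [n for n, label in examples if not label]
    # Place the boundary midway between the two class averages.
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2


# Invented historical data: (word_count, was_spam).
history = [(80, True), (95, True), (10, False), (20, False)]
threshold = learn_threshold(history)


def is_spam_learned(word_count: int) -> bool:
    return word_count > threshold
```

The learned rule has the same shape as the hand-written one; the difference is that its parameter comes from the data, so it inherits whatever that data contains, including its flaws.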
ML has its limitations, and the use of these algorithms involves challenges, many of them with ethical or societal dimensions such as implications for privacy, discrimination and autonomy. Awareness of those limitations and challenges has led the technical community to develop a variety of guidelines, frameworks and tools to support the responsible use of algorithms within projects and services. The AlgorithmWatch website provides a valuable inventory of 167 tools for responsible AI.3 The list includes everything from initiatives by corporate giants such as Google and Microsoft to resources made available by government agencies and universities.
However, it seems to me that many such resources ignore a very important question: should we be using AI at all? That question needs to be asked, because AI is not omnipotent and far from an appropriate solution to every problem. Nor should we assume that AI is always better than traditional software that incorporates expert knowledge. An AI developer who aspires to be responsible should not lose sight of those points. Keeping them front-of-mind can help prevent not only the deployment of flawed algorithms that will almost inevitably make flawed decisions, but also the investment of resources in projects that are doomed to fail.
Whether the use of ML is responsible in a given context depends partly on whether two criteria are met. The first criterion is that it is both possible and desirable to use historical data as a basis for future decision-making. Algorithms formulate decision rules by analysing historical data. However, as with investments, past performance does not guarantee future results. Moreover, data that reflects historical prejudices or is otherwise biased is best disregarded. The complexity of any application with a substantial social component should not be overlooked, either. Is it a good idea, for example, to predict whether a bank account may be used to fund terrorist activities by comparing its transaction profile with data on the transaction histories of earlier terrorists? An article criticising just that approach appeared in the Dutch daily newspaper NRC last summer.4
The second criterion for ML use to be deemed responsible is that factual, unprejudiced data labelling must be possible. That is not particularly difficult in a field such as image recognition. It's fairly straightforward to ascertain whether an image contains a traffic light, for example. Again, however, the greater the social component of the decision-making, the more ambiguous things become. Consider an AI system designed to assess job applicants. Someone whom you regard as suitable may not convince a colleague. An AI system will incorporate that ambiguity into its appraisal, potentially leading to undesirable or inappropriate conclusions. A responsible project will therefore use AI only if the outcome metric is unambiguous.
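The ambiguity of a label can itself be measured before any model is trained. The sketch below is a hypothetical illustration with invented names and verdicts: two assessors rate the same candidates, and a simple agreement rate shows how noisy the "suitable" label is.

```python
# Hypothetical labelling exercise: the candidates and verdicts are made up.
assessor_a = {"cand1": "suitable", "cand2": "unsuitable",
              "cand3": "suitable", "cand4": "suitable"}
assessor_b = {"cand1": "suitable", "cand2": "suitable",
              "cand3": "unsuitable", "cand4": "suitable"}

# Fraction of candidates on which the two assessors agree.
agreements = sum(assessor_a[c] == assessor_b[c] for c in assessor_a)
agreement_rate = agreements / len(assessor_a)
```

In this toy example the assessors agree on only half of the candidates; a label that unstable is a poor target for an algorithm, whereas a traffic-light label in image recognition would score close to full agreement.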
AI has great potential, particularly in certain fields. However, it is important to remain realistic: the use of AI does not assure success. We can hopefully avoid wasted investment, disappointment and poor algorithmic decision-making if we always ask ourselves whether the use of AI for a given purpose is responsible. Application of the two criteria presented above can help us answer that question, while also allowing ample scope for responsible and successful applications. Then, through such applications, AI can make a positive contribution to our constantly developing society.
Thymen Wabeke is a member of P&I's editorial board.
1 rathenau.nl/nl/wetenschap-cijfers/onderzoek-naar-kunstmatige-intelligentie-nederland
2 wrr.nl/publicaties/rapporten/2021/11/11/opgave-ai-de-nieuwe-systeemtechnologie