A comprehensive exploration of the opportunities and challenges of AI in our economy, society and personal lives.
The world is in the midst of a technological revolution that is constantly pushing the boundaries of what is possible. Artificial Intelligence (AI) is changing the way we work, communicate and even how we organize societies. From automated processes that make businesses more efficient to advanced algorithms that help us address societal challenges such as climate change and labor market shortages, the opportunities seem endless. Recently, we saw another great example in Google's NotebookLM, a tool that may well change how we take in information. Not only can we interact with interesting papers in it, we can also generate mini podcasts of roughly 10 to 25 minutes, specify which topics should be discussed, or even coax the model into producing a Dutch-language podcast.
At the same time, more and more critical questions are being asked about the ethical, social and economic consequences of AI. What will happen to our privacy? How will AI affect jobs, the way we learn, our mental development and behavior, equality and autonomy? And how do we avoid becoming dependent on foreign tech giants, or a geopolitical plaything?
With this article, we aim to provide a nuanced view of this complex technology by exploring both the opportunities and challenges of AI, without slipping into utopian dreams or dystopian predictions. To do this, we have drawn on the articles AI Dystopia or Utopia and Machines of Loving Grace. Based on these, we look for a realistic and balanced picture. What are the benefits of AI and how do we exploit them? What risks do we face, and how can we manage them? The answers to these questions are crucial to determining what role AI will play in our future.
Economic Growth and Prosperity
Artificial Intelligence is increasingly seen as an engine for economic growth and broad prosperity. Thanks to its ability to analyze large amounts of data quickly and accurately, AI makes it possible to optimize business processes, generate new insights and respond more quickly to changes. This allows sectors such as healthcare, agriculture, transportation and logistics, but also government, to achieve significant efficiencies. That contributes not only to economic growth but also to better and smarter services for society.
One example is the healthcare sector, where AI applications are now being used to analyze medical imaging and support diagnoses. This saves time for doctors and increases the accuracy of diagnoses. AI also has clear added value in agriculture: advanced algorithms help farmers optimize crop production by combining climate data, soil conditions and weather forecasts, among other things. The more real-time digital data we have from a physical environment in its digital twin, the better the insights. This leads not only to higher yields, but also to more efficient and sustainable use of resources such as water and fertilizers.
In addition, AI plays a key role in collaboration between public and private sectors, especially by supporting new business models that fuel innovation. Consider, for example, AI-driven public-private partnerships that share data and knowledge to develop new applications that strengthen economic resilience. These collaborations help bridge the gap between research and practice and enable innovations to be deployed more quickly and effectively.
While the economic benefits of AI are promising, the increasing reliance on AI technology also brings a key challenge: strategic autonomy. Many advanced AI solutions, software platforms and even hardware have been developed by large international technology companies, often based in third countries. This dependence creates risks to the economic and political sovereignty of countries such as the Netherlands. The question arises as to whether the economic benefits of AI can be balanced with the need to remain independent of technologies and companies from other countries, which could potentially exert strategic influence.
A situation in which the Netherlands becomes largely dependent on foreign technology companies for essential AI solutions in business and government processes could lead to a loss of autonomy. We would then become not only a plaything of the tech companies providing the services but possibly a geopolitical plaything as well. This can make the Netherlands, our companies and the economy vulnerable to outside influences, for example through policy changes or new laws and regulations in the country of origin of these technologies. Consider companies like Google and Microsoft that not only provide AI technology, but also own the infrastructure and data storage. If these companies change their business models, or if political tensions arise, organizations in the Netherlands, both public and private, could be affected without having any control over it.
In addition, reliance on international AI systems carries the risk of increasing exposure to cyber threats. Foreign companies may not apply the local security requirements and privacy laws of the Netherlands and Europe with the same intensity. This can lead to vulnerabilities in systems that malicious parties can exploit. One way to counter this is to encourage the development of a robust AI ecosystem of our own and to invest in Dutch and European collaborations for AI development. This would contribute not only to more stable technological autonomy, but also to reliable, value-driven AI systems and infrastructure more in line with European norms and values.
The strategic challenge lies in finding a balance: on the one hand, fully exploiting the economic benefits of AI, and on the other, preventing the Netherlands from compromising its technological sovereignty, for example by choosing Dutch AI organizations with their own models, trained on our own data. This balance requires a clear vision of digital sovereignty and a willingness to invest in the development and support of Dutch and European AI startups and AI capabilities.
Societal Challenges and AI as a Solution
In addition to economic growth, artificial intelligence has the potential to address some of society’s most pressing challenges. Indeed, AI can be an important solution to problems such as an aging population, labor market shortages, climate change and the increasing demand for healthcare and food security. Through advanced data processing and self-learning algorithms, AI can make existing processes smarter and more efficient, contributing to a more sustainable and resilient society.
For example, in healthcare, AI is being used to deliver tailored care through early detection of diseases. AI systems can recognize patterns in medical data and perform risk analysis, giving patients preventive advice or care. This relieves pressure on healthcare providers and increases patients’ quality of life.
AI also offers solutions for the labor market. By automating routine tasks, among other things, scarce workers can focus on more strategic and creative tasks. AI agents in business processes, acting as junior colleagues, can support many operations to such an extent that they provide enormous relief. This is valuable, for example, in sectors such as government, where AI can ease administrative burdens, act as a personal assistant for memos and notes, and improve services to citizens through better policy insights.
While AI offers a solution to various societal issues, it also carries ethical and security risks. When AI is deployed to address societal challenges, large amounts of data are often collected and analyzed, which can pose privacy and security risks. Moreover, the algorithms that drive AI systems can introduce unintended forms of discrimination or bias, which can be detrimental to equality and justice in society. To prevent this, AI and digital literacy are essential: they enable the right conversation about bias. Where and when do you deploy AI, why there, and what might its effects be besides the benefits?
In the labor market, AI can make biased decisions based on demographic data, for example, favoring or disadvantaging certain groups without transparency about why. Also, for example, open access to AI technologies allows both governments and individuals to use AI for manipulative purposes, which can threaten democracy and social cohesion.
The challenge, then, is to ensure responsible deployment of AI, with safeguards for privacy, ethics and security. This requires strict regulation, oversight and the development of ethical standards that center on the values of transparency, equality and accountability.
AI and Responsible Government Deployment
The government can play an important role in ensuring the responsible and transparent deployment of artificial intelligence by setting the right procurement requirements for the services it purchases: sovereignty requirements (purchasing from NL and EU parties), sustainability requirements (on the model, the way the model is trained, and any cloud facilities), privacy and copyright requirements, transparency requirements (on the model, the training data, and in use), and the right ethical conversation, including deploying AI in value-driven ways. All these considerations help the government not only improve its own processes, but also serve as an example for responsible use in other sectors and drive its own (knowledge) economy. AI offers the government opportunities to optimize services to citizens, reduce administrative burdens and make processes more efficient. Think of applications in which AI helps to analyze large amounts of data, for example to detect fraud (with close attention to bias and transparency, among other things), process tax returns or other information flows and documents, or proactively improve services to citizens.
A concrete example is the use of AI for personalized services. Government agencies can use AI to better respond to the specific needs of citizens, for example by offering customized digital services. Also, after a citizen or organization calls the government for information, there is no longer any excuse not to share that information by email immediately, because an AI agent can register the phone call and inform the caller in one go. AI can also help reduce administrative burdens, for example by automatically processing standard requests or compiling reports based on historical data.
While the benefits of AI in the public sector are promising, its application by governments can also involve significant risks, particularly in areas such as transparency and ethics in relation to maintaining citizen trust. Citizens have a right to know how decisions are made, especially when they are based on algorithms. Incidents with algorithms have previously shown how unintended effects, such as discrimination or unfair decision-making, can have major social consequences.
A major risk of AI in government is that decisions become opaque to citizens, or that policies, laws or regulations are created in opaque ways. When AI is used to automate decision-making, it can be difficult to ascertain on the basis of what data and rules the AI arrives at a particular judgment. This can lead to distrust among citizens, especially when the results of AI systems and decisions cannot easily be understood or verified. To mitigate this risk, transparency is essential. Transparency means not only that algorithms must be understandable, but also that the government must clearly communicate how and why AI is used in decision-making processes, and that traceable processes are in place so that answers, decisions and other information produced are verifiable.
In addition, responsible use of AI in the public sector requires strict adherence to ethical standards. AI systems must be as free as possible from bias and discrimination and must meet the high ethical standards required of public services. Here both regulation and active oversight play a crucial role, but so does the knowledge of the government's own employees of data, processes, technology, opportunities, risks, ethics and proper use, so that they can have the right conversation about it and handle it properly (AI literacy). The European AI Regulation, among other things, helps here: it creates a level playing field and forces governments to carefully consider the rights and interests of citizens in AI use and to ensure AI literacy.
The government has a responsibility to implement AI carefully, with safeguards for transparency and ethics, so that public trust is not compromised. This also requires new forms of oversight, clear ethical standards and a proactive approach to making AI understandable and verifiable, including setting up proper stakeholder groups around a process of permanent testing and establishing ethical review committees. Only with a balanced approach can the government take advantage of the opportunities of AI without this technology undermining its social responsibility.
Impact on Education and the Labor Market
Artificial intelligence is expected to profoundly transform education as well as the labor market. There are opportunities to make educational programs more flexible and personalized, allowing learning pathways to better meet the individual needs and capabilities of pupils and students. But the impact on the labor market also demands new knowledge and skills, because as a professional you will have to focus more on strategic and creative tasks, critical thinking and more.
First, let's look at deployment within education, where AI can be used, for example, to develop adaptive learning methods that adjust to each student's learning ability. This enables personalized learning, where students can grow faster in their strengths and receive additional guidance in their weaker areas.
In addition to productivity gains, the AI revolution requires new skills. For example, creativity is becoming increasingly important. Employees must be able to combine original ideas with AI output and come up with new applications. Adaptive learning is also becoming crucial: being able to switch quickly between tools and technologies is needed to keep up in an environment where AI and other technologies are evolving at lightning speed. Interdisciplinary collaboration is also growing in importance; professionals must be able to collaborate with AI specialists and integrate tools effectively into different work processes.
Although AI can improve education and requires new knowledge and skills, it also brings risks in areas such as equity and access to skills. Even though some AI applications are freely available, not everyone has the same access to AI education or the means to quickly adapt to new technologies. This can lead to a growing gap between workers who have AI and other technical skills and those who do not. That difference can result in inequality in the labor market, with workers without these skills being disadvantaged.
The complexity and ethical issues surrounding AI also create challenges. Not only do employees need to learn to use AI systems, but they also need to develop critical thinking skills to evaluate AI output. An interesting question arises: if AI is going to function as a junior colleague, how will we acquire the right knowledge and skills to properly weigh the AI's output? The knowledge we currently gather for this in education is not sufficient, and in business these junior positions will increasingly be automated. How will this evolve? AI systems are not always error-free and can produce "hallucinations" or biased results, among other things. It becomes essential to recognize errors and assess the reliability of AI information, especially in situations where decisions have important consequences.
To close this gap, it is therefore vital that AI literacy and ethical awareness become central to both education and job training. This includes process knowledge and system understanding, as well as data literacy, so that bias in data can be recognized and corrected and data can be assessed for quality. In this way, we can create an inclusive and resilient labor market and education system that not only harnesses the opportunities of AI but also manages the risks.
In a world where artificial intelligence (AI) is increasingly intertwined with our daily activities, international cooperation and diplomacy are essential. Within the European Union (EU), through a European approach to AI, the member states are striving together, including through strategic alliances, to harness the benefits of AI without becoming dependent on a select few technology companies.
Benefits of International Cooperation
International cooperation on AI offers numerous benefits.
Through joint policy initiatives and diplomatic efforts, countries can develop common standards that not only stimulate economic growth but also protect fundamental values such as privacy and sovereignty. In addition, by making sovereign choices in cooperation with Dutch parties, for example for end-to-end process support with AI or transcription AI, parties that can provide both the user interface and the language models on-premise, we can ensure that we build a robust AI economy of our own.
If you want to know more about the parties, please contact us.
Also, by sharing knowledge and expertise, we can work together to develop ethical guidelines and high-quality, transparent AI systems. For this we have, among other things, the European AI Regulation, which aims to make AI systems reliable and secure.
This cooperation not only promotes innovation, but also ensures that AI applications are in line with European standards and values. In addition, European cooperation strengthens cyber security: acting together makes countries more resilient to digital threats and disinformation campaigns.
Critical Consideration: Geopolitical Risks and Technological Dependence
Despite the benefits, international cooperation on AI also carries geopolitical risks. Many advanced AI technologies are developed by and at companies in the U.S. and China. This dependence can undermine the strategic autonomy of European countries. During former President Donald Trump's first tenure, it became clear how geopolitical tensions can lead to restrictions in technology exports, making dependence on foreign technologies problematic. We have seen this continue under President Joe Biden, who called for further restrictive measures toward ASML, among others. Now that Donald Trump has been elected to a second term, the strong expectation is that this will continue, partly because both China and the U.S. are striving to become world leaders in AI. This rivalry could lead to a geopolitical power struggle in which further restrictions, including in tech, would not be surprising. China, for example, has made significant investments in AI research and development, aiming to become a world leader in this field by 2030.
But even if not, the ties between Trump and Big Tech are better than ever. As a result, we expect, among other things, that these companies will be given many more freedoms, which could further strengthen the market dominance of US tech giants and leave less room for European competitors to grow in the US market. In addition, Trump is expected to see no risk in scaling up power generation with nuclear power plants, which could also inhibit innovation on sustainability. These freedoms are often at odds with European directives and needs. All the more reason to look more towards open source and towards builders and suppliers within the EU.
The ambitions of America and China to dominate the AI market can also result in a situation where European countries become even more dependent on technologies from these countries, with possible consequences for their own sovereignty and policy freedom.
Another risk is that AI systems, when functioning as a "junior digital colleague," could lead to loss of certain knowledge and skills and thus a further growing dependence on technology. We are in favor of technology, but if it comes from systems from countries with different geopolitical interests, there is a risk that in case of political tensions access to these technologies will be restricted. This emphasizes the importance for the Netherlands, Europe and all organizations and governments within it to invest in our own robust AI ecosystem so that we are less vulnerable to external influences.
Innovation, Efficiency and Safety in Business Processes
AI is penetrating ever deeper into the core of business processes, from HR to finance to strategic policy. The promise is clear: with AI, organizations can work more efficiently, make smarter decisions and innovate faster. For example, smart algorithms can help HR departments screen job applicants, although given the level of risk under the EU AI Act this is not the first HR use case you would want to start with. Better candidates there might be the automation of leave requests, the deployment of AI in training and development, increasing employee engagement or, for example, creating and analyzing surveys. There are many other departments to look at as well. In the finance department, for example: fraud detection, automation of accounting processes, generation of financial reports or the deployment of autonomous agents to analyze historical data and identify patterns and trends. For policymakers, that might be automated information gathering, data-driven decision-making, or the initial drafting of memos, notes or a program plan. And so there are numerous other possibilities for any team in any organization. The applications seem endless and, in theory, AI should free companies from boring routine tasks, allowing employees to focus on work with more value.
A practical example of AI in business operations is its deployment at a procurement organization, where AI is used to weigh responses to tenders: classifying whether they meet the market demand, but also working out reasoned opinions on them. Other examples are AI that writes the first draft of CIO judgments on projects and programs, and AI that drafts contracts and agreements and supports the further follow-up in delivery of the services related to the contract.
Too Good to Be True?
As with any promise of effortless efficiency, there are pitfalls behind the brilliance of AI. First, AI is often only as good as the data it is trained with. Moreover, data shows what has actually happened, and what actually happens is not always what should have happened or what you intended to happen. If you build on that data, the patterns will also reproduce things you would probably prefer to see (slightly) differently.
For HR, for example, this can mean that an AI model automatically repeats the same preferences (and thus the same biases) based on previous hiring decisions. This creates risks such as discrimination, unfairly excluding specific groups, as the toy sketch below illustrates.
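To make this concrete, here is a minimal, purely hypothetical sketch with fabricated data: the "model" below does nothing more than learn the hire rate per group from past decisions, and therefore reproduces the bias baked into those decisions.

```python
# Toy illustration with fabricated data: a naive "model" trained on
# past hiring decisions learns the preference hidden in those
# decisions, not the merit of the candidates.

# Historical decisions as (group, hired) pairs, deliberately biased.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

def hire_rate(group: str) -> float:
    """Fraction of past candidates from this group who were hired."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# Scoring new candidates by their group's historical hire rate
# systematically favors group A: the bias, not merit, is learned,
# and building on this data repeats the old pattern.
for group in ("A", "B"):
    print(f"group {group}: score {hire_rate(group):.2f}")
# prints: group A: score 0.80, group B: score 0.30
```

Real models are of course far more complex, but the mechanism is the same: whatever preference sits in the historical data is what the system optimizes for.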
With ever smarter cyber threats comes an increased need to protect AI systems from attacks. Here it helps to opt for on-premise user interfaces and on-premise models. Added to this is the fact that AI-driven systems depend on sensitive corporate data, which can become vulnerable to misuse if it falls into the hands of malicious parties; that too is why it is safer to keep some of it inside the door. For example, what happens if a financial AI tool is hacked and leaks sensitive corporate data? Or if a chatbot, once devised to help customers efficiently, accidentally provides confidential information? These are situations in which AI suddenly seems less reliable and organizations are forced to reevaluate their trust in this technology.
Efficiency vs. Innovation
While AI can certainly help streamline processes, there is a risk that this "efficiency" sometimes actually leads to less innovation. Consider, for example, Microsoft Copilot helping you quickly draft documents. Handy, we think: a project plan quickly written for you. But why write project plans at all when there are excellent project management applications that do not require lengthy documents, but instead help you bring the right data together in a targeted way and provide good project information throughout the project? Or take making PowerPoint presentations with Copilot: why still make PowerPoints when you can also make videos with AI?
Employees have less need to think creatively or fundamentally improve processes when an AI system "automates" everything, and that is exactly what Big Tech is driving at, because otherwise their relevance would be compromised. We at DigiBeter help your organization look critically at what you do and why you do it, and then see where AI can really help. The danger is that, precisely by relying on AI, organizations lose the connection to their own processes and become mainly concerned with adjusting AI models and algorithms. AI then does not replace routine tasks (necessary or not), but shifts the problem, and creates the illusion that every problem can be solved with an algorithm.
Complexity, Environmental Impact and Skills
Previous pieces and newsletters have covered the benefits and challenges of artificial intelligence (AI), from economic growth to ethical dilemmas. Now we focus on the impact of AI from three other key perspectives: the technical complexity of AI, the environmental impact of energy consumption, and the skills needed for responsible and effective application.
Complexity: Opportunities and Limitations
The power of AI to process vast amounts of data and make insightful predictions is unparalleled. This makes AI an invaluable tool in sectors such as healthcare, business and government. At the same time, this technical complexity also poses a challenge: AI models are often based on complex and difficult-to-fathom algorithms. Among other things, this can lead to a "black box" effect, where even developers cannot always explain how a model arrives at a particular outcome. This can make transparency and trust a challenge, which can be a stumbling block to deployment, especially in more precarious roles and industries.
Environmental Impact: AI’s Ecological Footprint
Although in many cases AI helps to optimize (work) processes and can also contribute to sustainability, training extremely large and complex models such as LLMs (Large Language Models), the foundation models, requires a lot of energy. In addition, every question posed to such a large model requires a significant amount of energy and cooling to produce the answer. Recent estimates show that the energy and cooling consumption of AI can already be compared to that of some countries, and this will only increase globally towards 2027, the time when we might also see and experience Artificial General Intelligence. Here lies an important challenge: AI can indeed support sustainability goals, but only if we consciously ask the right questions and make the right demands in procurement and use, based on our own knowledge.
Fortunately, there are ways to make AI more sustainable. Besides setting the aforementioned smart procurement terms and choosing partnerships with organizations committed to training and processing in green-energy-based data centers, such as Northern Data, there are options for energy-efficient models that require less heavy infrastructure. To stay with Northern Data for a moment: they work with data centers that run entirely on renewable energy, with the goal of operating carbon-neutral. Through such partnerships (and by requiring other software vendors that implement AI in their software to partner with such parties), AI can be implemented in a relatively more environmentally friendly way.
Another crucial choice for sustainability is to use Small Language Models (SLMs), also known as expert models, instead of the more energy-intensive LLMs. SLMs are not only more efficient in energy use, in both training and querying, but they can often meet specific business goals well, at the quality desired within your organization, without requiring huge data sets or computing power. This makes them an attractive option for organizations that want to deploy AI with an eye on environmental impact, their own specific way of working, and issues such as information security and control of the data from an ethical, privacy and copyright perspective. Moreover, these smaller models can also work together in a smart configuration that mimics the power of a larger model. By combining multiple SLMs and controlling them through a central "conductor," these models can work as one integrated system.
This works as follows: each SLM is responsible for a specific part of the task, and the "conductor" distributes the work among the models. Think of an orchestra: each instrument plays its own part, but the conductor makes sure that together they form a harmonious whole. A question or problem can thus be handled efficiently and with less energy. This setup offers the power of a large model without the high energy costs, because each SLM performs only part of the computational tasks and is therefore less heavily loaded. It is also more secure, because small models can often operate on-premise or on-device without having to process data in large compute centers. A final advantage worth mentioning is that when this is picked up within government, for example, you get economies of scale: if every government body trained one or a few models in a certain area (and exchanged them with fellow governments), it is not an unrealistic scenario that a country could, in a very short time, realize models quite equivalent in quality to the LLMs of the current large market players. The sketch below illustrates the conductor pattern.
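A minimal sketch of the conductor pattern, under the assumption that each expert function wraps a small language model; the expert names and routing keywords here are illustrative placeholders, not a specific product or API.

```python
# Sketch of the "conductor and orchestra" pattern: a central router
# dispatches each query to the expert SLM responsible for that domain.
# The expert functions are hypothetical stand-ins for real SLM calls.

from typing import Callable, Dict

def legal_expert(query: str) -> str:
    # Stand-in for an SLM specialized in tenders and contracts.
    return f"[legal-SLM] draft answer for: {query}"

def finance_expert(query: str) -> str:
    # Stand-in for an SLM specialized in financial reporting.
    return f"[finance-SLM] draft answer for: {query}"

def general_expert(query: str) -> str:
    # Fallback model for queries no specialist claims.
    return f"[general-SLM] draft answer for: {query}"

# The "conductor": simple keyword routing here; in practice the
# router could itself be a small classifier model.
ROUTES: Dict[str, Callable[[str], str]] = {
    "tender": legal_expert,
    "contract": legal_expert,
    "invoice": finance_expert,
    "budget": finance_expert,
}

def conductor(query: str) -> str:
    """Send the query to the first matching expert, else the fallback."""
    for keyword, expert in ROUTES.items():
        if keyword in query.lower():
            return expert(query)
    return general_expert(query)

if __name__ == "__main__":
    print(conductor("Summarize the risks in this tender response"))
    print(conductor("Check this invoice against the year budget"))
```

Because only the one relevant expert is loaded per query, each question touches a small model rather than a giant one, which is where the energy and on-premise advantages described above come from.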
Bringing together sustainable procurement, green partners such as Northern Data, and the choice of smaller, more efficient models such as SLMs underscores the importance of conscious and responsible use of AI technology. Sustainable AI becomes a reality when sustainability is not just an aspiration, but is actually incorporated into the conditions and choices surrounding AI implementation. For this to happen, however, sufficient knowledge is needed so that the right people can ask the right questions. Which brings us to the next point: skills in the new labor market.
Skills: New Demands for the Labor Market
Artificial intelligence, as discussed earlier, has already taken us far, from economic growth to social solutions. But let’s not forget that every technology, no matter how smart, also has its own challenges. AI may be a “junior colleague” that makes your job a lot easier, but this technology also presents us with new issues around sustainability, complexity and developing crucial skills. How do we address these challenges?
The Complexity of AI: More than a Black Box
AI can deliver incredible insights by performing complex analysis and recognizing patterns that are barely perceptible to us as humans. But this technology does not always work like an open book. Many AI models remain a “black box” in which the exact logic is difficult to figure out, even for the developers. This raises questions about transparency and trust: how do you know if your “junior colleague” is not making a small mistake that could have major consequences?
To properly integrate AI, a good understanding of the underlying systems becomes more important. Knowing how an algorithm works, what its limitations are, and when it might just not give the right answer are all essential skills. Those who understand AI can ask much more targeted questions and work more efficiently with the technology – exactly what you need to leverage AI as a valuable assistant and not as a troublesome, inscrutable force.
Environmental Impact: The Invisible Footprint of Smart Technology
While AI offers many benefits, it also has a hefty environmental footprint. Training the largest models consumes so much energy that the carbon emissions of one model can amount to the equivalent of dozens of transatlantic flights. By 2027, the global AI sector could even consume as much energy as all of the Netherlands (Artificial intelligent…). This raises the question: can we use AI responsibly without an unsustainable ecological toll?
Fortunately, there is also a green side to AI. By making conscious choices when selecting data centers and energy suppliers, we can reduce the footprint. Data centers like Northern Data's are fully committed to renewable energy and strive for carbon neutrality (Artificial intelligent…). By taking sustainability into account when sourcing AI systems and infrastructure, organizations can deploy AI without unnecessarily impacting the environment.
Skills of the Future: Understanding, Assessing and Utilizing AI
Being able to address an AI system, give a command or ask a question is one thing, but using AI optimally requires much more. It requires critical thinking: being able to assess whether the generated output is reliable, whether you are seeing a particular bias, or whether you might be dealing with a "hallucination", a common phenomenon in which AI simply presents incorrect information as if it were true. Assessing critically, distinguishing fact from fiction, detecting bias in the output and building data expertise become indispensable elements of your knowledge.
And then there is the art of prompt engineering: asking the right questions in the right way, and the same goes for giving instructions. Clear questions and instructions produce much better results; think of how you instruct a junior colleague, often by explaining the process step by step. This absolutely does not have to mean the long prompts you regularly see passing by, but it does require knowledge of certain structures and techniques. By learning about and experimenting with formulations and formulation techniques, you learn to use the technology optimally, as if you get to know the AI step by step, just like a new colleague discovering what works and what doesn't. A simple structured example follows below.
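As an illustration, here is one possible way to structure such an instruction; the fields and wording are our own illustrative choices, not a fixed standard, shown as a small Python snippet for readability.

```python
# A structured prompt, broken into the parts a junior colleague
# would also need: role, context, task, format and constraints.
prompt = """
Role: You are a policy officer at a Dutch municipality.
Context: We received 40 survey responses about a new parking policy.
Task: Summarize the three most common concerns in plain language.
Format: A numbered list, one sentence per concern.
Constraints: Do not quote personal data; flag anything you are unsure of.
"""

print(prompt)  # In practice, this text is sent to the AI tool of your choice.
```

Short, explicit sections like these tend to work better than one long, unstructured paragraph, and they make it easier to spot afterwards which part of the instruction the AI misread.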
In addition to all this, there is more. Creativity and adaptive learning are also key components of your AI toolbox. Where the AI does the quick analyses, you come up with original ideas, new applications, new high-quality data that is needed, and ways to stress-test the AI, for example. All to take the joint production with AI to the next level. Couple this with interdisciplinary collaboration, for example between IT specialists, domain experts and other stakeholders, to ensure the integration of AI tools into different work processes; that is how the overall combination starts to add real value.
Ethics and responsibility should also not be missing from your toolbox. AI has the potential to reinforce bias if we are not careful, do not make the right selections of data and do not work with transparent algorithms. Everyone has bias, including everyone at DigiBeter, but by working with a diverse set of stakeholders and broad disciplines we try to neutralize it as much as possible. Furthermore, we should not be blind to the large ecological footprint we all leave behind, or to the impact of using AI on how we develop socially and in working life. Nor should we forget the impact on work itself: what does this mean for our future junior colleagues if AI has replaced the work through which they would normally grow into medior and senior roles? The ability to oversee the ethical side of AI use and have the right conversations makes it possible to use AI fairly and responsibly.
Developing AI literacy within organizations, as well as in the broader education system, is therefore essential. Only by training employees and preparing schools for the skills required by AI can we use this technology responsibly and inclusively and have the right conversations about it together.
The Dutch Position in the AI Landscape
The Netherlands is at a crossroads. On the one hand we see unprecedented opportunities for innovation and progress; on the other hand we face crucial choices about, among other things, our technological sovereignty, the preservation of jobs in our country and sustainability. Recent developments in politics (such as the ongoing wars and Trump's election, with Elon Musk at his side) and at large tech companies such as OpenAI, Meta and Google (such as investments in nuclear energy and choices to allow models to be used in warfare) underline the importance of these choices.
Where Do We Stand Now?
- Adoption lags behind: 25% of the Dutch use AI tools such as ChatGPT, while in India 73% of the population is already actively using AI. Organizations, especially in (semi-)government and SMEs, often still lack visibility into the full potential.
- Dependence on foreign tech: Dutch organizations and the government are still heavily dependent on mainly U.S. AI technology and are making little movement on this.
- Growing awareness: There is a growing awareness of the need for our own European AI solutions, European cloud and other office software.
- Laws and regulations: In Europe, we have good, strong laws and regulations such as the GDPR (AVG) and, since this year, the AI Act. To the outsider these seem inhibiting, but nothing could be further from the truth as long as you deploy them properly. However, navigating them is complex for many. AI literacy (mandatory from February 2025) can help. We will help you with AI literacy.
Opportunities for the Netherlands (and Europe)
The situation offers unique opportunities for the Netherlands, Europe and Dutch and European organizations:
- Development of small-scale, efficient AI models (SLMs / expert models) with benefits such as:
- Higher quality
- More sustainable in training and use compared to energy-intensive large models
- Affordable
- Focus on specific applications and areas of expertise
- Run locally (on a powerful laptop)
- Can be scaled together into one large model using orchestration
- Public-private cooperation for new, innovative NL / EU Tech:
- Targeted investment in Dutch (and EU) AI startups, scale-ups and organizations can provide positive economic impetus and retain/grow employment
- Preservation / strengthening of the knowledge economy in the Netherlands, EU and own organizations
- Growth of knowledge tech innovations in the Netherlands and EU
- Strengthening of our economic climate
Risks and Challenges
At the same time, we must be mindful of the challenges:
- Geopolitical tension: The growing rivalry between the U.S. and China in AI development, the war between Russia and Ukraine, and broader tensions that could put us in undesirable situations in the relatively short term
- Energy consumption: The increasing energy requirements of AI systems and the risk of making the wrong choices in response
- Knowledge gap: The risk of a growing digital divide in society
- Employment and security: The impact on employment if we fail to invest sufficiently in employee training, or if foreign tech acting as a junior colleague effectively outsources more and more work abroad, causing jobs to "leak" away and risking covert use of our confidential business information
A New Course for Dutch and EU AI
Recommendations for Organizations
- Invest in knowledge (we will help you with this):
- Develop AI literacy within the organization
- Focus on critical thinking and evaluation of AI outputs
- Encourage interdisciplinary collaboration
- Choose high-quality, secure, sustainable NL and/or EU solutions (we will help you with this):
- Consider small-scale, dedicated AI models
- Pay attention to energy consumption and environmental impact
- Consider which, where and how information is processed
- Work with Dutch and European partners
- Ensure ethics and transparency:
- Implement clear guidelines for AI use
- Ensure accountability of AI decisions
- Consider privacy and security
Tips for selecting AI applications/organizations
When selecting, pay particular attention to organizations and applications that:
- Are from the Netherlands or elsewhere in Europe and have little to no influence from investors or politics outside the EU.
- Have a focus on NL and Europe
- Can deliver both User Interface and Models on-premise and/or EU cloud and can be hosted on various types of servers (Linux, Windows, etc.)
- Preferably open source
- Train expert models (SLMs) together with an orchestrator
- Deliver ethical models that are not trained by click workers
- Deliver models that do not contain (public or non-public) privacy data or copyrighted or otherwise protected or unethical content
- Stand for sustainable development in both training the models and in their use
- In your procurement processes for new applications, also check whether AI is or will be integrated into them and, if so, include the above requirements regarding the built-in AI functionality in the purchase conditions for your supplier
- And other relevant criteria
Conclusion: Our Own Way Forward
The Netherlands and the EU face the challenge of developing our own, secure, sovereign and distinctive position in the international AI landscape. This calls for:
- Targeted investments in Dutch and EU AI development
- Clear choices
- Focus on small-scale, preferably shareable and efficient solutions
- Strong public-private cooperation
- Attention to information security, privacy, sustainability and ethics
This may sound unfeasible, but we see organizations where this is already working. By making the right choices now, the Netherlands can become a forerunner in responsible AI development, with an eye for both innovation and societal values.