Why not use algorithms instead of AI again?

AI is everywhere – and adds value in many areas. But what good does it do us if AI becomes an end in itself and we no longer ask for the best solution, but instead want to solve everything with AI across the board?

If we let AI expand a few key points into long texts, which are then (to save time) condensed by another AI back into key points, the whole exercise is rather pointless.
If we generate suitable images artificially instead of searching for them, that may have advantages, but it dilutes the original “purpose” of photos – namely depicting reality, capturing a “real” event.
And when AIs list products on Amazon or social media, other AIs generate the reviews and likes, and yet other AIs send us the product recommendations… but let’s leave that aside.

I have the impression that AI is currently being used so widely mainly because it is a (if not THE) hype topic – in many cases for good reason. However, AI also has a number of significant drawbacks, among them:
– 𝗟𝗮𝗰𝗸 𝗼𝗳 𝗲𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆: AI processes are still not end-to-end explainable or comprehensible, either for humans or for machines. You do not “know” why an AI produces a particular result, i.e. you cannot prove, guarantee or certify the process.
– 𝗨𝗻𝗰𝗹𝗲𝗮𝗿 𝗿𝗶𝗴𝗵𝘁𝘀/𝗱𝗮𝘁𝗮 𝘀𝗶𝘁𝘂𝗮𝘁𝗶𝗼𝗻: For every AI product used in a business context, the question of rights has to be asked: what was the AI trained on, where is “my” data processed, and who “owns” the results? Despite the EU AI Act, many AI vendors are very sparing with this information. That can become a business risk for you.
– 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 𝗽𝗼𝘄𝗲𝗿: AI systems need an enormous amount of computing power, especially for training. For now, the providers bear these costs – you can currently get a lot of AI for little money. But at some point all of these investments have to pay off for the providers. Your own investments in AI are therefore hard to calculate at present and carry a certain risk.

I therefore advocate not completely forgetting the “good old algorithm”, the classic approach to software engineering. Reproducibility, verifiability (even mathematical verifiability), reliability and certifiability have given us a great deal of security and robust IT landscapes in recent decades.
So please let’s not throw this overboard too soon!
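To make that concrete: a classic, deterministic search routine can carry a postcondition that is checked on every run, so its behaviour can be tested, reproduced and, if needed, certified. The following is only a minimal sketch in Python – the data model and function are my own illustration, not taken from any particular product:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Photo:
    path: str
    tags: frozenset[str]  # keywords assigned when the photo was catalogued

def find_photos(catalogue: list[Photo], query: set[str]) -> list[Photo]:
    """Return all photos whose tags contain every query keyword.

    Deterministic: the same catalogue and query always give the same result,
    so the behaviour can be unit-tested and reproduced at any time.
    """
    hits = [p for p in catalogue if query <= p.tags]

    # Verifiable postcondition: every returned photo really matches the query.
    assert all(query <= p.tags for p in hits)
    return hits

catalogue = [
    Photo("2023/fair_booth.jpg", frozenset({"trade-fair", "booth", "team"})),
    Photo("2024/keynote.jpg", frozenset({"conference", "keynote"})),
]
print(find_photos(catalogue, {"trade-fair", "team"}))  # only the first photo matches
```

Run it twice with the same inputs and you get exactly the same answer – precisely the property that is hard to guarantee for a generative model.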

P.S.: There are very good algorithms for quickly finding images and for summarizing texts – with quality that can be demonstrated statistically…
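For the text side, a classic frequency-based extractive summary is one such algorithm: deterministic, and every score can be traced back to plain word counts. Again just a minimal sketch – the toy text and function name are my own illustration:

```python
import re
from collections import Counter

def extractive_summary(text: str, max_sentences: int = 2) -> str:
    """Pick the highest-scoring sentences by word frequency.

    Fully traceable: for any sentence you can show exactly which
    word counts produced its score.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-zäöüß]+", text.lower())
    freq = Counter(words)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-zäöüß]+", sentence.lower()))

    # Keep the top-scoring sentences, in their original order.
    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)

text = (
    "AI is everywhere. Classic algorithms are reproducible and verifiable. "
    "Reproducible, verifiable algorithms have given us robust IT landscapes. "
    "Hype alone is no reason to replace them."
)
print(extractive_summary(text))  # prints the two highest-scoring sentences, in order
```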