@Paco67
Let's call the starting position A. Then we go to B, and then to C.
At A, there are lots of possibilities to consider. There's B, and then there's also a bunch of other lines: B2, B3, B4, B5, and so on. For each of those moves, the engine needs to look at every possible reply: B:C, B:C2, B:C3, B:C4, then B2:C, B2:C2, B2:C3, and so on. That's a lot of calculations.
Now, let's say we actually play B. The computer only needs to look at B:C, B:C2, and so on; it no longer has to consider all the B2, B3, etc. positions that could have arisen with different play. The tree is a fraction of the size, so with the same effort the engine can look further down the chain.
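Here's the arithmetic behind that, as a toy sketch. The numbers are invented: I'm assuming roughly 30 legal moves in an average chess position, which is just a common rough estimate, not anything from a real engine.

```python
BRANCHING = 30  # assumed rough average of legal moves per chess position

# How many positions a full search must visit at each depth:
for depth in range(1, 6):
    print(f"depth {depth}: {BRANCHING ** depth:>12,} positions")

# Once a first move is actually played, all the sibling subtrees (B2, B3, ...)
# vanish, dividing every count above by 30. In other words, committing to one
# move buys the engine a full extra ply of depth for the same amount of work.
```

That factor-of-30 growth per ply is why engines can't simply "search everything".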
The engine can only search so far ahead; that search limit is its horizon, and anything important lurking just beyond it is invisible. In AI this is called the "horizon effect". (Your English is much better than any other language I speak. You're doing great. Maybe this would make more sense to you, though, if you googled things like the horizon effect and were able to read about it in your native language?)
Some engines, like Stockfish, compensate for this by combining brute-force calculation with knowledge hand-crafted by chess experts. Others, like AlphaZero and Leela Chess Zero, become stronger by playing or analyzing millions of games: moves, and the patterns that lead to them, are reinforced whenever the game ends in a win. With enough training material, moves that started out quasi-random become stronger and stronger.
There isn't just a single way of evaluating a position and giving it a numerical value. If you ask the engine to recalculate further down the line, you'll get different results. Depending on the complexity of the position, that evaluation may change radically if looking deeper lets the engine find a threat it didn't see before.
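You can see that effect in a toy minimax search. This is plain Python over a tiny hand-made game tree with invented scores, not a real chess engine: one move looks great at a glance (a "trap"), but searching deeper refutes it, so the evaluation of the same position jumps around as depth increases.

```python
# Each node is (static_score, children); children == () marks a leaf.
# Scores are from the root player's point of view, and are made up.

def minimax(node, depth, maximizing):
    score, children = node
    if depth == 0 or not children:
        return score  # hit the horizon: fall back to the static score
    results = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(results) if maximizing else min(results)

# "trap" looks winning at first sight (+5), but the forced replies lose badly.
trap  = (+5, [(-3, [(-9, ())])])
solid = (+1, [(+1, [(+2, ())])])
root  = (0, [trap, solid])

for d in (1, 2, 3):
    print(f"depth {d}: eval {minimax(root, d, True):+d}")
# depth 1: eval +5   (falls for the trap)
# depth 2: eval +1   (sees the refutation, switches to the solid move)
# depth 3: eval +2   (deeper still, the number shifts again)
```

Same position, three different numbers, purely because the horizon moved.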
Does that help any more?