I agree. F# also has type providers (https://learn.microsoft.com/en-us/dotnet/fsharp/tutorials/type-providers/), which are a form of type inference on steroids: they assign static types to external resources.
IMHO, whenever possible it is better to use static typing, but there are real-world problems where the nature of the data is extremely dynamic, or the types are very complex. In these cases a naive but rigid static type system can be a problem, so it is better to have either a relaxed static type system where some constraints can be checked at run time (i.e. like a dynamic type system), or a very powerful static type system. In a certain sense, Common Lisp and Racket are examples of the first case, because you can add type annotations to code, so they are both dynamically and statically typed languages.
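To make the CL case concrete, here is a minimal sketch of optional type annotations; `add-prices` is a made-up name for illustration:

```lisp
;; Optional static-style declarations in Common Lisp.
;; Without them, the function is fully dynamic; with them,
;; implementations like SBCL can check and optimize at compile time.
(declaim (ftype (function (fixnum fixnum) fixnum) add-prices))
(defun add-prices (a b)
  (declare (type fixnum a)
           (type fixnum b))
  (the fixnum (+ a b)))
```

The same code runs with or without the declarations, which is exactly the "relaxed" middle ground described above: you tighten the types only where it pays off.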
You can also try https://gtoolkit.com/ The language is the same as Pharo's, but the GUI is better, IMHO.
Glamorous Toolkit/Pharo are better than CL as an IDE/GUI. It is more like a “video game”, because the IDE is a first-class citizen and you can customize it. For example, you can notice that some classes are failing some tests, because there are flags in the IDE.
As a language I prefer CL, because metaprogramming (i.e. macros) is more explicit and clear compared to the Smalltalk approach.
In CL you have something like “(some-dsl-prefix …)”, and everything inside the “(some-dsl-prefix …)” form is clearly in the specified DSL. You can expand the macro to see its semantics.
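A minimal sketch of such a DSL prefix; `with-logged` is a hypothetical macro name, not a standard one:

```lisp
;; A tiny DSL: every form inside (with-logged ...) is rewritten
;; so that its result is printed together with the source form.
;; The DSL's extent is textually obvious from the parentheses.
(defmacro with-logged (&body body)
  `(progn
     ,@(loop for form in body
             collect `(format t "~a => ~a~%" ',form ,form))))

;; You can inspect what the DSL means without running it:
;; (macroexpand-1 '(with-logged (+ 1 2) (* 3 4)))
;; shows the generated PROGN of FORMAT calls.
```

Because the expansion happens at compile time, the whole meaning of the DSL is visible in one place via MACROEXPAND-1, which is the explicitness argued for above.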
In Smalltalk you have to check the metaclass that created the object, but objects can be created at different points from where they are used, so good luck. Then you have to inspect whether the behavior of some standard method has been modified/customized. CL macros run at compile time, while Smalltalk metaprogramming code runs at run time, using reflection and customization of metaclasses.
A CL macro has a better view of the DSL code, because it can walk it. I don’t remember how Smalltalk solves this.
I tried Smalltalk a few years ago, so maybe I missed something.