LLMs cannot provide critique
They can simulate what critique might look like, by way of glorified autocomplete, but they cannot actually provide it, because they do not reason and do not critically think. They match their outputs to the most statistically likely interpretation of the input, in what you could think of as essentially a 3D word cloud.
Any critique that you get from an LLM is going to be extremely limited and shallow, and therefore not the critical critique you require. The longer your text, the less likely the critique you receive will reach the depth at which it may actually be needed.
It’s good for finding mistakes, it’s good for paraphrasing, it’s good for targeted suggestions. It cannot actually critique, which requires a level of consideration that is impossible for LLMs today. There’s a reason why text written by LLMs tends to have distinguishing features, or a lack thereof: it’s a bland, statistically generated amalgamation of human writing. It’s literally a “common denominator” generator.
Of course they are; why wouldn’t they be?
Any change in albedo modifies how much radiation is absorbed and emitted, and the wavelengths at which it’s emitted.
Sure, one tile doesn’t do much, but it does do something, and to a measurable degree. Even if tiny, it’s still quantifiable.
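To make "quantifiable" concrete, here's a back-of-envelope sketch of the absorbed-power change from altering one tile's albedo. All the numbers (irradiance, tile size, before/after albedo values) are illustrative assumptions, not measurements from any real tile:

```python
# Rough sketch: change in absorbed shortwave power when one tile's albedo changes.
# Every constant below is an assumed, illustrative value.

SOLAR_IRRADIANCE = 1000.0  # W/m^2, rough clear-sky surface value (assumed)
TILE_AREA = 0.25           # m^2, a hypothetical 50 cm x 50 cm tile
ALBEDO_BEFORE = 0.10       # dark surface (assumed)
ALBEDO_AFTER = 0.80        # reflective coating (assumed)

def absorbed_power(irradiance_w_m2: float, area_m2: float, albedo: float) -> float:
    """Shortwave power absorbed by a surface: incident power times (1 - albedo)."""
    return irradiance_w_m2 * area_m2 * (1.0 - albedo)

delta = (absorbed_power(SOLAR_IRRADIANCE, TILE_AREA, ALBEDO_BEFORE)
         - absorbed_power(SOLAR_IRRADIANCE, TILE_AREA, ALBEDO_AFTER))
print(f"One tile absorbs about {delta:.0f} W less")  # prints: One tile absorbs about 175 W less
```

Tiny compared to Earth's energy budget, but nonzero and directly calculable, which is the point.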