Your example is clearly a case of fair use. I mentioned a very similar example in the article, so I'm not sure what you mean by "downplaying". Perhaps you can elaborate on that.
Let's say I make a piece of art that inspires you. I will benefit from that, because a portion of your audience will find me. That happens a lot in creative fields. In the case of LLMs, you can't point to an inspiration, which means that if I create an output sampling someone else's work, that other person gets no benefit. Compensation can be as simple as crediting the inspirations (which I recognize is harder with how LLMs work), but it should be possible, to a degree. It won't always work, but there should at least be an attempt. An imperfect solution is better than nothing.
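To make the crediting idea concrete, here is a minimal sketch of one way it could be attempted: rank training samples by embedding similarity to a generated output and credit the closest matches. Everything here (the `training_index`, the random embeddings) is hypothetical, and nearest-neighbor similarity is a crude proxy for influence, not how attribution actually works inside these models.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_influences(output_embedding, training_index, k=5):
    # Rank training samples by how close their embeddings sit to the
    # generated output: a rough, imperfect proxy for "inspiration".
    scored = [(creator, cosine_similarity(output_embedding, emb))
              for creator, emb in training_index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Hypothetical usage: training_index pairs each sample's creator with its
# embedding, built once when the dataset is assembled.
rng = np.random.default_rng(0)
training_index = [(f"artist_{i}", rng.normal(size=128)) for i in range(100)]
output_embedding = rng.normal(size=128)
for creator, score in top_influences(output_embedding, training_index):
    print(f"{creator}: {score:.3f}")
```

This will misattribute sometimes, which is why I said it won't always work. But even a noisy top-k credit list gives the people in it some of the discovery benefit they'd get from human inspiration.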
The other alternative is simpler: use smaller datasets. Pick better samples (multiple results have shown you can get similar results with smaller, better-curated datasets), and pay everyone whose work was included in the dataset. You pay for an entire textbook/course upfront, regardless of the portion you use/complete; this is a similar concept. There will be logistical details to hash out here, but it is again doable.
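A minimal sketch of that workflow, assuming a hypothetical `quality_score` heuristic (in practice this would be deduplication, perplexity filtering, human rating, or some mix): the set of people to compensate falls out of the curation step for free.

```python
def curate_and_credit(samples, quality_score, threshold=0.8):
    # Keep only samples above a quality threshold and record every
    # contributor whose work made it into the final training set,
    # so they can be paid upfront.
    kept, contributors = [], set()
    for sample in samples:
        if quality_score(sample["content"]) >= threshold:
            kept.append(sample)
            contributors.add(sample["creator"])
    return kept, contributors

# Hypothetical usage with a stand-in scoring function.
samples = [
    {"creator": "alice", "content": "a long, well-written passage"},
    {"creator": "bob", "content": "spam"},
]
kept, contributors = curate_and_credit(
    samples, quality_score=lambda text: min(len(text) / 20, 1.0))
print(contributors)  # {'alice'} -- the set of people to pay
```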
From the article:
Moving on to the next argument, this is the more overarching conversation to be had. Technically, these models embed representations into latent space and use that to generate outputs. The models don’t copy so much as take inspiration from the images. If I decided to create a Goku-like character after looking at DBZ, would I owe money to Akira Toriyama? Do all the anime creators pay royalties to their inspirations? No, to both. Should this be any different for large models, which are essentially just sampling a data pool of inspirations to create their outputs?
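For readers who haven't seen the latent-space framing, here is a deliberately toy illustration (random linear maps standing in for a trained encoder and decoder, not any real model): outputs are decoded from latent points shaped by many sources at once, rather than retrieved as a copy of any one of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained encoder/decoder: random linear maps from a
# 64-dim "image" into an 8-dim latent space and back.
W_enc = rng.normal(size=(64, 8))
W_dec = rng.normal(size=(8, 64))

def encode(x):
    return np.tanh(x @ W_enc)

def decode(z):
    return z @ W_dec

# A generated output is decoded from a latent point that blends multiple
# training examples; no single source survives as a retrievable copy,
# which is exactly what makes per-output attribution hard.
z_a = encode(rng.normal(size=64))
z_b = encode(rng.normal(size=64))
generated = decode(0.5 * z_a + 0.5 * z_b)  # blends both "inspirations"
print(generated.shape)  # (64,)
```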