There are other considerations here though. Google suffers reputational harm if users become victims through its platform. It makes the news, it erodes user trust, and it generates friction with regulators and law enforcement. Users may be trained to be ad-averse or to install ad blockers. In addition, these ads generate complaints that cost time to process even when they're rejected.
At the end of the day these scammers are not high-profile advertisers and they're not valuable. They're burner accounts that pay cents to deliver their ads; they're ephemeral, get zapped, reappear, and constantly waste time and resources. Given that YouTube can easily transcribe content, it makes no sense to me that they wouldn't put in some triggers: e.g. when a new advertiser places an ad that mentions "Elon Musk", "Quantum AI", or other such markers, flag it for review.
They have to have a human respond to each and every complaint about that ad anyway. It seems more sensible to automate and flag suspicious ads before the complaints happen.
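The kind of trigger described above could be a very simple heuristic. Here's a minimal sketch, assuming a transcript and the advertiser's account age are available; the phrase list, the 30-day threshold, and the function name are illustrative assumptions, not anything Google actually runs:

```python
# Hypothetical trigger-phrase filter for ad review.
# The phrases and the "new advertiser" cutoff are made-up examples,
# not Google's actual review pipeline.

TRIGGER_PHRASES = {"elon musk", "quantum ai", "guaranteed returns"}
NEW_ADVERTISER_DAYS = 30  # assumed cutoff for "new" accounts

def should_flag_for_review(advertiser_age_days: int, transcript: str) -> bool:
    """Flag ads from new advertisers whose transcript hits a trigger phrase."""
    if advertiser_age_days > NEW_ADVERTISER_DAYS:
        return False  # established advertisers skip this check
    text = transcript.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

# A brand-new account pitching "Quantum AI" gets held for human review;
# an established advertiser with an ordinary ad passes through.
print(should_flag_for_review(2, "Elon Musk's Quantum AI doubles your money"))  # True
print(should_flag_for_review(400, "Our new phone ships on Friday"))            # False
```

A cheap check like this wouldn't catch everything, but it front-loads the human review that complaints force anyway.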