Would it be worth reaching out to them on social media? Something like this would be great PR for them.
Not including work devices - probably my old university files. I intentionally wrote about topics relevant to the career I wanted (which I now have) and they’re genuinely useful for going back and referring to.
Fair enough. Well hopefully it helps some Europeans out! I only noticed it by accident because I sit by the back toilets due to IBS anyway.
Edit: I’ve done this with easyJet, Wizzair and Vueling - for reference
I always buy an aisle seat so I can stretch my legs and get up whenever I need.
I’ve also learned that most airlines (at least here in Europe) fill their seats from front to back. So if you sit near the back and keep an eye on the back row, sometimes it’s completely unoccupied, in which case I move there and can practically lie down.
Yeah I’m potentially moving abroad next year and will have to find a home for her, and I’m wary of giving her to certain people for that reason.
I know all cats like to get under your feet but mine is an absolute jedi master at knowing exactly where to be to inconvenience you the most. I don’t know how she does it. She just knows exactly where I’m going to go so she can be directly in the way. You’d think she’d learn after being accidentally kicked a few times but nope.
I’ve heard of people’s accents changing after receiving a brain injury. Whether that’s more likely than some form of attention seeking, who knows.
As someone who works in the field of criminal law (in Europe, and I would be shocked if it wasn’t the same in the US) - I’m not actually very worried about this. By that I don’t mean to say it’s not a problem, though.
The risk of evidence being tampered with or outright falsified is something that already exists, and we know how to deal with it. What AI will do is lower the barrier for technical knowledge needed to do it, making the practice more common.
While most AI images are still pretty easy to spot for anyone with some familiarity with them, they’re only going to get better, and I don’t imagine it will take long before they’re good enough that the average person can’t tell.
In my opinion this will be dealt with via two mechanisms:

1. Automated analysis of all digital evidence for signatures of AI as a standard practice. Whoever is first to land contracts with police departments to provide bespoke software for quick forensic AI detection is going to make a lot of money.

2. A growth in demand for digital forensics experts who can provide evidence on whether something is AI generated. I wouldn’t expect them to be consulted on all cases with digital evidence, but for it to become standard practice where the defence raises a challenge about a specific piece of evidence during trial.
Other than that, I don’t think the current state of affairs when it comes to doctored evidence will particularly change. As I say, it’s not a new phenomenon, so countries already have the legal and procedural framework in place to deal with it. It just needs to be adjusted where needed to accommodate AI.
What concerns me much more than the issue you raise is the emergence of activities which are uniquely AI-dependent and need legislating for. For example, how does AI-generated porn of real people fit into existing legislation on sex offences? Should it be an offence? Should it be treated differently to drawing porn of someone by hand? Would this include manually created digital images made without the use of AI? If it isn’t made illegal generally, what about when it depicts a child? Is it the generation of the image that should be regulated, or the distribution?

That’s just one example. What about AI-enabled fraud? That’s a whole can of worms in itself, legally speaking. These are questions that in my opinion are beyond the remit of the courts and will require direction from central governments and fresh, tailor-made legislation to deal with.