_fw 1 days ago [-]
Are you trying to tell me, in this the year of our lord 2026, somebody has been (rightfully or wrongfully) arrested for literally ‘crying wolf’?
There’s something hilariously poetic about a ~2,500 year old fable being relevant today, because of AI.
lukan 1 days ago [-]
No, not really. There was a real wolf and the person disturbed the operation.
"South Korean police have arrested a man for sharing an AI-generated image that misled authorities who were searching for a wolf that had broken out of a zoo in Daejeon city.
The 40-year-old unnamed man is accused of disrupting the search by creating and distributing a fake photo purporting to show Neukgu, the wolf, trotting down a road intersection"
sillysaurusx 1 days ago [-]
But there are real wolves when shepherding too. That’s why crying wolf has any power.
To cry wolf is to say there’s a wolf here when it’s actually located elsewhere. The AI photo said there was a wolf at a certain intersection when it was actually located elsewhere.
In fact crying wolf is doubly appropriate because it means disturbing an operation looking for a wolf.
croes 1 days ago [-]
Crying wolf is normally starting the operation while there isn't a wolf.
This is misdirection while there is a wolf
Similar but different
weird-eye-issue 1 days ago [-]
That's completely pedantic, and besides, it's false: there literally wasn't a wolf where he faked the photo in the first place
bryanrasmussen 1 days ago [-]
what if the real criers of wolves were the sheeple we misled along the way?
croes 1 days ago [-]
Crying wolf is crying for help when there is no danger, not when there is a danger just at a different place.
That's not pedantic, that's the meaning of the idiom.
fc417fc802 1 days ago [-]
If you stipulate that everyone must be relaxing at the time, sure. But the core concept of crying wolf is IMO simply a false alert with no particular constraints placed on those responding. I think in this case it simultaneously qualifies as crying wolf as well as misdirection.
croes 24 hours ago [-]
But this isn't a false alert. The alert is real, people just got misdirected.
fc417fc802 17 hours ago [-]
Well, it's definitely a false alert, but I think maybe I see what's bugging you. If an enemy agent intentionally did that with the goal of disrupting operations, we'd call it misdirection, and it seems a bit silly in that scenario to also categorize it as crying wolf. Since there's an ongoing search that the guy was aware of, you view this the same way.
But have you considered that the criteria arose organically as opposed to being engineered top down to account for edge cases such as this? I think in practice the term can probably apply to any instance where you might consider the longer term reputation of an individual or group that is separate from the response team.
Basically you've decided the two things must be mutually exclusive but haven't provided any reasoning or precedent for that constraint.
weird-eye-issue 23 hours ago [-]
It was a false alert in that particular place. I doubt those residents who were alerted had felt like they were previously in immediate danger.
sebastiennight 22 hours ago [-]
Think of it like this: if this same story were happening a couple of centuries ago, pre-Internet, this person who just got arrested would have been sitting on their balcony, crying "the wolf is here! down at the intersection!", causing the hunting team to waste time.
DANmode 12 hours ago [-]
Must not have paid close attention to the details of the story.
tokai 22 hours ago [-]
You don't get to choose pedanticism when it suits you. Please stop.
debo_ 19 hours ago [-]
Not pedanticism. Pedantry.
/joke
weird-eye-issue 24 hours ago [-]
This is real life; there's always a danger, just at a different place.
heliumtera 1 days ago [-]
le reddit mentality
psychoslave 1 days ago [-]
The biggest difference now is that the wolf is actually being sought to protect him¹ from the crowd of super-predators in town, so they can "give him a calm environment for recovery".
¹ Following pronoun variant used in the fine article here.
shlant 23 hours ago [-]
what an incredibly dumb thread this is. OP pointed out something amusing and it's being ruined by completely useless pedantry
arjie 22 hours ago [-]
I find that building a personal blocklist extension for myself lets me treat such threads as fertile grounds. I no longer get annoyed because I am pleased that I can quickly remove a lot of low quality commenters at once. Recommend writing one for yourself (trivial with LLM).
Original comment was clever and subsequent commenters were uninteresting to me. In this case, I only saw it because I’m on my phone which doesn’t have Chrome extensions. Turns out I’d already blocked them.
lukan 21 hours ago [-]
Well, I was a bit misled by the original comment, then I read the article, found the case to be a bit different from the tale, and posted context. But given the pedantic nature of HN, I should probably have been clearer in my wording to avoid the debate about the meaning of words. Because yes, there is a resemblance to the fable, just not literally the same.
colechristensen 22 hours ago [-]
Serious suggestion: we flag responses or have a separate flag for comments which are unnecessarily pedantic / the commenter is functionally illiterate / the commenter didn't read (the three things are somewhat indistinguishable).
sunaurus 22 hours ago [-]
I'm honestly not convinced that it isn't just LLMs going in circles with each other
fennecbutt 23 hours ago [-]
Welcome to HN, I guess
aaron695 12 hours ago [-]
[dead]
abletonlive 19 hours ago [-]
I think people are just traumatized by the content they've consumed from reddit, so anything that reminds them of it, such as OP, triggers something in them because HN is starting to feel the same.
Don't even link me to the comment about how this has always been a complaint on HN, it's boring and it isn't the "gotcha" that you think it is.
pj_mukh 1 days ago [-]
If this was America there would be 20 think pieces in the Atlantic about how AI is ruining our culture and no one would get arrested.
colechristensen 22 hours ago [-]
Perhaps you forgot the fable which... features a real wolf.
moron4hire 1 days ago [-]
There was a real wolf in "The Boy Who Cried Wolf", too.
PUSH_AX 1 days ago [-]
> the person dusturbed the operation
Did they? The article says it's unclear as to their intent.
> Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online.
lukan 1 days ago [-]
Intent or not, it did disturb the search because it misled. And how could posting a wrong location not disturb a search?
stronglikedan 21 hours ago [-]
Intent or not, disturbing the search or not, it's silly that the authorities would arrest a civilian because of their own incompetence.
amenhotep 1 hours ago [-]
Everyone involved in this including the police are civilians
baranul 12 hours ago [-]
That's the true core of the story. Was the man truly trying to mislead authorities, or is it more about authorities using the man as a scapegoat to hide their own incompetence?
lukan 10 hours ago [-]
They were searching for a wolf. (Not really a standard task.)
They don't have total surveillance, so they also rely on public information:
- a citizen posts information about the location of the wolf, a picture!
- authorities adapt their search based on that picture
Where is the incompetence here?
yongjik 9 hours ago [-]
I swear, some commenters here think "the government is incompetent" is an axiom and work backward from there to fill in the details.
computerfriend 18 hours ago [-]
Civilians get arrested for incompetence all the time.
hansmayer 1 days ago [-]
The fable was always relevant; afaic it is still part of the curriculum. It's also a nice illustration of how LLMs screw up everything they touch - and please don't serve me the old "guns don't kill people - people kill people" argument over this.
unsupp0rted 1 days ago [-]
> It's also a nice illustration of how LLMs screw up everything they touch
And you'll be shocked what the kids have been doing with databases and API calls
hansmayer 1 days ago [-]
???
GuB-42 24 hours ago [-]
Guns' primary purpose is to kill. The primary purpose of genAI (image generation goes beyond the scope of LLMs) is not to mislead; it is used successfully by millions of people for purposes that are in no way nefarious, including valuable contributions to fields like medicine.
As with most important advances, like plastics, nuclear power, diesel engines, synthetic fertilizers, computers and the internet, good and bad things came out of it.
It is like saying that plastics screw up everything they touch, for example when a plastic part is used to replace a more durable metal part, but before realizing that plastics are everywhere in our lives, often without a suitable replacement material.
zugi 19 hours ago [-]
> Guns primary purpose is to kill
Since you opted to engage in this off-topic discussion, I'll just point out that the overwhelmingly vast majority of the one billion or so guns currently on the planet have never been used to kill anyone. It's statistically far more accurate to say their primary purpose is to defend, with killing much further down the list.
hansmayer 23 hours ago [-]
:) Wow, you are getting ahead of yourself, aren't you. LLMs are dangerous tools that any moron nowadays has access to. They can fabricate images of wolves roaming the streets, hallucinate fake arguments that sound really convincing, and even coach people into committing suicide, as you have probably heard in at least a dozen recent cases. I can't quite see the comparison you are making. It's not like you have access to a nuclear reactor, or whatever other dangerous technology you wanted to lump in with it, at your fingertips, do you? That's because those other dangerous technologies are carefully managed.
So now follow where I am taking this; I'll explain it really simply. Guns are easily accessible to people in large parts of the US, so some people will use guns to kill other people. Sometimes it's an accident, like kids playing with daddy's gun and shooting their sibling. Some people argue that guns should be restricted, as it would reduce such accidents and incidents. But some other people say "guns don't kill people - people kill people". Now LLMs are a similarly dangerous technology, accessible to almost anyone, not just in the US but around the world, and easier to use: anyone with a basic command of language and the ability to clack on a keyboard can "use" them. To the point that some people not only harm others, like this Korean champ, but also themselves, like those people who were goaded into committing suicide.
My point was, and it should not have been that hard to see, that your argument is precisely of the "guns don't kill people" variety. If the chatbots that we pompously resigned ourselves to calling "artificial intelligence" make mistakes 30-40% of the time, and we use them to verify information, they are dangerous and should not be allowed to be used for purposes such as misleading the public. Because that is dangerous.
Now, in your small, selfish little world, maybe they are "everywhere", meaning you can offload your thinking to them, and maybe you even use them to write emails and summarise other people's emails so you don't completely drown in your boring office job. But that does not mean you should compare them to anything you listed above. Those small "benefits" do not make up for the overall shittiness of this so-called technology.
grosswait 1 days ago [-]
Is there a reason you felt the need to slip this non sequitur in your reply?
hansmayer 1 days ago [-]
I am not sure, but it probably isn't because I wanted to sound smart by using smart sounding words :)
Razengan 1 days ago [-]
> somebody has been (rightfully or wrongfully) arrested for literally ‘crying wolf’?
Willfully diverting limited public service resources, that might potentially be assigned to saving someone's life or health?
Practically a social DoS
colechristensen 22 hours ago [-]
>Willfully diverting limited public service resources, that might potentially be assigned to saving someone's life or health?
This is an accurate criticism of the boy in the fable, if... an unnecessary way to express the idea.
littlestymaar 1 days ago [-]
Yeah, I really don't see the difference with false bomb alerts.
baobabKoodaa 20 hours ago [-]
"The boy who generated wolf"
red-iron-pine 18 hours ago [-]
the 40-year-old unnamed man who generated wolf
TwoNineFive 9 hours ago [-]
The world can't be that advanced when you have people shoving their religious imaginary friends in every sentence and then feigning victimhood for having it called out.
kqp 1 days ago [-]
It sounds like he didn’t actually file a false police report. They don’t even say they asked him whether it’s true. It seems the police just read a post by a random person on the internet, assumed it’s true, then arrested him when it wasn’t. The article is devastatingly light on info, though, so I can’t be sure.
tmtvl 1 days ago [-]
Yeah, we can't actually tell whether the image was posted with the poster going 'hey, @SouthKoreanPolice, wolf is here!', or whether it was xit out without any comment or context, or whether it was in response to a friend who lives in the vicinity of the location in the picture wondering where the wolf was,...
I don't care enough to bother finding out, but seems like the BBC could have done some more journalism, if they were so inclined.
Wowfunhappy 16 hours ago [-]
I can't decide how I feel about this.
The thing is, there's basically no reason to create this photo other than to mislead the authorities. It's purposefully blurry and not aesthetically pleasing. I cannot come up with any plausible artistic intent.
This could have happened without AI. Imagine if the police were trying to catch a serial killer, and I posted on Twitter that I saw him in a small town in Idaho or wherever, not because I had any real information but because I thought it would be amusing to create chaos. Maybe I'd create a bunch of sock puppet accounts with correlated sightings. At no point would I explicitly make a false police report, but the fake posts would get noticed all the same.
Is this illegal? I have no idea, I'm not a lawyer—but it feels like the sort of thing you'd want to have laws against. I'm not sure whether you'd run into first amendment issues in the United States.
TwoNineFive 9 hours ago [-]
I like this reply. It's nuanced. This guy didn't post that picture to be helpful. He did it to troll, and trolling is cruelty, and defending cruelty is immoral.
thebruce87m 22 hours ago [-]
> It seems the police just read a post by a random person on the internet, assumed it’s true
If it was true and police saw it but didn’t act, the fallout for them could be much worse depending on the outcome.
rubslopes 21 hours ago [-]
If this was true, they should have contacted the user nevertheless.
sigbottle 24 hours ago [-]
That was the impression I got as well, but it seems like other people disagree.
Joker_vD 22 hours ago [-]
It's always amusing to see what crimes people demand to have strict liability for, yes. "He posted a wrong location online, of course that'd disrupt the search for the wolf, right to jail, right away".
dfxm12 22 hours ago [-]
Authorities also presented the AI image during a press briefing on the runaway wolf, local media reported. ... Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online.
With the info presented in the article, it sounds like the cops jumped to conclusions, got publicly embarrassed and are now going after him to either save face or get revenge (depending on how credulous you are of LEO).
sigmoid10 1 days ago [-]
Title should be "Man arrested for deceptive and antisocial behavior".
The only reason you are seeing this right now is because it has AI in the title.
maplethorpe 1 days ago [-]
Isn't the technology that enabled the deception noteworthy? Presumably this person wouldn't have been able to do this before AI.
Hypothetically, if a hacking tool was released that let non-technical people hack into sensitive databases, and then a journalist wrote the headline "local man hacks IRS", without any mention of the tool, wouldn't that be a bit irresponsible, to purposely leave that information out?
tete 1 days ago [-]
> Presumably this person wouldn't have been able to do this before AI.
Photoshop? I don't think you need much skill.
conartist6 1 days ago [-]
To make a shooped image good enough to fool the police into thinking they're looking at a completely real picture, you'd think it would take a reasonable amount of skill. If nothing else, you need an exact match picture in terms of lighting and perspective.
sigmoid10 24 hours ago [-]
I guess people here are too young to remember things like the WTC plane guy. Half the people online thought it was genuine, while he did it for the lulz in a few minutes. Nobody cared about inconsistent lighting and perspective. Same way most people don't care about the obvious hallmarks of diffusion model generated pictures today.
maplethorpe 7 hours ago [-]
I'm not too young. I can't remember if I thought it was real at the time, but if I did, I give myself a pass since I was probably viewing it on a 15 inch CRT at 1024x768.
Because we're talking about the ease of Photoshopping a wolf into a scene, I think it's also worth pointing out that floating objects are a lot easier to work with than grounded objects, since cast shadows and bounce lighting are less of an issue. Having said that, it would still require some basic skill to achieve the WTC image which I think you're discounting. You'd need a working knowledge of layers, masks, and the lasso tool, which already would have placed it out of reach for most people at the time. Online resources were much more scarce, so I wouldn't be surprised if this guy was a hobbyist photographer or graphic designer. It definitely wouldn't have been achievable in a few minutes for the average person, and doing the same thing with a wolf would have been far more difficult, and well outside the realm of possibility for anyone who wasn't an expert.
notahacker 24 hours ago [-]
A person who had a Photoshop licence, had played around with layers and colour balance before and was sufficiently motivated to make it look convincing to spend a bit of time tidying it up, sure they could. But I'm not sure that necessarily applies to random people making funny memes of the wolf in their neighbourhood...
maplethorpe 1 days ago [-]
Creating a photorealistic mashup in Photoshop, without AI, takes a lot of skill. Just getting the shadows looking correct takes enough skill in itself, and that's only part of it.
Have you used Photoshop before? You come across as commenting on something you don't understand.
Loughla 1 days ago [-]
[dead]
ForHackernews 23 hours ago [-]
People have lied to the authorities without AI.
latexr 1 days ago [-]
The technology used is very much relevant, because the ease of access and easiness of production are likely to have been the biggest contributors. Had they had to open an image editor and spend a few hours to make something convincing, they would’ve been much less likely to do so, assuming this particular person even had the skills, and would have had multiple opportunities to change their mind.
It’s a crime of opportunity¹, one where you have the idea and act on it on a whim. No opportunity, no crime, and the technology provided the opportunity.
Yes, it's an interesting and novel thing about a topic many people here are interested in.
conductr 21 hours ago [-]
It’s relevant in the sense that I would have never guessed this little prank would get someone arrested.
We need to learn and adapt what we post, see, and believe in photos to avoid arrest. Especially so in the AI reality, because generating these images, and these pranks, has become increasingly easy for anyone to do with no skills and minimal time.
I think the part I find most fascinating, though, is that it's not clear if he took this picture to the police, actively wasting their time, or if he just posted it and they found it and mistakenly took it as truth. I have no insight into SK laws, but to me it would be unfair if they were the ones that used this picture as evidence when it was never meant to be taken seriously.
jamesnorden 1 days ago [-]
The one time the headline isn't misleading, you want it changed?
mminer237 1 days ago [-]
That would be so vague as to be useless.
heddycrow 1 days ago [-]
I so wish this were true. Put AI in the title, garner instant attention.
raincole 1 days ago [-]
Except the actual title here is clearer. Your suggestion is so anti-AI-clickbait that it overcorrected and became a bad title again.
If Tesla (insert any car manufacturer you hate) ran over a kid I'd like to see the title say it, instead of "Tesla fined for violating traffic laws."
sebastiennight 22 hours ago [-]
I'd say "Tesla" in your example would be the equivalent attention-grabber to "AI" in the article here, so your non-clickbait example might have been "car manufacturer fined for car accident"
darkwater 1 days ago [-]
Yes, and at the same time we should ask the question: would the intersection between "people who think this is a funny thing to do" and "people with the technical capabilities to actually generate something that misleads police" [1] have been > 0 before GenAI?
[1] waiting for some example where foolish policemen were outsmarted with simple tricks /s
bblb 1 days ago [-]
How about not believing everything that's posted to the Internet? This could've easily been done with Photoshop in the pre-AI era.
rwmj 1 days ago [-]
"easily" is doing some heavy lifting there. Is Photoshopping this image together really easier than prompting an AI?
RIMR 22 hours ago [-]
Yes, it was easy. Just because AI is "easier" doesn't mean that photoshopping a dog into a picture was ever particularly hard.
culi 6 hours ago [-]
Making it convincing certainly was. Being able to generate an image in 30 seconds is completely different from having to dedicate an hour of your day to it.
In the gun debate, there's something called "Weapon Instrumentality Effect"
Background image of some local street, an image of a wolf, and the object selection tool (the pre-AI-era version). Touch it up a little and add some filters to drop the quality.
Sure, a little bit more involved than the two-second AI prompt, but a 3-minute job for the lulz photoshoppers.
latexr 1 days ago [-]
No, it’s not “a little bit more involved”, it’s significantly more involved because it also requires the skills to even know what you’re talking about, the experience of having done it before to be convincing, the inclination to spend the time on it, downloading Photoshop itself, possibly cracking it… There are a lot of steps, most of which most people haven’t done and don’t know how. With generative AI, you just open a website and type a few words.
There are significantly more people able to type a few words into a prompt than people who can use an image editor fast and convincingly and would be inclined to waste their time on this kind of fake.
travisjungroth 22 hours ago [-]
Or like $5. This is the kind of thing it was very easy to hire people for. Dropping the price to near zero exploded the usage, though.
latexr 21 hours ago [-]
> This is the kind of thing it was very easy to hire people for.
But would you? People grumble about $0.99 for an app they’ll use everyday, I doubt paying even $5 (and waiting for a result!) for a fake image to mislead police is high on anyone’s list.
Making this image was likely fast and free. It’s a crime of opportunity.
This argument has always seemed a bit silly to me. It's like responding to someone saying that it's easy to set up a hello world web server without AI with "but but but first you need to know what a web server is and how to use the command line and download a compiler/runtime and and and", as if there were some dire shortage of people who are already programmers. There are literally millions of us, the existence of one who would want to put up a website about pretty much anything you can think of is not some huge shock. We know this because we were literally there in that pre-AI world full of sites about all sorts of things! Just because you can now do the same thing with a prompt doesn't mean it was magically unlikely to find a site about, I don't know, the social lives of cane rats before.
latexr 21 hours ago [-]
> There are literally millions of us
And there are literally billions of everyone else.
Do you not see that the amount of fake images has exploded with free access and ease of use? That's what a tool does. It's silly to argue generative AI doesn't make a difference in the proliferation of fake images, just like it would be to argue that digital photography on a small multi-purpose device that is always with you doesn't make people take more pictures.
filleduchaos 20 hours ago [-]
> It’s silly to argue generative AI doesn‘t make a difference in the proliferation of fake images
What I actually said couldn't be any clearer, and it's rather silly to twist my words into a strawman you can argue against.
latexr 20 hours ago [-]
> What I actually said couldn't be any clearer
I very much disagree, since you went on to make your whole point with an unrelated matter and apparently I misunderstood your point. Maybe you don’t know how to make your point clearer, but that isn’t the same as it being impossible to be clearer.
> and it's rather silly to twist my words
There was no twisting intended, and if I misconstrued your point I’d appreciate the correction (i.e. clarification).
Specifically: If you do agree that access to generative AI increases the proliferation of fake images (do you? I’m really asking. Sounds like you might), then what exactly is your objection to the original point?
cozzyd 21 hours ago [-]
yes but could your $RELATIVE set up a hello world web server without AI?
filleduchaos 20 hours ago [-]
Why does it HAVE to be my relative when the actual question is whether hello world web servers exist?
I don't know why people are so determined to miss the point that "people can do [image manipulation] faster with AI" does not magically mean that people weren't doing it before AI, at scale mind you. Did y'all really unironically believe EVERY single image you saw on the internet prior to the past few years was entirely real and entirely what it was presented as? My goodness
latexr 19 hours ago [-]
> Why does it HAVE to be my relative when the actual question is whether hello world web servers exist?
No, that is not the question. I mean, maybe it’s the question you are asking, but no one else is.
> I don't know why people are so determined to miss the point that "people can do [image manipulation] faster with AI" does not magically mean that people weren't doing it before AI, at scale mind you.
That is not the point. The argument is simple: easier and cheaper access to a tool makes more people use the tool more often. Manual image editing is harder and takes longer than typing words into a box, thus more people do it more often and with fewer thought.
If you have the idea to manually edit a wolf into a street, you'll first have to go to your computer or tablet, have a bunch of skills, and spend time doing it. You have plenty of opportunity to say "fuck it, I'll do something else". Most people drop at that point because they can't be bothered.
With generative AI, you can be so drunk you can barely stand, sitting on a portable toilet at a concert, haphazardly type a few words and get the result, immediately and for free.
Do you not see the difference between those two?
We can go further back: You could do image manipulation on film, before digital was a thing. But few people knew how or had access to the necessary chemicals and dark room. Do you not think the ease of access and digital tools increased the amount of people doing it?
> Did y'all really unironically believe EVERY single image you saw on the internet prior to the past few years was entirely real and entirely what it was presented as?
No, no one believed that and no one is making that argument and I think you know that.
darkwater 24 hours ago [-]
I could never do it without investing a large amount of time into PS, and getting stressed a lot in the meanwhile.
dfxm12 22 hours ago [-]
"Easily in the pre-AI era" is different from "easier" & "easier" wasn't the claim.
To answer your question, relative ease is a function, in part, of one's skills & resources, so, it's certainly a reasonable claim to make, but will be different person to person.
pixl97 1 days ago [-]
And they easily could have been arrested for making photoshops of the same event.
NooneAtAll3 23 hours ago [-]
even better - it could've been an old photo in pre-photoshop era
rm30 1 days ago [-]
The BBC article doesn't specify the text posted with the image, but I clearly see a procedural gap in the police department. Accusing a man who only posted a photo, reorganizing the search based on an unverified photo: it's a big failure.
Did Orwell teach us nothing? What will they do with the next Visitors' spaceship photo?
culi 6 hours ago [-]
Posting misinfo online like that, during a hunt that has the entire nation's attention, is like yelling fire in a crowded theater. Speech acts aren't protected when they infringe on other rights, like the survival of an entire subspecies of wolf that is extinct in the wild.
pluc 1 days ago [-]
Get used to it, it's gonna keep happening since we're dumb enough to create a technology that mirrors reality with no safeguards whatsoever.
gmerc 1 days ago [-]
Oh actually penalizing people does help
kreco 1 days ago [-]
Penalizing people is slow and does not scale as much as AI creations that can be mass produced.
pixl97 1 days ago [-]
And if the person isn't in your country?
Nasrudith 23 hours ago [-]
Safeguards are a myth we like to tell ourselves exists so we don't have to reckon with the effects of human agency with tools. Where are the safeguards on knives and guns that make them only usable to hurt bad people?
prmoustache 1 days ago [-]
> Neukgu is part of a programme at O-World to restore the Korean wolf, which once roamed the Korean Peninsula but is now considered extinct in the wild.
I don't understand. Shouldn't they have let him go if the idea is that they should still roam in the wild? Why force him back into a zoo?
spiffyk 1 days ago [-]
Pretty sure if you let only a handful of individuals from an almost-extinct species roam around freely in an uncontrolled environment, chances are pretty high something is going to kill them off before they reproduce, hence why they are almost-extinct.
The zoo provides a controlled environment needed to restore the species.
EDIT: typo/word ordering
CrazyStat 23 hours ago [-]
Also, careful breeding to retain as much genetic diversity as possible is important to avoid collapse in small populations. Even if small local pockets survive, if each pocket is only able to inbreed with itself that will cause problems.
Our local children's museum is part of a network of sites working to restore red wolf [1] populations. Every few years they get new wolves as the coordinators move young wolves around to optimize mating pairs.
India is trying to increase its wild tiger population, but that doesn't mean you let tigers roam around in the middle of New Delhi.
05 1 days ago [-]
Maybe it’s because wolves are genetically dogs and will cross breed and the conservation program supposedly needs to increase the numbers of that particular breed and not just wolves/dogs in general?
christoff12 1 days ago [-]
I'm a little surprised zoo animals aren't chipped with some kind of beacon locator for incidents such as these.
ErroneousBosh 1 days ago [-]
What sort of size do you think that would be?
Luc 1 days ago [-]
Small and low energy enough that tiny migratory birds can wear them for months. Externally worn of course (e.g. attached to the ear, for a wolf).
You could adjust the firmware of a wildlife tag to start transmitting location every 10 minutes when the animal leaves a geo-fence.
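That geo-fence behaviour can be sketched in a few lines. This is only an illustration of the idea, assuming a hypothetical tag whose firmware knows its own position; the coordinates, fence radius, and report intervals are made-up values, not anything a real wildlife tag uses.

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def report_interval_s(pos, fence_center, fence_radius_m):
    """10-minute reports outside the fence, hourly heartbeats inside."""
    outside = haversine_m(*pos, *fence_center) > fence_radius_m
    return 600 if outside else 3600

# Made-up enclosure coordinates and a 500 m fence radius.
zoo = (36.3695, 127.3978)
assert report_interval_s(zoo, zoo, 500) == 3600             # inside: heartbeat only
assert report_interval_s((36.40, 127.40), zoo, 500) == 600  # escaped: frequent reports
```

The battery-life win comes entirely from the duty cycle: the tag stays almost silent until the fence is breached, then spends power only when a search is actually underway.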
ErroneousBosh 1 days ago [-]
Bird ones are easy because birds are high in the air, so there's nothing to block the signal.
They are also not implanted in the birds, but are a relatively large "backpack" or leg tag.
christoff12 22 hours ago [-]
No idea. Surely some enterprising engineer could come up with a viable solution that could be sold to zoos all over, though.
tgsovlerkhgsel 15 hours ago [-]
Somewhat smaller than a regular airtag.
chrisweekly 1 days ago [-]
size of chip? they're tiny. dog owners typically have the vet "chip" their pet as a puppy. full-grown dog doesn't need a bigger chip.
codebje 1 days ago [-]
Those chips need to be scanned from about 3cm away. If you want a locator tag, it needs to carry enough power to broadcast a signal a useful distance. Still, a microchip is handy if you're not sure if it's your tiger you found.
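The range gap between a passive chip and a powered beacon falls out of basic link-budget arithmetic. A rough free-space sketch (deliberately idealized: it ignores the near-field physics passive RFID actually relies on, plus terrain, foliage, and body attenuation, so treat the numbers as illustrative only):

```python
import math

C = 299_792_458  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Idealized free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Every 10x increase in distance costs another 20 dB of signal, which is
# why something readable at ~3 cm with a handheld scanner is hopeless at
# search-party distances (433 MHz chosen as an arbitrary example band).
extra_loss = fspl_db(1000, 433e6) - fspl_db(0.03, 433e6)
print(round(extra_loss, 1))  # ~90.5 dB more loss at 1 km than at 3 cm
```

Closing a ~90 dB gap means either a far bigger transmit power budget or a far bigger antenna, which is exactly why locator tags are external battery-powered units rather than implanted microchips.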
jannes 1 days ago [-]
Those chips cannot track a dog's location
shantnutiwari 21 hours ago [-]
He wasn't arrested because he posted an AI photo. He was arrested because he was wasting police time during a genuine problem: the police hunting for an escaped wolf.
He would have been arrested even if the image wasn't AI.
The title and article are very...tabloid-y
sveme 21 hours ago [-]
Why is the Content Credentials Standard [1] not more widely supported? It's basically hardware signing of images, which would make it fairly straightforward to identify AI-generated content. It needs to be supported by smartphones as well, of course.

[1] https://contentcredentials.org/
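The tamper-evidence idea behind hardware signing can be sketched briefly. Note the hedge: the real C2PA/Content Credentials design embeds a signed manifest using certificate-based asymmetric signatures; the stdlib HMAC below is only a symmetric stand-in so the example stays dependency-free, and the device key and image bytes are invented.

```python
import hashlib
import hmac

# Hypothetical key burned into the camera at manufacture (in C2PA this
# would be a private key with a certificate chain, not a shared secret).
CAMERA_KEY = b"secret-key-burned-into-camera-hw"

def sign_capture(image_bytes: bytes) -> str:
    """Produce a provenance tag for the image at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, tag: str) -> bool:
    """Check the image still matches the tag issued at capture."""
    return hmac.compare_digest(sign_capture(image_bytes), tag)

original = b"\x89PNG...raw sensor data..."
tag = sign_capture(original)
assert verify_capture(original, tag)                 # untouched image verifies
assert not verify_capture(original + b"edit", tag)   # any alteration fails
```

The point of the scheme is that an AI-generated image simply never gets a valid capture-time tag, so its absence (or a failed verification) is the tell.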
The BBC could have done a better job here with the headline. How about, “South Korea police arrest man whose prank cost extensive resources”? The device of the prank is irrelevant, really; it's the consequence that matters. Involuntary manslaughter works the same way.
Joker_vD 19 hours ago [-]
Last time I checked, "costing extensive resources" was not a crime. And even if it were, you'd still ideally need mens rea.
estebank 17 hours ago [-]
When was the last time you checked South Korean law?
stingraycharles 1 days ago [-]
South Korea has some very specific (and unusually harsh) laws around deepfakes. I was under the impression that it was only about impersonating people, but apparently it’s broader.
msh 1 days ago [-]
I think many places, even without specific deepfake laws, would prosecute someone who used a fake image to mislead the police.
jonnonz 1 days ago [-]
This is how the future will look!
Ylpertnodi 23 hours ago [-]
Nay, poor BBC journalism has been around for a while, now.
antiloper 1 days ago [-]
Need this in the west as well
Gigachad 1 days ago [-]
IMO you should be legally required to disclose that a video has been AI generated when you share it.
sammy2255 1 days ago [-]
What is the charge?
fredoralive 1 days ago [-]
The article says:
“Authorities are investigating him for disrupting government work by deception, an offence that carries up to five years in prison or a maximum fine of 10 million Korean won ($6,700; £5,000)”
Somewhat harsher than the UK at least, where “wasting police time” would only get you six months or around a £2500 fine.
heddycrow 1 days ago [-]
It is, quite frankly, completely wrong that this man was arrested—if anything, by this line of reasoning, it should have been an artist instead—since AI, as we are told, merely makes copies of what hard-working human artists have already created and shared on the internet.
AI is plagiarism—full stop—nothing more, nothing less.
Of course, this point could have been made without sarcasm (and AI tells for parody)—I’m aware—but that would remove a certain… texture from the argument. And where, exactly, is the fun in that?
idbehold 1 days ago [-]
The amount of punctuation and terrible sentence structure make this nearly incomprehensible.
heddycrow 1 days ago [-]
Yeah, I might have gone overboard with the snark here. It seems even the line hinting that this was snarky was lost.
If it helps, imagine the text more as a work of art than an instruction manual. Art matters.
nslsm 23 hours ago [-]
[dead]
thrownaway561 22 hours ago [-]
The dude openly admits that he posted the image "for fun", so there was clearly nefarious intent: he purposely wanted to confuse the police and the investigation. I don't see why people are trying to defend this dude as simply "posting a picture of a wolf". I guess people will defend any sort of bad behavior in this day and age.
valicord 21 hours ago [-]
How do you immediately jump from "fun" to nefarious behavior? Are you implying that it's impossible for anyone to have fun in a non-nefarious way?
bhanuhai2 23 hours ago [-]
Solid
oldbear 19 hours ago [-]
the news paywall. wow ok.
Aegis_Labs 23 hours ago [-]
[dead]
junaru 1 days ago [-]
[flagged]
concinds 1 days ago [-]
Antisocial behavior should face consequences. I'm not Asian and I don't understand your mindset.
cwillu 1 days ago [-]
I think a fine is a perfectly reasonable consequence.
catcowcostume 1 days ago [-]
How do we flag racism on HN?
cwillu 1 days ago [-]
You click the flag button, the same as any other objectionable/antisocial comment.
keybored 1 days ago [-]
It’s not straightforward for comments. The flag button does not appear on comments when you are viewing the thread.
- Click on the timestamp for the comment which will take you to the comment page
- Then you can click the flag button
dist-epoch 1 days ago [-]
So you are saying authorities should ignore public posts unless they are specifically sent to them?
What if another citizen forwarded the image to the police, not knowing it was AI generated? Should it have been ignored because it was not made by the sender? Should it have been ignored because it was forwarded from a public post?
hsbauauvhabzb 1 days ago [-]
Asian specifically? Westerners are just as bad if you look into that whole nation state influencing foreign elections thing.
kotaKat 1 days ago [-]
"disrupting government work by deception" sounds like such a busywork charge here trying to do some heavy lifting. An absolutely tough, rough criminal out here...
nubg 1 days ago [-]
Can you clarify what you mean by Asian mindset here? Trying to save face?
dilawar 1 days ago [-]
Not OP. Indian here. I find [1] to be a perfect example.

[1] https://www.thehindu.com/news/national/fir-against-reporter-...
But have you considered that the criteria arose organically as opposed to being engineered top down to account for edge cases such as this? I think in practice the term can probably apply to any instance where you might consider the longer term reputation of an individual or group that is separate from the response team.
Basically you've decided the two things must be mutually exclusive but haven't provided any reasoning or precedent for that constraint.
/joke
¹ Following pronoun variant used in the fine article here.
Original comment was clever and subsequent commenters were uninteresting to me. In this case, I only saw it because I’m on my phone which doesn’t have Chrome extensions. Turns out I’d already blocked them.
Don't even link me to the comment about how this has always been a complaint on HN, it's boring and it isn't the "gotcha" that you think it is.
Did they? The article says it's unclear as to their intent.
> Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online.
They don't have total surveillance, so they also rely on public information:
- a citizen posts information about the location of the wolf, a picture!
- authorities adapt their search based on that picture
Where is the incompetence here?
And you'll be shocked what the kids have been doing with databases and API calls
As with most important advances (plastics, nuclear power, diesel engines, synthetic fertilizers, computers, the internet), good and bad things came out of it.
It is like saying that plastics ruin everything they touch, for example when a plastic part is used to replace a more durable metal part, while forgetting that plastics are everywhere in our lives, often without a suitable replacement material.
> Guns primary purpose is to kill
Willfully diverting limited public service resources, that might potentially be assigned to saving someone's life or health?
Practically a social DoS
This is an accurate criticism of the boy in the fable, if... an unnecessary way to express the idea.
I don't care enough to bother finding out, but seems like the BBC could have done some more journalism, if they were so inclined.
The thing is, there's basically no reason to create this photo other than to mislead the authorities. It's purposefully blurry and not aesthetically pleasing. I cannot come up with any plausible artistic intent.
This could have happened without AI. Imagine if the police were trying to catch a serial killer, and I posted on Twitter that I saw him in a small town in Idaho or wherever, not because I had any real information but because I thought it would be amusing to create chaos. Maybe I'd create a bunch of sock puppet accounts with correlated sightings. At no point would I explicitly make a false police report, but the fake posts would get noticed all the same.
Is this illegal? I have no idea, I'm not a lawyer—but it feels like the sort of thing you'd want to have laws against. I'm not sure whether you'd run into first amendment issues in the United States.
If it was true and police saw it but didn’t act, the fallout for them could be much worse depending on the outcome.
With the info presented in the article, it sounds like the cops jumped to conclusions, got publicly embarrassed and are now going after him to either save face or get revenge (depending on how credulous you are of LEO).
The only reason you are seeing this right now is because it has AI in the title.
Hypothetically, if a hacking tool was released that let non-technical people hack into sensitive databases, and then a journalist wrote the headline "local man hacks IRS", without any mention of the tool, wouldn't that be a bit irresponsible, to purposely leave that information out?
Photoshop? I don't think you need much skill.
Because we're talking about the ease of Photoshopping a wolf into a scene, I think it's also worth pointing out that floating objects are a lot easier to work with than grounded objects, since cast shadows and bounce lighting are less of an issue. Having said that, it would still require some basic skill to achieve the WTC image which I think you're discounting. You'd need a working knowledge of layers, masks, and the lasso tool, which already would have placed it out of reach for most people at the time. Online resources were much more scarce, so I wouldn't be surprised if this guy was a hobbyist photographer or graphic designer. It definitely wouldn't have been achievable in a few minutes for the average person, and doing the same thing with a wolf would have been far more difficult, and well outside the realm of possibility for anyone who wasn't an expert.
Have you used Photoshop before? You come across as commenting on something you don't understand.
It’s a crime of opportunity¹, one where you have the idea and act on it on a whim. No opportunity, no crime, and the technology provided the opportunity.
So yes, the technology used matters.
¹ https://en.wikipedia.org/wiki/Crime_of_opportunity
http://web.archive.org/web/20250201051019/https://www.ojp.go...
We need to learn to adapt what we post, and what we see and believe in photos, to avoid situations like this arrest. Especially so in the AI era, because generating these images, and these pranks, has become easy for anyone to do with no skills and minimal time.
I think the part I find most fascinating, though, is that it's not clear whether he took this picture to the police, actively wasting their time, or whether he just posted it and they found it and mistakenly took it as truth. I have no insight into SK law, but to me it would be unfair if they were the ones who used this picture as evidence when it was never meant to be taken seriously.
If Tesla (insert any car manufacturer you hate) ran over a kid I'd like to see the title say it, instead of "Tesla fined for violating traffic laws."
[1] Waiting for some example where foolish policemen were outsmarted with simple tricks /s
In the gun debate, there's something called "Weapon Instrumentality Effect"
https://journals.sagepub.com/doi/10.1177/1088767907306507
Sure, a little more involved than a two-second AI prompt, but a three-minute job for the lulz photoshoppers.
There are significantly more people able to type a few words into a prompt than people who can use an image editor fast and convincingly and would be inclined to waste their time on this kind of fake.
But would you? People grumble about $0.99 for an app they’ll use everyday, I doubt paying even $5 (and waiting for a result!) for a fake image to mislead police is high on anyone’s list.
Making this image was likely fast and free. It’s a crime of opportunity.
https://en.wikipedia.org/wiki/Crime_of_opportunity
And there are literally billions of everyone else.
Do you not see that the amount of fake images has exploded with free access and ease of use? That's what a tool does. It's silly to argue that generative AI doesn't make a difference in the proliferation of fake images, just like it would be silly to argue that digital photography on a small multi-purpose device that is always with you doesn't make people take more pictures.
What I actually said couldn't be any clearer, and it's rather silly to twist my words into a strawman you can argue against.
I very much disagree, since you went on to make your whole point with an unrelated matter and apparently I misunderstood your point. Maybe you don’t know how to make your point clearer, but that isn’t the same as it being impossible to be clearer.
> and it's rather silly to twist my words
There was no twisting intended, and if I misconstrued your point I’d appreciate the correction (i.e. clarification).
Specifically: If you do agree that access to generative AI increases the proliferation of fake images (do you? I’m really asking. Sounds like you might), then what exactly is your objection to the original point?
I don't know why people are so determined to miss the point that "people can do [image manipulation] faster with AI" does not magically mean that people weren't doing it before AI, at scale mind you. Did y'all really unironically believe EVERY single image you saw on the internet prior to the past few years was entirely real and entirely what it was presented as? My goodness
No, that is not the question. I mean, maybe it’s the question you are asking, but no one else is.
> I don't know why people are so determined to miss the point that "people can do [image manipulation] faster with AI" does not magically mean that people weren't doing it before AI, at scale mind you.
That is not the point. The argument is simple: easier and cheaper access to a tool makes more people use the tool more often. Manual image editing is harder and takes longer than typing words into a box, so with generative AI more people do it, more often, and with less thought.
If you have the idea to manually edit a wolf into a street, you’ll first have to go to your computer or tablet, have a bunch of skills, and spend time doing it. You have plenty of opportunity to say “fuck it, I’ll do something else”. Most people drop at that point because they can’t be bothered.
With generative AI, you can be so drunk you can barely stand, sitting on a portable toilet at a concert, haphazardly type a few words and get the result, immediately and for free.
Do you not see the difference between those two?
We can go further back: You could do image manipulation on film, before digital was a thing. But few people knew how or had access to the necessary chemicals and dark room. Do you not think the ease of access and digital tools increased the amount of people doing it?
> Did y'all really unironically believe EVERY single image you saw on the internet prior to the past few years was entirely real and entirely what it was presented as?
No, no one believed that and no one is making that argument and I think you know that.
To answer your question, relative ease is a function, in part, of one's skills & resources, so, it's certainly a reasonable claim to make, but will be different person to person.
Did Orwell teach anything? What will they do with the next Visitors' spaceship photo?