But many of the threats were known. Bin Laden had tried to take down the towers a few years before. There was chatter all over the place. Yet no human sat down with a cup of coffee, looked at it all, and said, "Wait a minute, something's not right here."
Because if this is what we are talking about, I am not sure that running an algorithm would have stopped that guy in Las Vegas. Or the guy here who choked his wife.
In a sceptical world I am just questioning what this is about. Is it about stopping these sorts of things, or is it an easy way to catch a person who might have inadvertently let his car insurance lapse ten years ago? Or to affect immigration reform by proxy? Or to sell this data on to some corporation?
I don't know how much serious crime has been stopped in Britain via CCTV, but it has certainly been a cash cow for popping people who may be travelling 10 miles an hour over the speed limit. But a guy with a sharpened screwdriver just pulls his hood up.
Comparing pre-9/11 to post-9/11, or to a decade further on, is night and day.
There are trillions of data points flying around in systems that are (often intentionally and for good reason) not well interconnected. There are not enough people on earth to process that much information, nor could they hope to do it while a flight is in the air from Toronto to NYC.
No algorithm is perfect, and even a perfect algorithm fed imperfect data is going to make mistakes.
Still, you do the best you can. If a computer can pass the 90%+ of travelers who present no problems, and flag the remaining 10% for human review along with the reasons they were flagged, that’s certainly better than humans trying to pore over 100% and make judgements without any outside data to help them.
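The triage described above can be sketched in a few lines. This is only an illustration of the pass-most, flag-few idea; the record fields and the rules themselves (a watchlist match, a document mismatch) are hypothetical examples, not any agency's actual screening criteria.

```python
# Hypothetical sketch of automated triage: clear travelers with no
# flags, and send the rest to human review with the reasons attached.
from dataclasses import dataclass, field

@dataclass
class Traveler:
    name: str
    watchlist_hit: bool = False        # assumed example signal
    document_mismatch: bool = False    # assumed example signal
    flags: list = field(default_factory=list)

def triage(travelers):
    cleared, review = [], []
    for t in travelers:
        # Each rule that fires records a human-readable reason,
        # so the reviewer sees *why* a case was flagged.
        if t.watchlist_hit:
            t.flags.append("possible watchlist match")
        if t.document_mismatch:
            t.flags.append("travel document data mismatch")
        (review if t.flags else cleared).append(t)
    return cleared, review

cleared, review = triage([
    Traveler("A"),
    Traveler("B", watchlist_hit=True),
])
```

The point of the design is that the human workload shrinks to the flagged minority, and each flagged case arrives with its reasons rather than as an unexplained score.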
Of course it’s not perfect. But it’s the best they can do at present and a heck of a lot better than nothing.
It’s asking people from non-Global Entry countries to provide the social media user names and email addresses they’ve used in the last five years. It isn’t usernames and passwords. They can’t see anything more than a correctly aimed Google search would lead them to; it’s just faster and more accurate. And even then, that’s only going to be looked at after the person has been flagged for human review for some other reason.
There are some interesting predictive software applications targeted at fake news and extremism detection. Some of those could be weaponized with this information, but they could be without it too.
There’s really no way for this to impact immigration reform by other means, or discrimination, or whatever else.