10 Ethical Dilemmas in Science
Buying youth
No matter how much you spend, you’re still going to get old
Fueled by concern over both premature aging and skin diseases (and,
of course, a good dose of vanity), more and more people are willing to
shell out hundreds of dollars on high-tech devices that monitor and treat their skin.
But many of these devices have little or no reliable scientific evidence
to back them up. Even research done by the manufacturers
themselves isn’t terribly convincing since it usually involves very few
test subjects and includes no other lifestyle information about
participants for context. But most consumers never see that research
anyway.
Now, the fact that there’s little independent research on beauty tech is
partly because the only people who want to pay to prove that a device
works are the ones selling it. And some devices simply don’t lend
themselves to double-blind placebo-controlled studies – how are you
going to administer a placebo for a microneedler, for example?
There’s no doubt that our obsession with beauty and with the
“quantified self” is driving the propagation of questionable beauty tech.
But is this really letting us live our best lives or is it adding an extra
layer of anxiety about how we should look?
Putting aside the cultural issues at play, the main question is whether
this tech does any good. In cases where it helps detect skin cancer,
sure. But how about when it gives you a wrinkle score? Who are you
being scored against?
Our faces, ourselves?
There’s a lot of information available online about each and every one
of us, and companies are already incorporating what they can into their
detailed assessments. But there’s a big difference between what a
company can learn about a candidate and what
it should know about them. How much data is a company entitled
to gather about a (potential) worker anyway? If HR can already look at
your social media accounts and credit history, is scraping the web for
your data and using it to quantify your fitness for a job really pushing
the boundary? Well, yes.
Predatory journals
Just because a piece of research is published in a journal doesn’t mean
it’s legitimate. But how are we to know that? In this age of being able
to find nearly anything online, what’s stopping us from coming across
the abstract of a piece of bad research and taking it as scientifically
valid, whether we’re students working on a project or journalists
reporting on a story? The answer is: almost nothing.
It’s no surprise that predatory journals have popped up if you think about the
opportunities that come with supposed expertise in an area. In
academia, those on the tenure-track must publish, and they are
judged by committees made up of faculty from various fields. Those
who don’t know what to make of the reputation of various journals
often fall into the trap of judging scholars based on quantity rather
than quality. Many scholars have been duped by invitations from
predatory journals to publish their work there – after an important
paper that they’ve spent months or years writing has been rejected by
legitimate journals (sometimes simply because they receive more
manuscripts than they can possibly publish), they may be happy just
to be published somewhere.
These days we’re simply swimming in fake data with no real way to tell
– sometimes even the scientists themselves can’t – if research is
legitimate. Do you know how to tell if an article or journal is
legitimate? And how do we measure this problem against another
known issue in scientific publishing – the replication crisis that plagues
research in even the most prestigious journals?
SAFEHOME
Many have seen this project as an effort to deflect attention away from
the gun rights debate, and it remains to be seen just what would
happen to those identified as high risk by the algorithms SAFEHOME
might employ.
While the project would begin by collecting and processing data from
volunteers who would give up any expectation of privacy, it does open
the door to a future where the government can surveil those deemed
mentally ill. And there’s no way of knowing yet how transparent the
project would be.
Arresting the innocent
A culture of surveillance
Critics are concerned about the privacy of the behavioral data that
ClassDojo, a popular classroom behavior-tracking app, collects, as well as
the psychological effects that constant measurement has on children.
The app is meant to reinforce good behavior and help teachers and parents
intervene with students exhibiting “bad” behavior. But is it yet another
insidious tool in our new surveillance culture as well as our obsession
with the quantification of the self?
ClassDojo claims the concerns voiced by critics are unfounded.
(Stray photo caption: a first-grade classroom in the village of Ait Sidi Hsain, near Meknes, Morocco, part of the Tayssir conditional cash transfer program. Photo: Arne Hoel)
ClassDojo isn’t going anywhere any time soon – in fact, its use will
probably grow. But that’s why it’s important for people to keep asking
questions about whether and how students are benefitting and how
this exceptionally personal data is being kept safe (especially
since schools are popular targets for hackers and ransomware).
Grinch bots
We first heard about Grinch bots in 2017, when online entities began
using cyberbots to snap up popular goods as soon as they hit the
market or went on sale for Black Friday. The goal is to corner the
supply of everything from children’s toys to
event tickets and then profit by jacking up their resale prices on
sites like eBay.
While states such as New York have tried to crack down on these
cyber retail stalkers, they’re tough to find and they can get past
CAPTCHA software by employing humans to do that work for very little
money.
“While it appears that internet traffic is at its annual high during the
prep days before Black Friday/Cyber Monday, 37% of that traffic is
comprised of bots, not holiday shoppers…Bad bots are at their highest
level a few days prior to Black Friday/Cyber Monday,
representing 96.6% of total traffic to retailers’ login pages. This
indicates that bot masters are using this time as preparation days
before the surge in customer shopping.”
Maybe now is the time to think about just how much you really need
that shiny new item.
Project Nightingale
Project Nightingale was announced in November 2019 and
raised approximately one round of panic before the news cycle moved
on to the next big thing. But just because it’s not on the front page
doesn’t mean it’s gone away (and just because the project could start
in the U.S. doesn’t mean it’s irrelevant to the rest of the world).
First, let’s start with Ascension, the Catholic health care system that
also happens to be the second-largest in the United States. With
roughly 2,600 hospitals, doctors’ offices and other related facilities
spread over 21 states, it holds tens of millions of patient records – and
these records have comprehensive health information on millions of
Americans. It’s a valuable resource for anyone wanting to do health
research.
Then along came Google, a company that has had a rough few years
PR-wise and has largely lost the public’s trust (even as we continue to
use it every day). When it was announced that Google was developing
software to compile, store and search medical records and that both
companies had signed a business associate agreement under the Health Insurance
Portability and Accountability Act (HIPAA), the goal was clear: Ascension
was going to transfer the health records to the Google Cloud.
But the way consumers found out and the fact that it wasn’t a
transparent process naturally raised some suspicions, especially in the
privacy arena. In fact, these suspicions still exist since most of what
we know comes from anonymous insiders. Those insiders also report
that Ascension employees have expressed concern over some of the
ways Google intends to handle the data, claiming that it is not HIPAA-
compliant. Google denies this.
If we’ve learned anything over the last decade, it’s that secure data
can be hacked and anonymized data can be de-anonymized. So this
raises an important question – what could possibly go wrong? And
we’re about to find out since the partnership has now triggered a
federal inquiry.
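The claim that anonymized data can be de-anonymized is worth unpacking. A minimal sketch of one well-known technique, the linkage attack, is below. The data and names are entirely hypothetical: stripping names from records is not enough when quasi-identifiers (ZIP code, birth date, sex) remain and can be joined against a public dataset that still carries names.

```python
# Toy linkage attack (hypothetical data): "anonymized" health records
# still contain quasi-identifiers that also appear in a public dataset.

# Health records with names removed, quasi-identifiers kept.
health_records = [
    {"zip": "02139", "birth": "1965-07-31", "sex": "F", "diagnosis": "diabetes"},
    {"zip": "02139", "birth": "1972-01-15", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (e.g., a voter roll) with the same quasi-identifiers.
voter_roll = [
    {"name": "A. Smith", "zip": "02139", "birth": "1965-07-31", "sex": "F"},
    {"name": "B. Jones", "zip": "02139", "birth": "1972-01-15", "sex": "M"},
]

def reidentify(records, public):
    """Join the two datasets on (zip, birth, sex): a classic linkage attack."""
    index = {(p["zip"], p["birth"], p["sex"]): p["name"] for p in public}
    return [
        {"name": index.get((r["zip"], r["birth"], r["sex"]), "unknown"),
         "diagnosis": r["diagnosis"]}
        for r in records
    ]

matches = reidentify(health_records, voter_roll)
print(matches)  # each "anonymous" record now carries a name again
```

In practice the joins are noisier and probabilistic, but the principle is the same, which is why simply deleting names from tens of millions of patient records offers weaker protection than it sounds.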
Of course, this is not to say that one must have a Ph.D. in philosophy
to be a tech ethicist. In fact, if academics keep the field to themselves,
it’s unlikely ever to make it out of the so-called Ivory Tower.
The point is that people who “do” ethics need to have rigorous training
and understand the frameworks for ethical decision-making.
Otherwise, ethics turns into a merry-go-round where resolutions are
made on the fly by people who use whatever evidence they want in
order to decide if something is right or wrong.
Deepfakes
In a world where we believe whatever we like to hear, is there any
reliable way to stop the spread of misinformation?
These days, just about anyone can download deepfake software to
create fake videos or audio recordings that look and sound like the real
thing. While there has been a lot of fear surrounding the damage they
might do in the future, so far deepfakes have mostly been limited
to superimposing faces onto pornography or swapping out audio to make it sound
as if politicians are saying something controversial. Because the
Internet is full of fact-checkers, most deepfakes have been outed
almost immediately.
There are two federal bills currently under consideration in the U.S.,
the Malicious Deep Fake Prohibition Act and the DEEPFAKES
Accountability Act. California, New York, and Texas are also attempting
to build state legislation to regulate them. But one wonders, do
politicians understand the technology well enough to regulate it? Will
those laws conflict with First Amendment rights? Is there even a way
to regulate the technology or should we be concentrating on its
weaponization instead?
One way governments are trying to stop the creation and spread of
deepfakes is by regulating social media, the most common platform on
which they are shared. But tech companies have already proved
themselves largely immune to this type of regulation.