Judging by the headlines alone, you might conclude that technology like digital fingerprint filtering is wreaking havoc on the Internet, instantly taking down online content as copyright infringement and generating a runaway number of ‘false positives.’
“Out of control copyright bots are making a mockery of the DMCA,” proclaims a headline on ExtremeTech. “The Algorithmic Copyright Cops: Streaming Video’s Robotic Overlords” is how Wired.com titled its treatment of the subject.
The headlines allude to a few unfortunate recent incidents in which live-streamed content was erroneously disabled as copyright infringement in progress. A webcast of the Hugo Awards was one victim; Michelle Obama’s speech at the Democratic National Convention was another; and NASA’s broadcast of its Mars rover was a third.
What the (rather hysterical) headlines don’t convey, however, is that in each of these cases the problem was not the underlying technology, but the way that technology was employed by its users.
As Yangbin Wang, the founder and CEO of Vobile, explained to Slate.com, Vobile’s software does not shut down video feeds, nor does it control how clients use it. All Vobile’s software does is report suspected infringements; what happens next is up to the client.
The bottom line has not changed since we addressed this subject on this blog exactly one month ago: software like Vobile’s can only do so much, and one of the things it can’t do (yet, anyway) is serve as a substitute for human judgment when human judgment is what the situation requires. A bot simply cannot identify fair use as reliably as a person can, and we should not ask it to.