FarayiMafotiFirstPaper 11 - 22 Jan 2012 - Main.EbenMoglen
|
|
META TOPICPARENT | name="FirstPaper" |
MY REVISED ESSAY
Your revised essay shouldn't be stacked on top of former drafts. It should replace former drafts. The History facility in the wiki does the job of permitting comparison of versions. This misuse of the wiki form makes actual comparison of versions much more difficult. Please undo it by creating a clean version of your second draft, then another with my comments interlined, then a clean copy of your next version, each saved on top of the other, so that the history shows correct diffs.

GOOGLE, GIVE US A PEEK
Google Inc has not cooked its search results to favor its own products and listings, Executive Chairman Eric Schmidt told a U.S. Senate hearing looking into whether the search giant abuses its power. Members of the Senate Judiciary Committee's antitrust panel said last September that Google had grown into a dominant and potentially anti-competitive force on the Internet. This hearing should come as no surprise to anyone who has been following Google’s ongoing squabbles with the FTC and the EC. Practically every player in the digital economy is gunning for Google these days, with some accusing Google of operating a “black box” algorithm that lacks transparency or accountability. Others say Google stacks the deck against rival services, such as maps or shopping services, when it displays its own affiliated sites or content prominently in search results.
Nonsense. All of this is merely a thin crust of complaining on top of an immense reservoir of not doing anything. You haven't responded to the basic criticism of the last draft, which is that you're conflating self-promotion by legislators and regulators with actual governmental activity, of which there isn't any, for obvious reasons you don't mention.

THE ARGUMENT FOR TRANSPARENCY

“Search neutralists,” as they call themselves, articulate their argument against Google as follows: If search engines have become an undisputed gateway to the Internet, and are now arguably as essential a component of its infrastructure as the underlying network itself, does that not create a basis on which to argue for algorithmic transparency?
If that's the question being asked, the answer is simple: no. Among other reasons is the existence of the First Amendment. I don't know whether all "search neutralists" are incompetent morons, or only the ones who teach on the Columbia Law faculty, but if there is something intelligent enough to be worth writing an essay about, this question isn't it.
Given that Google, the overwhelmingly dominant search engine, can apparently assert full and undisclosed editorial control of what content you see and what you don’t, does it follow that this endangers the fundamental openness of the internet?
Of course not. Why would it? Google is just one method for searching the web. Most of us use multiple other methods, whether we know it or not, and there's an immense, deeply-funded competitor pressing the Google results model everywhere on earth every second of every day. You'd have to be making up both facts and law as you go along to believe there's any energy available in that question. This was the problem that needed to be addressed after draft one, and despite arguing with me in the comments and writing another supposedly-responsive draft, you still haven't laid a glove on it.

THE ARGUMENT FOR TRANSPARENCY WILL BE IGNORED BY THE COMMON HERD
Disregarding transparency’s obvious problems with execution ((1) the more transparent the algorithm, the more vulnerable it is to gamesmanship by spammers or, worse, the greater the chance of the algorithm being rendered useless; (2) if the algorithm is transparent to regulators, they are unlikely to adapt fast enough to spur innovation), the concept is only worthwhile to prosumers, not consumers, and it is vital to remember that antitrust law, at least in theory, is supposed to be about protecting consumers. All consumers see is the supposedly objective final results, not the intervention by the gatekeeper. Unless the search manipulation is drastic (i.e. no relevant result appears), corrupted results are an “unknown unknown” and so no one cares. People will continue to see search as a credence good, whose value is difficult to determine even after consumption.
Or an approximation which is sufficient for their present purposes, whatever those purposes are. If they want another approximation, Bing is delighted to provide one. Depending on what you're looking for, and what you're looking at it on, either one (or a third engine) may be the "best" choice, though there is no reason to suppose that an actual optimum exists where any search with a significant number of results is conducted.

A PROSUMER-INSPIRED SEARCH ARCHITECTURE
Transparency’s relation to prosumers: The prosumer campaigns for a system that allows a visitor to conduct any or all of three types of search task: develop information, compare options, and find where to execute transactions.

Why is a prosumer different from a consumer in this respect? You haven't actually made any use of the idea of the prosumer, and you've missed the point involved in my suggesting the importance of our own acts in building the web in consequence.

Algorithmic opacity would not be ideal for the prosumer because the prosumer, the active, tech-savvy customer who gains information from digital media or online, and who interprets and influences mass consumers in terms of lifestyle and brand choices, desires increased facility with the technology in order to maximize his ability to engage critically with it and collaborate with others.

So what?

Collaboration or federation is of value to the prosumer. Currently, web search engines like Google function as weak federation mechanisms, either by bringing up relevant web pages for user queries or via directories of related sites. A federated architecture, however, would offer a single point of entry allowing users to employ specific applications optimized for their searches.
Huh? There's nothing to prevent people from wrapping the results of simultaneous searches among the competing engines in results-rankers of their own devising. I often use a tool that does simultaneous Bing and Google searches, combines the two sets, and then throws away almost all the information each of them provided in order to give me what I want. I don't have to care what the algorithms are that either engine used. All they did was dig raw material out of the Web for me, and I processed it myself. The union of everything produced by Google and everything produced by Bing, reselected and sorted by what I want to prioritize, is easy for me to make and entirely eliminates whatever "anti-competitive" effects you think you could discover in either mega-engine's behavior, for some tiny number of searches in some tiny number of ways. A little technical thinking and some prototyping could probably have prevented you from wasting time on this blind alley of thinking, as it would help the law professors who mumble about this stuff all the time without knowing shit about it. But because they don't know shit, their chances of figuring anything out are tiny, and they never learn anything from anybody else, because they're so smart they don't need to listen to anyone except themselves. You do not want to follow their example.
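By way of illustration only, a minimal sketch of such a wrapper might look like the following. The fetch_google and fetch_bing functions are placeholders for whatever actually supplies results (an API, a scraper, a saved result set), not any engine's real interface; the point is the merge-and-rerank step, which needs no knowledge of either engine's algorithm.

<verbatim>
# A minimal sketch of a do-it-yourself meta-search wrapper.
# Assumption: fetch_google() and fetch_bing() are placeholders standing in
# for whatever method (API, scraper, local index) actually returns results;
# each result is (url, title, snippet). Only the merging and re-ranking
# below illustrate the point made above.

def fetch_google(query):
    return []  # placeholder: supply your own source of results

def fetch_bing(query):
    return []  # placeholder: supply your own source of results

def my_score(result, preferred_terms):
    """Score a result by the user's own criteria, ignoring engine rank."""
    url, title, snippet = result
    text = (title + " " + snippet).lower()
    score = sum(text.count(term) for term in preferred_terms)
    if url.startswith("https://"):
        score += 1  # example of a purely personal preference
    return score

def combined_search(query, preferred_terms):
    # Union of both engines' output, de-duplicated by URL...
    seen, merged = set(), []
    for result in fetch_google(query) + fetch_bing(query):
        url = result[0]
        if url not in seen:
            seen.add(url)
            merged.append(result)
    # ...then sorted by what the user wants to prioritize,
    # not by either engine's algorithm.
    return sorted(merged, key=lambda r: my_score(r, preferred_terms),
                  reverse=True)

print(combined_search("federated search", ["federated", "protocol"]))
</verbatim>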
To be clear, the emerging paradigm is based on the combination of a multi–domain query approach with the integration of heterogeneous data sources capable of scouring the deep Web.
Well, then, why bother writing the essay, inasmuch as there's no need whatever for "transparency" in order to enter into this "emerging paradigm"?

FOOD FOR THOUGHT: ONE IMPLICATION OF A FEDERATED SEARCH ARCHITECTURE
Conceptualizing a prosumer-ideal search architecture, or as Professor Moglen puts it, “a system of federated search technology, in which we all do searching for one another in some fast and efficient manner,” can prove difficult, however, for a number of reasons, not least of which is that there would need to be a revenue mechanism different from the “pay-per-click” method that we are accustomed to.

No, it is, not "can be," difficult, for a simpler reason: we don't know how. This has nothing to do with "revenue mechanisms." If we knew how to federate search, we wouldn't need "revenue mechanisms," any more than we need "revenue mechanisms" for Wikipedia, or free software. We need to know, as a technical matter, how to federate search. If you already know that, and all you need is a revenue model, you should write an essay about that. Many people, including me, will be immensely impressed.
Existing revenue sharing agreements between search engines and publishers, where each receives a fixed share of the profit, would no longer be feasible. Consider Google’s model: once a user clicks on a sponsored link, the search engine receives the payment from the corresponding advertiser and gives part of this to the publisher.

What have sponsored links got to do with it? In a federated search model, there wouldn't be any.
The payment ratio of the search engine is defined by a commercial contract, existing independently of the specific search. When it comes to federated search, however, the contracts between the publisher and the domain-specific search engines must account for the fact that each search engine plays a role in generating the final results. In order for there to be a disciplined way to estimate the search value of each domain-specific search engine, the monetization must be performed after the ranking. This would help avoid issues of gamesmanship around the domain-specific engines (often a search engine may want to bid strongly on, or decrease its bid on, a query for purely economic reasons).
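As a toy illustration of what "monetization performed after the ranking" could mean, the sketch below attributes the revenue from a click to the domain-specific engines only once the merged ranking is fixed, in proportion to their presence in the results the user actually saw. The weighting rule, the numbers, and the function names are invented for illustration; nothing here describes an existing system.

<verbatim>
# Toy sketch of post-ranking monetization in a federated search setting.
# Assumptions (not from the essay): each merged result records which
# domain-specific engine supplied it; a click at rank k earns revenue R;
# engines are credited in proportion to their presence near the top of the
# final ranking, computed only AFTER the ranking is fixed, so an engine
# cannot buy a larger share by bidding on the query itself.

from collections import defaultdict

def attribute_revenue(ranked_results, clicked_rank, revenue):
    """ranked_results: list of (url, source_engine) in final display order."""
    weights = defaultdict(float)
    # Credit every engine whose results appear at or above the clicked rank,
    # discounting lower positions (a simple 1/rank weighting, chosen arbitrarily).
    for rank, (_url, engine) in enumerate(ranked_results[:clicked_rank + 1], start=1):
        weights[engine] += 1.0 / rank
    total = sum(weights.values())
    return {engine: revenue * w / total for engine, w in weights.items()}

ranking = [("a.example", "travel-engine"),
           ("b.example", "shopping-engine"),
           ("c.example", "travel-engine")]
print(attribute_revenue(ranking, clicked_rank=2, revenue=0.40))
# Shares are fixed only after the ranking is, per the proposal above.
</verbatim>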
This is all irrelevant to what would happen if we had a federated system for building back-links in the Web.

I'm not sure you've focused clearly enough on what search engines do. Perhaps you should begin from considering an alternate universe, in which Tim Berners-Lee had chosen a double-linked instead of single-linked architecture for the CERN system that became the Web. There would have been an even more serious difficulty with a Web made double-linked, which you'll spot when you think about it, but the problem of search would be different. Or you can imagine the Web in terms of the history of Lisp: what you have to do to recover from the drawbacks of using the single-linked list as the primitive data type in a computer language. Or you could take a look at the "searching" half of the third volume of Donald Knuth's work of genius _The Art of Computer Programming_, and ruminate on the necessary structures for the World Wide Web that would make searching trivial, and then consider why we don't switch to them. In any event, until you separate the technical problem of search from the social opportunity to address the primary problem with 20th-century mass advertising, it's unlikely that you're going to write anything about the union of the two forces with free software, which created the entity Google, and the Web you think you know. I went through all of this in class, not thoroughly enough to displace the resulting essay of yours, but enough to have explained already the difficulties in argument that this revision does not yet address.
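One way to make the single-link point concrete: in the Web as built, a page records only its outgoing links, so the incoming links that ranking depends on have to be reconstructed by crawling the whole graph, which is essentially what a search engine's index does. The sketch below inverts a toy forward-link map to recover back-links; in a hypothetical double-linked Web that index would exist natively.

<verbatim>
# Illustration (not from the essay): recovering back-links from a
# single-linked Web. forward_links maps each page to the pages it links to,
# which is all the Web's own architecture records. Search engines rebuild
# the reverse map by crawling; a double-linked design would have kept it.

def build_backlinks(forward_links):
    backlinks = {page: set() for page in forward_links}
    for source, targets in forward_links.items():
        for target in targets:
            backlinks.setdefault(target, set()).add(source)
    return backlinks

toy_web = {
    "home.example": ["wiki.example", "shop.example"],
    "wiki.example": ["home.example"],
    "shop.example": ["wiki.example"],
}

for page, sources in build_backlinks(toy_web).items():
    print(page, "is linked from", sorted(sources))
</verbatim>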
Google's Algorithmic Cat and Mouse Game: The Case against Greater Transparency

-- By FarayiMafoti - 21 Oct 2011
|
FarayiMafotiFirstPaper 4 - 29 Nov 2011 - Main.FarayiMafoti
|
|
META TOPICPARENT | name="FirstPaper" |
| |
THESE ARE MY COMMENTS TO PROFESSOR MOGLEN'S COMMENTS. A REVISED ESSAY WILL BE SUBMITTED.

Google's Algorithmic Cat and Mouse Game: The Case against Greater Transparency
I was initially motivated to write this paper after the Senate antitrust committee threatened to subpoena Eric Schmidt as a means of forcing his appearance before the committee. The Senate, likely aware of (1) the reluctance on the part of companies to open up their executives to interrogation when there are federal investigations under way and (2) the serious legal consequences confronting the government if there is any indication that the probes are not the result of good-faith independent judgment, still went ahead with such an aggressive tactic, which sparked my interest. By “scrutiny,” I am merely referring to the existence of the various probes that you have referred to, without making any judgment about the likelihood of an actual case. Perhaps I’ll just get the ball rolling with a neutral reference to the September hearing so as to avoid any confusion.

Many have argued that Google tweaks its algorithm in ways that push down certain sites in search results that compete for eyeballs, allegedly because these sites have unoriginal content. Disgruntled site owners have cited Google giving Yelp the “TripAdvisor treatment” in support of their accusations. Yelp CEO Jeremy Stoppelman, when questioned about Google Places, remarked:
"Google’s position is that we can take ourselves out of its search index if we don’t want them to use our reviews on Places…. But that is not an option for us, and other sites like us – such as TripAdvisor? – as we get a large volume of our traffic via Google search…We just don’t get any value out of our reviews appearing on Google places and haven’t been given an option other than to remove ourselves from search, how to improve this situation." | | might differ), there is no appropriate mode of analysis that begins
by confusing them. | |
I’ll delete everything from “Disgruntled…” onwards, as I am only concerned with the former issue.

The "Everyone Wins" Artifice
Google’s competitors have bolstered their campaign against Google’s black box algorithms using the pretext of “consumer welfare” – which is defined almost exclusively in terms of consumers reaping the benefits of a fair marketplace that produces real innovation, presumably through merit-based competition.
If search engines have become an undisputed gateway to the Internet, and are now arguably as essential a component of its infrastructure as the underlying network itself, does that not create a basis on which to argue for algorithmic transparency? In other words, if we accept that Google, the overwhelmingly dominant search engine, can assert full and undisclosed editorial control of what content you see and what you don’t, could you not deduce, as Foundem and other search neutralists have in the past, that this endangers the fundamental openness of the internet? I am simply trying to give Google’s critics a leg to stand on here, even though I believe that public-utility-style regulation of a digital economy is pointless.
I agree with your comment about consumers. All users see is the supposedly objective final results, not the intervention by the gatekeeper. Unless the search manipulation is drastic (i.e. no relevant result appears), corrupted results are an “unknown unknown” and so no one cares. People will continue to see the search as a credence good, whose value is difficult to determine even after consumption.
Prosumers: I suppose one could argue that unless the search technology is bereft of individualized advertising, a federated search architecture, with its implications of democratizing the media towards participatory systems, would only spur the commodification of human creativity. The more users make use of advertisement-based free online platforms and the more time they spend online producing and exchanging content, the higher the advertisement prices will rise and the higher the prosumer’s value will become as a commodity. The people who “google” data constitute a user commodity that is sold to advertisers; unlike with radio and television, search users would be much more active, continually creating content, which is likewise commodified (exploited?). I do not see any other way to avert this problem without doing away with the advertisement space altogether, à la Linux or Wikipedia, wherein the platform has use value and no exchange value. But then what would make federated search any different from a P2P network?
An Overview of Online Search
Google's sponsored links are produced for businesses interested in advertising and willing to pay Google when users click on their ads. Advertisements are generated by the keywords a user enters into Google's search engine. The amount that Google charges for sponsored links is calculated according to a keyword auction conducted through Google's AdWords platform. These auctions are automated based on a set of parameters specified by each advertiser, and they occur instantaneously each time a keyword is entered into Google's search engine. An advertiser who places a higher bid for a keyword will receive better placement of its advertisements when a user enters that keyword as part of his search. Additionally, Google employs an innovative quality metric that adjusts the placement and cost to the advertiser of sponsored links based on the links' relevance to the search query and the quality of the underlying webpage.
The heart of the dominant theory of Sherman Act Section 2 liability against Google relates to Google's use of quality scoring in influencing the outcome of its AdWords auctions. The quality score employs advanced algorithmic technology to maximize the relevance of search results and thus the value of the search engine to users, the likelihood of revenue-producing impressions to advertisers, and revenue to Google. Allegations of anticompetitive conduct surrounding the quality score turn less on its existence – all major search engines use quality scores to improve the relevance of search results – and more on its arcane nature. The specific determinants of quality scores are kept hidden by design, and Google has repeatedly justified this opacity with the obvious axioms: (a) no company wants to share its secret formulas with its competitors; (b) by making the ranking formulas too accessible, it would be easier for people to game the system.
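As a rough illustration of how a quality metric can change both placement and price, the sketch below uses the textbook form of a generalized second-price auction with quality scores: ads are ranked by bid times quality, and each winner pays roughly the minimum needed to keep its position. The bids, scores, and pricing rule are assumed for illustration; they are not Google's actual formula, which is precisely what is kept opaque.

<verbatim>
# Simplified generalized second-price auction with quality scores.
# Illustrative only: the bids, quality scores, and pricing rule are assumed;
# the point in the text is precisely that the real formula is not public.

def run_auction(ads):
    """ads: list of (advertiser, bid_per_click, quality_score)."""
    # Placement is determined by bid weighted by quality, not bid alone.
    ranked = sorted(ads, key=lambda a: a[1] * a[2], reverse=True)
    results = []
    for i, (name, bid, quality) in enumerate(ranked):
        if i + 1 < len(ranked):
            next_name, next_bid, next_quality = ranked[i + 1]
            # Pay just enough to outrank the next ad (a common textbook rule).
            price = round((next_bid * next_quality) / quality + 0.01, 2)
        else:
            price = 0.05  # assumed reserve price for the last slot
        results.append((name, price))
    return results

ads = [("shoes-r-us", 2.00, 0.4),      # high bid, low-quality page
       ("runners.example", 1.20, 0.9),
       ("bargain-ads", 0.80, 0.6)]
print(run_auction(ads))
# runners.example outranks the higher bidder because of its quality score.
</verbatim>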
Here’s what I think the roadmap of my paper should be, so as to address both search manipulation and the advertisement space:
1. Critics argue that Google decides what counts as relevant, and therefore where things show up in search results, and they accuse Google of operating a “black box” that lacks transparency or accountability. If this accusation is true, what’s the remedy?
2. One possible solution is to establish an independent search commission that would monitor search results in the interest of search neutrality.
3. Antitrust law, however, is about protecting consumers. This proposed solution overlooks the fact that transparent search would be valuable only to prosumers, not consumers. (At this point I will have stopped entertaining the critics’ ideas and moved on to a few implications of federated search with respect to advertising.)
Given that this template would change my entire discussion, I wanted to submit my comments first.
| |
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable. |
|
FarayiMafotiFirstPaper 3 - 06 Nov 2011 - Main.EbenMoglen
|
|
META TOPICPARENT | name="FirstPaper" |
It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted. | |
Google, Give us a Peak | |
< < | Google is now the subject of antitrust scrutiny, having been accused of leveraging its dominance in the “"keyword targeted internet advertising" space (consider this attempt number 1 at defining the relevant market here) to favor its own services to the detriment of a competitive market for the purpose of marginalizing rival companies. Many have argued that Google tweaks its algorithm in ways that push down certain sites in search results that compete for eyeballs, allegedly because these sites have unoriginal content. Disgruntled site owners have cited Google giving Yelp the “TripAdvisor treatment” in support of their accusations. Yelp CEO Jeremy Stoppelman, when questioned about Google Places, remarked: | > > | Google is now the subject of antitrust scrutiny, having been accused of leveraging its dominance in the “"keyword targeted internet advertising" space (consider this attempt number 1 at defining the relevant market here) to favor its own services to the detriment of a competitive market for the purpose of marginalizing rival companies.
Depends on your
definition of "scrutiny." This subject is much discussed by
theorists in what used to be thought of as "law reviews," and the FTC
engages in occasional throat-clearing. But it was apparent to all
knowledgeable observers even before Christine Varney departed US DoJ
for Cravath that there will be no serious dust-up between this
Administration and Google during this or any second term.
Similarly, the EC has no political stomach for, and precious little
jurisdictional armament with which to maintain, such a prolonged
confrontation.
Many have argued that Google tweaks its algorithm in ways that push down certain sites in search results that compete for eyeballs, allegedly because these sites have unoriginal content. Disgruntled site owners have cited Google giving Yelp the “TripAdvisor treatment” in support of their accusations. Yelp CEO Jeremy Stoppelman, when questioned about Google Places, remarked: | | "Google’s position is that we can take ourselves out of its search index if we don’t want them to use our reviews on Places…. But that is not an option for us, and other sites like us – such as TripAdvisor? – as we get a large volume of our traffic via Google search…We just don’t get any value out of our reviews appearing on Google places and haven’t been given an option other than to remove ourselves from search, how to improve this situation." | |
> > | These are two completely
different questions being conflated. At the beginning of the
paragraph you are discussing the supposed unfairness of Google's
algorithmic rankings. Later you are reflecting on a completely
different issue: whether a site operator has some reason to object
when material that is searchable to Google spiders--and to the entire
rest of the universe on the same terms--is re-aggregated by Google in
contexts that bring no immediate monetary advantage to the site
operator whose page was searched. Whatever analysis is appropriate
to each of these questions (about which I suspect in both cases we
might differ), there is no appropriate mode of analysis that begins
by confusing them. | | The "Everyone Wins" Artifice
Google’s competitors have bolstered their campaign against Google’s black box algorithms using the pretext of “consumer welfare” – which is defined almost exclusively in terms of consumers reaping the benefits of a fair marketplace that produces real innovation presumably through merit-based competition. | |
> > | This sentence doesn't
make any actual sense. I'm not aware, at the moment, of any search
engine operator proposing to establish service on the basis of
transparently published ranking algorithms. If there were such an
operator, its argument in favor of transparency would not be to
consumers, except in the general sense that all propaganda in a
society where consumption represents the predominant share of GDP is
conducted in the meaningless bullshit vocabulary of "consumer
welfare." Slight analysis reveals that it's not the consumer of
search results who cares whether the algorithm is transparent. The
very people who think that free software is unimportant because the
bulk of ordinary PC consumers don't use it should see at once why
this argument is nonsense. Transparent search algorithms are
valuable to prosumers of search: that is, to the creation of a
system of federated search technology, in which we all do searching
for one another in some fast and efficient manner we haven't devised
yet, and no party in the world knows what all the rest of us are
thinking about.
But such a system, in which we spread search out throughout the Net,
deconcentrating it, would remove the power over advertising that is
held by a centralized search engine that knows what lots of people
are looking for right now. In other words, the real competition in
search is not between parties centralizing search to control
advertising, for each of whom ranking algorithms that enable auction
markets are quite legitimately and necessarily hypervaluable trade
secrets, but between the architecture of centralized search and the
architecture of federated search. As I've already explained in
class, centralized search architecture—which makes Google,
Bing, and almost all the other known variants—has what at
present seems an insurmountable lead. The one-way link design of the
Web is apparently overwhelmingly in its favor. And no one has yet
built a model of successful, general, instantaneous, Web-wide
federated search, so there is not even an unviable competitor to the
reigning architecture.
On a twenty-year time scale, however, that's not true. The Web is
less than 7,000 days old now. When it is 15,000 days old, federated
search will be the dominant architecture, and ranking algorithms will
be transparent, just as operating system kernel implementations are
becoming transparent, because they too will be free software.
That has important implications for the future of advertising, and
the distribution of economic power, that are probably the big story
here. But you can't find them if you don't look for them.
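For illustration only, a toy Python sketch of what "searching for one another" might mean appears below: each peer answers a query from its own small index and passes the query along to the peers it knows, so no single operator sees the whole stream of queries. The peer names, the index format, and the hop-limited flooding are assumptions invented for the sketch, not a protocol anyone has actually specified.

<verbatim>
class Peer:
    # Toy federated-search node: answers from a local index and fans the
    # query out to neighbours. Purely illustrative, not a real protocol.

    def __init__(self, name, local_index):
        self.name = name
        self.local_index = local_index  # {url: text}
        self.neighbours = []

    def search(self, query, ttl=2, seen=None):
        seen = set() if seen is None else seen
        if self.name in seen or ttl < 0:
            return []
        seen.add(self.name)
        # Local matches first; no central party ever sees the query.
        hits = [(url, self.name) for url, text in self.local_index.items()
                if query.lower() in text.lower()]
        # Then ask neighbours, with a hop limit so the flood terminates.
        for peer in self.neighbours:
            hits.extend(peer.search(query, ttl - 1, seen))
        return hits

if __name__ == "__main__":
    alice = Peer("alice", {"http://a.example/recipes": "sourdough recipes"})
    bob = Peer("bob", {"http://b.example/bread": "notes on sourdough starters"})
    carol = Peer("carol", {"http://c.example/law": "an antitrust reading list"})
    alice.neighbours = [bob]
    bob.neighbours = [carol]
    for url, source in alice.search("sourdough"):
        print(f"{url} (found via {source})")
</verbatim>

The sketch also shows the cost: the query is flooded, so latency and duplicated work grow with the size of the network, which is one concrete way of stating the lead that centralized indexing currently enjoys.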
| | An Overview of Online Search
Google's sponsored links are produced for businesses interested in advertising and willing to pay Google when users click on their ads. Advertisements are generated by the keywords a user enters into Google's search engine. The amount that Google charges for sponsored links is calculated according to a keyword auction conducted through Google's AdWords? platform. These auctions are automated based on a set of parameters specified by each advertiser, and they occur instantaneously each time a keyword is entered into Google's search engine. An advertiser who places a higher bid for a keyword will receive better placement of its advertisements when a user enters that keyword as part of his search. Additionally, Google employs an innovative quality metric that adjusts the placement and cost to the advertiser of sponsored links based on the links' relevance to the search query and the quality of the underlying webpage.
The heart of the dominant theory of Sherman Act, Section 2 liability against Google relates to Google's use of quality scoring in influencing the outcome of its AdWords? auctions. The quality score employs advanced algorithmic technology to maximize the relevance of search results and thus the value of the search engine to users, the likelihood of revenue-producing impressions to advertisers, and revenue to Google. Allegations of anticompetitive conduct surrounding the quality score turn less on its existence--all major search engines use quality scores to improve the relevance of search results—and more on its arcane nature. The specific determinants of quality scores are kept hidden by design and Google has repeatedly justified this opacity with the obvious axioms: (a) No company wants to share its secret formulas with its competitors; (b) by making the ranking formulas too accessible, it would be easier for people to game the system. | |
> > | There's no "dominant
theory" of Google's antitrust liability because there's no viable
theory. Not one serious party, public or private, has decided to
make the vast bet required to test any theory, because there is no
theory that demonstrates both harm and causation sufficient to invoke
legal process by any party with a claim to
propose.
The other problem here is that you aren't discussing ranking
algorithms anymore, you're discussing advertising placement
algorithms for sponsored links. These are not the same things, not
the same algorithms, and not the same analytic issues with regard to
transparency. You need to be clear from the beginning that you're
not asking anything about search, the purpose of which is to find
things in the web that the reader wants to see. You're asking about
advertising placement, which is the algorithm matching stuff that
wants to be seen to people who could be induced to click on it
whether they actually want it or not. There is no analytic case
whatever for transparency of the latter algorithms, except the
general belief that people should be able to study the software they
interact with, which is too weak a claim to overset the obvious
operator interest in trade secrecy. | | What a Transparent Google Might Look Like
Some argue that now is the time for Google to admit that its power in the ecosystem is so great that the company owes it to the rest of the ecosystem to become more transparent in how it ranks sites. One outcome of this arrangement would be the creation of an independent, transparent quality-scoring system. This system would simply rate pages and search results and publish a Klout-like score. The details of the report would be open for everyone to see and scrutinize. Popular dot-com-era entrepreneur and blogger Jason Calacanis has also called for Google to provide a calendar of algorithm updates and to inform the ecosystem of its plans. | |
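As a rough illustration of what such a published, auditable score could amount to, here is a minimal Python sketch in which both the input signals and their weights are public, so any third party can recompute the number. The signal names and weights are hypothetical, not those of any actual rating body.

<verbatim>
# Illustrative only: the weights below are published, so anyone can
# recompute a page's score from disclosed signals. The signal names and
# values are hypothetical, not any real system's.
PUBLIC_WEIGHTS = {
    "original_content_ratio": 0.5,      # share of content not copied from elsewhere
    "page_speed": 0.2,                  # normalized load-time measure, 0..1
    "independent_inbound_links": 0.3,   # normalized count of unaffiliated inbound links
}

def published_quality_score(signals):
    # Weighted sum of disclosed signals, scaled to a 0-100 "Klout-like" score.
    raw = sum(weight * signals.get(name, 0.0)
              for name, weight in PUBLIC_WEIGHTS.items())
    return round(100 * raw, 1)

print(published_quality_score({
    "original_content_ratio": 0.8,
    "page_speed": 0.9,
    "independent_inbound_links": 0.4,
}))  # prints 70.0
</verbatim>

Whether site owners and search engines would ever submit to such a body is a separate question; the sketch only shows that "transparent scoring" has a concrete technical meaning.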
> > | But that's about search
ranking. The case is weak, but you're not discussing it. Are you?
The confusion on technical points is harmful even to basic coherence
at this point. | | The Case for Pandora’s Box
From an antitrust perspective, no business, even a monopolist (assuming that Google is one and that it has refused to deal, forgoing short-term profits), has an antitrust duty to reveal to competitors the formulas that it uses to set prices (see http://newscenter.berkeley.edu/2011/06/07/digital-democracy/ - the quest for transparency on the internet in general is a red herring). Moreover, courts are reluctant to intervene on the basis of complaints about product design by rivals because of the presumption that such intervention will chill innovation. |
|
FarayiMafotiFirstPaper 2 - 24 Oct 2011 - Main.FarayiMafoti
|
|
META TOPICPARENT | name="FirstPaper" |
It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted. | | From the User’s Perspective | |
< < | The problem for Google is not transparency but rather making us feel that it is transparent. Google undoubtedly has some perverse incentives in the ranking process with their own content properties and that alone leaves us wanting for some transparency in the process, although Google owes us nothing. As end users, we are attuned to the belief that an open internet is a desirable internet. It is this openness however, that creates a spam economy and eventually, an unusable web. If anything, an opaque, ever changing algorithm allows Google to be one step ahead of the spammers, even if Google profits from some of the spam (i.e. sponsored links). I acknowledge that the logical outcome of this argument is dangerous, however: we essentially would have to rely on Google to make things more transparent, more open, and more independent (note: the user-controlled, transparent quality scoring system mentioned earlier would admittedly be appealing at this stage of the argument even though it would nibble away at the Google’s ability to do as it sees fit with its essential intellectual property). | > > | The problem for Google is not transparency but rather making us feel that it is transparent. Google undoubtedly has some perverse incentives in the ranking process with their own content properties and that alone leaves us wanting for some transparency in the process, although Google owes us nothing. As end users, we are attuned to the belief that an open internet is a desirable internet. It is this openness however, that creates a spam economy and eventually, an unusable web. If anything, an opaque, ever changing algorithm allows Google to be one step ahead of the spammers, even if Google profits from some of the spam (i.e. sponsored links). I acknowledge that the logical outcome of this argument is dangerous, however: we essentially would have to rely on Google to make things more transparent, more open, and more independent (note: the user-controlled, transparent quality scoring system mentioned earlier would admittedly be appealing at this stage of the argument even though it would nibble away at Google’s ability to do as it sees fit with its essential intellectual property). | | |
|
FarayiMafotiFirstPaper 1 - 21 Oct 2011 - Main.FarayiMafoti
|
|
> > |
META TOPICPARENT | name="FirstPaper" |
It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted.
Google's Algorithmic Cat and Mouse Game: The Case against Greater Transparency
-- By FarayiMafoti - 21 Oct 2011
Google, Give us a Peak
Google is now the subject of antitrust scrutiny, having been accused of leveraging its dominance in the “"keyword targeted internet advertising" space (consider this attempt number 1 at defining the relevant market here) to favor its own services to the detriment of a competitive market for the purpose of marginalizing rival companies. Many have argued that Google tweaks its algorithm in ways that push down certain sites in search results that compete for eyeballs, allegedly because these sites have unoriginal content. Disgruntled site owners have cited Google giving Yelp the “TripAdvisor treatment” in support of their accusations. Yelp CEO Jeremy Stoppelman, when questioned about Google Places, remarked:
"Google’s position is that we can take ourselves out of its search index if we don’t want them to use our reviews on Places…. But that is not an option for us, and other sites like us – such as TripAdvisor? – as we get a large volume of our traffic via Google search…We just don’t get any value out of our reviews appearing on Google places and haven’t been given an option other than to remove ourselves from search, how to improve this situation."
The "Everyone Wins" Artifice
Google’s competitors have bolstered their campaign against Google’s black box algorithms using the pretext of “consumer welfare” – which is defined almost exclusively in terms of consumers reaping the benefits of a fair marketplace that produces real innovation presumably through merit-based competition.
An Overview of Online Search
Google's sponsored links are produced for businesses interested in advertising and willing to pay Google when users click on their ads. Advertisements are generated by the keywords a user enters into Google's search engine. The amount that Google charges for sponsored links is calculated according to a keyword auction conducted through Google's AdWords? platform. These auctions are automated based on a set of parameters specified by each advertiser, and they occur instantaneously each time a keyword is entered into Google's search engine. An advertiser who places a higher bid for a keyword will receive better placement of its advertisements when a user enters that keyword as part of his search. Additionally, Google employs an innovative quality metric that adjusts the placement and cost to the advertiser of sponsored links based on the links' relevance to the search query and the quality of the underlying webpage.
The heart of the dominant theory of Sherman Act, Section 2 liability against Google relates to Google's use of quality scoring in influencing the outcome of its AdWords? auctions. The quality score employs advanced algorithmic technology to maximize the relevance of search results and thus the value of the search engine to users, the likelihood of revenue-producing impressions to advertisers, and revenue to Google. Allegations of anticompetitive conduct surrounding the quality score turn less on its existence--all major search engines use quality scores to improve the relevance of search results—and more on its arcane nature. The specific determinants of quality scores are kept hidden by design and Google has repeatedly justified this opacity with the obvious axioms: (a) No company wants to share its secret formulas with its competitors; (b) by making the ranking formulas too accessible, it would be easier for people to game the system.
What a Transparent Google Might Look Like
Some argue that now is the time for Google to admit that its power in the ecosystem is so great that the company owes it to the rest of the ecosystem to become more transparent in how it ranks sites. One outcome of this arrangement would be the creation of an independent, transparent, quality scoring system. This system would simply rate pages and search results and publish a Klout-like score. The details of the report would be open for everyone to see and scrutinize. Popular dot-com era entrepreneur and blogger Jason Calacanis has also called for Google to provide a calendar of algorithm updates and the informing of the ecosystem of its plans.
The Case for Pandora’s Box
From an antitrust perspective, no business, even a monopolist (assuming that Google is one and Google has refused to deal in lieu of short-term profits), has an antitrust duty to reveal to competitors formulas that it uses to set prices (see http://newscenter.berkeley.edu/2011/06/07/digital-democracy/ - the quest for transparency on the internet in general is a red herring). Moreover, courts are skeptical to intervene on the basis of complaints about product design by rivals because of the presumption that such intervention will chill innovation.
From the User’s Perspective
The problem for Google is not transparency but rather making us feel that it is transparent. Google undoubtedly has some perverse incentives in the ranking process with their own content properties and that alone leaves us wanting for some transparency in the process, although Google owes us nothing. As end users, we are attuned to the belief that an open internet is a desirable internet. It is this openness however, that creates a spam economy and eventually, an unusable web. If anything, an opaque, ever changing algorithm allows Google to be one step ahead of the spammers, even if Google profits from some of the spam (i.e. sponsored links). I acknowledge that the logical outcome of this argument is dangerous, however: we essentially would have to rely on Google to make things more transparent, more open, and more independent (note: the user-controlled, transparent quality scoring system mentioned earlier would admittedly be appealing at this stage of the argument even though it would nibble away at the Google’s ability to do as it sees fit with its essential intellectual property).
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.
|
|