Children's Internet Protection Act (CIPA) Ruling by United States District Court For The Eastern District Of Pennsylvania
page 10 of 209 (04%)
the technology of automated classification systems, and the limitations inherent in human review, including error, misjudgment, and scarce resources, which we describe in detail infra at 58-74. One failure of critical importance is that the automated systems that filtering companies use to collect Web pages for classification are able to search only text, not images. This is crippling to filtering companies' ability to collect pages containing "visual depictions" that are obscene, child pornography, or harmful to minors, as CIPA requires. As will appear, we find that it is currently impossible, given the Internet's size, rate of growth, rate of change, and architecture, and given the state of the art of automated classification systems, to develop a filter that neither underblocks nor overblocks a substantial amount of speech.

The government, while acknowledging that the filtering software is imperfect, maintains that it is nonetheless quite effective, and that it successfully blocks the vast majority of the Web pages that meet filtering companies' category definitions (e.g., pornography). The government contends that no more is required. In its view, so long as the filtering software selected by the libraries screens out the bulk of the Web pages proscribed by CIPA, the libraries have made a reasonable choice which suffices, under the applicable legal principles, to pass constitutional muster in the context of a facial challenge.

Central to the government's position is the analogy it advances between Internet filtering and the initial decision of a library to determine which materials to purchase for its print collection. Public libraries have finite budgets and must make