Netflix

- NASDAQ:NFLX
Last Updated 2024-04-23

Patent Grants Data

Patents granted to organizations.
Ticker Symbol: nasdaq:nflx
Entity Name: Netflix
Publication Date: Apr 26th, 2022 12:00AM
Filing Date: Jun 17th, 2020 12:00AM
Patent ID: https://www.uspto.gov?id=US11317150-20220426
Invention Title: Video blurring systems and methods
Patent Number: 11317150
Number of Claims: 20

Abstract

The disclosed computer-implemented method includes determining that an image is to be blurred. The image has multiple pixels arranged along horizontal and/or vertical axes. The method next includes identifying a boundary size for a sliding window within which pixel values are to be sampled from the image and sampling, from pixels that lie on an axis that is diagonal relative to the horizontal/vertical axis of the image, various pixel values from within the boundary of the sliding window. The pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern. The method further includes performing an initial convolution pass on the pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels, and then presenting the blurred image. Various other methods, systems, and computer-readable media are also disclosed.

Claims

1. A computer-implemented method comprising: determining, by a hardware processor of an electronic device, that at least a portion of an image is to be blurred, the image including a plurality of pixels arranged along at least one of a horizontal axis or a vertical axis; identifying a boundary size for a sliding window within which pixel values are to be sampled from the image, the sliding window including a plurality of different characteristics including window size, shape, placement, or rotation; sampling, from pixels that lie on an axis that is diagonal relative to at least one of the horizontal axis of the image or the vertical axis of the image, one or more pixel values from within the boundary of the sliding window, wherein the plurality of characteristics of the sliding window are dynamically changeable at each pixel sampled during the sampling, and wherein the pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern; identifying one or more computing resources of the electronic device including at least the hardware processor; performing an initial convolution pass on one or more pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels, wherein the blurring is performed at a variable quality level that is dynamically determined based on the identified computing resources of the electronic device; and presenting the image, at least a portion of which is blurred as a result of the initial convolution pass.

2. The computer-implemented method of claim 1, further comprising performing a subsequent convolution pass on one or more different image pixels surrounding the sampled pixels.

3. The computer-implemented method of claim 2, wherein the initial convolution pass is performed at a specified diagonal angle, and wherein the subsequent convolution pass is performed at an opposite diagonal angle that is opposite to the specified diagonal angle.

4. The computer-implemented method of claim 2, wherein multiple-pass convolutions are performed to reduce a number of samples taken within the sliding window.

5. The computer-implemented method of claim 1, wherein the image is one of a plurality of sequential images in a video media item.

6. The computer-implemented method of claim 1, wherein at least a portion of the plurality of sequential images is sequentially blurred according to the sampling.
7. The computer-implemented method of claim 1, wherein the pixels within the sliding window are selected for sampling on a per-pixel basis.

8. The computer-implemented method of claim 1, further comprising, subsequent to presenting the blurred image, transitioning back to the original, unblurred image.

9. The computer-implemented method of claim 1, wherein a specified number of pixels are sampled from within the sliding window.

10. The computer-implemented method of claim 1, wherein the specified number of pixels that are to be sampled within the sliding window is selected by a user.

11. The computer-implemented method of claim 10, wherein the specified number of pixels that are to be sampled within the sliding window is selected based on at least one of electronic device specifications and available processing resources on the electronic device.

12. The computer-implemented method of claim 11, wherein the specified number of pixels that are to be sampled within the sliding window is dynamically adapted based on currently available processing resources.

13. A system comprising: at least one physical processor of an electronic device; and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: determine, by the physical processor of the electronic device, that at least a portion of an image is to be blurred, the image including a plurality of pixels arranged along at least one of a horizontal axis or a vertical axis; identify a boundary size for a sliding window within which pixel values are to be sampled from the image, the sliding window including a plurality of different characteristics including window size, shape, placement, or rotation; sample, from pixels that lie on an axis that is diagonal relative to at least one of the horizontal axis of the image or the vertical axis of the image, one or more pixel values from within the boundary of the sliding window, wherein the plurality of characteristics of the sliding window are dynamically changeable at each pixel sampled during the sampling, and wherein the pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern; identify one or more computing resources of the electronic device including at least the physical processor; perform an initial convolution pass on one or more pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels, wherein the blurring is performed at a variable quality level that is dynamically determined based on the identified computing resources of the electronic device; and present the image, at least a portion of which is blurred as a result of the initial convolution pass.

14. The system of claim 13, wherein the sliding window comprises a circle with a specified radius within which the one or more pixels are sampled, and wherein the size of the radius is dynamically controlled per-pixel.

15. The system of claim 13, further comprising: identifying one or more portions of content within the image; determining that the identified content has one or more prominent angles; and altering the sampling of pixels that lie on an axis that is diagonal relative to at least one of the horizontal axis of the image or the vertical axis of the image, such that the altered sampling avoids the one or more prominent angles.
16. The system of claim 15, further comprising rotating the sliding window by a dynamically determined amount at each sampled pixel.

17. The system of claim 13, wherein the specified noise pattern comprises a blue noise filter.

18. The system of claim 17, wherein noise values selected from the blue noise pattern are accessed from a lookup table, and wherein the accessed noise values are implemented for a plurality of convolutions before new noise values are accessed.

19. The system of claim 18, wherein the sliding window is at least partially altered prior to performing each convolution.

20. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: determine, by a hardware processor of an electronic device, that at least a portion of an image is to be blurred, the image including a plurality of pixels arranged along at least one of a horizontal axis or a vertical axis; identify a boundary size for a sliding window within which pixel values are to be sampled from the image, the sliding window including a plurality of different characteristics including window size, shape, placement, or rotation; sample, from pixels that lie on an axis that is diagonal relative to at least one of the horizontal axis of the image or the vertical axis of the image, one or more pixel values from within the boundary of the sliding window, wherein the plurality of characteristics of the sliding window are dynamically changeable at each pixel sampled during the sampling, and wherein the pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern; identify one or more computing resources of the electronic device including at least the hardware processor; perform an initial convolution pass on one or more pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels, wherein the blurring is performed at a variable quality level that is dynamically determined based on the identified computing resources of the electronic device; and present the image, at least a portion of which is blurred as a result of the initial convolution pass.

Description

BACKGROUND

Images or parts of images are often blurred in a movie or television show. These blurred portions may be aimed at obscuring a trademarked product, or blurring content that may be objectionable to some users. In a traditional blurring process, a computing device will sample certain pixels along a horizontal or vertical axis and blur the sampled pixels or pixels that are near to the sampled pixels. This blurring process is typically performed in two passes: one pass for the horizontal blur and one pass for the vertical blur. When using these traditional blurring algorithms, if a 100×100 pixel area is to be blurred, the computing device will need to take 100×100 samples per pixel. This sampling process, even if broken up into two different passes of 2×100, is still highly resource intensive. Moreover, traditional attempts to reduce the amount of processing in these blurring algorithms typically lead to artifacts that are noticeable and distracting to users.
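To make the sample-count gap concrete, the back-of-the-envelope comparison below restates the arithmetic from the background in Python. The 16-samples-per-pass figure for the diagonal approach is an assumed illustration, not a number given in the patent.

```python
# Illustrative sample-count arithmetic for a blur with a 100-pixel
# footprint; the sparse figure is an assumption for illustration.
width = height = 100

full_kernel = width * height   # naive 2-D kernel: 10,000 samples per pixel
two_pass = width + height      # separable horizontal + vertical passes: 200
sparse_diagonal = 2 * 16       # two diagonal passes of, say, 16 noise-chosen
                               # samples each, as described below: 32

print(full_kernel, two_pass, sparse_diagonal)  # 10000 200 32
```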
SUMMARY

As will be described in greater detail below, the present disclosure describes systems and methods for selectively blurring pixels in an image. One of these methods includes determining that at least a portion of a digital image is to be blurred. The image includes multiple pixels arranged in a grid along a horizontal axis or a vertical axis. The method further includes identifying a boundary size for a sliding window within which pixel values will be sampled from the image. The method also includes sampling, from pixels that lie on an axis that is diagonal relative to the horizontal axis of the image and/or the vertical axis of the image, various pixel values from within the boundary of the sliding window. The pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern. The method also includes performing an initial convolution pass on pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels. The method then includes presenting the digital image. At least some of the image is blurred as a result of the initial convolution pass.

In some examples, the method further includes performing a subsequent convolution pass on different image pixels surrounding the sampled pixels. In some examples, the initial convolution pass is performed at a specified diagonal angle, and the subsequent convolution pass is performed at an opposite diagonal angle that is opposite to the specified diagonal angle. In some examples, multiple-pass convolutions are performed to reduce the number of samples taken within the sliding window. In some examples, the image is one of a plurality of sequential images in a video media item. In some examples, at least a portion of the sequential images is sequentially blurred according to the sampling. In some examples, the pixels within the sliding window are selected for sampling on a per-pixel basis. In some examples, subsequent to presenting the blurred image, the method includes transitioning back to the original, unblurred image. In some examples, a specified number of pixels are sampled from within the sliding window. In some examples, the specified number of pixels that are to be sampled within the sliding window is selected by a user. In some examples, the specified number of pixels that are to be sampled within the sliding window is selected based on electronic device specifications and/or available processing resources on the electronic device. In some examples, the specified number of pixels that are to be sampled within the sliding window is dynamically adapted based on currently available processing resources.
In addition, a corresponding system may include at least one physical processor and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to determine that at least a portion of an image is to be blurred, where the image includes multiple pixels arranged along at least one of a horizontal axis or a vertical axis, identify a boundary size for a sliding window within which pixel values are to be sampled from the image, and sample, from pixels that lie on an axis that is diagonal relative to at least one of the horizontal axis of the image or the vertical axis of the image, various pixel values from within the boundary of the sliding window. The pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern. The physical processor is further configured to perform an initial convolution pass on pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels and then present the image, at least some of which is blurred as a result of the initial convolution pass.

In some examples, the sliding window includes a circle with a specified radius within which the pixels are sampled, and where the size of the radius is dynamically controlled per-pixel. In some examples, the physical processor is further configured to identify various portions of content within the image, determine that the identified content has one or more prominent angles, and alter the sampling of pixels that lie on an axis that is diagonal relative to the horizontal axis of the image or relative to the vertical axis of the image. As such, the altered sampling avoids the prominent angles. In some examples, the physical processor is further configured to rotate the sliding window by a dynamically determined amount at each sampled pixel. In some examples, the specified noise pattern is a blue noise filter. In some examples, noise values selected from the blue noise pattern are accessed from a lookup table. In some cases, the accessed noise values are implemented for multiple convolutions before new noise values are accessed. As such, a new noise value may be selected per pixel per pass, resulting in two times the number of pixels in the image or frame that is being processed. In some examples, the sliding window is at least partially altered prior to performing each convolution.

In some examples, the above-described method is encoded as computer-readable instructions on a computer-readable medium. In one example, a computer-readable medium includes computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to determine that at least a portion of an image is to be blurred, where the image includes multiple pixels arranged along at least one of a horizontal axis or a vertical axis, identify a boundary size for a sliding window within which pixel values are to be sampled from the image, and sample, from pixels that lie on an axis that is diagonal relative to at least one of the horizontal axis of the image or the vertical axis of the image, various pixel values from within the boundary of the sliding window. The pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern. The computing device is further configured to perform an initial convolution pass on pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels and then present the image, at least some of which is blurred as a result of the initial convolution pass.

Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.

FIG. 1 illustrates a computing architecture in which pixels are selectively blurred within an image.
FIG. 2 is a flow diagram of an exemplary method for selectively blurring pixels in an image.

FIG. 3 illustrates an embodiment in which pixels are sampled along a diagonal with respect to horizontal or vertical axes.

FIG. 4 illustrates an embodiment in which pixels are sampled along multiple diagonals in multiple passes with respect to horizontal or vertical axes.

FIG. 5 illustrates an embodiment in which an image is blurred according to different types of input.

FIG. 6 illustrates an embodiment in which a sliding window boundary size is dynamically changed according to different elements.

FIGS. 7A and 7B illustrate embodiments in which pixels are sampled along different diagonals based on content in the underlying image.

FIG. 8 illustrates an embodiment in which noise values are selectively accessed and implemented to identify sample pixels.

FIG. 9 is a block diagram of an exemplary content distribution ecosystem.

FIG. 10 is a block diagram of an exemplary distribution infrastructure within the content distribution ecosystem shown in FIG. 9.

FIG. 11 is a block diagram of an exemplary content player within the content distribution ecosystem shown in FIG. 9.

Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

The present disclosure is generally directed to selectively blurring pixels in a digital image or a sequence of digital images. As noted above, image blurring is typically performed by taking samples of pixel data for pixels along horizontal or vertical axes within the grid that forms the digital image. This pixel data is then used to generate a blur that obfuscates the underlying content in the image. The blur is typically generated by convolving pixel values to values that are different enough from the original colors to cause blurring, but not so different that the underlying image appears to have colors that were not originally part of the image.

In a typical example, a computing device will need to sample a block of 100×100 pixels per pixel to generate a 100×100 blur. In some cases, the horizontal pixels within the 100×100 grid are initially sampled on a first pass, and the vertical pixels in the 100×100 grid are then sampled in a second pass. Sampling this high number of pixels, however, is impractical on limited-resource devices and consumes computing resources unnecessarily on high-resource devices. Moreover, traditional solutions that attempt to reduce the amount of processing often lead to noticeable artifacts in the digital image. Furthermore, many digital images (or other computer-generated graphics including graphical user interfaces (GUIs)) include text or other objects that are aligned horizontally or vertically. For example, the letter "T" is perfectly aligned along horizontal and vertical axes.
Applying an image blur using a traditional blurring technique that samples pixels along horizontal and vertical axes would result in ghosting or other artifacts in the blurred portions of the image. These artifacts may stand out to users and may distract those users from enjoying the image, movie, GUI, or other form of digital content.

In contrast thereto, the embodiments described herein are designed to reduce the number of samples needed to generate a high-quality blur effect in an image. In some cases, the number of samples taken depends on the processing characteristics of the device, or on which computing resources are currently available. Still further, the embodiments described herein are designed to reduce the likelihood of having artifacts in the blur effect by using specific noise patterns to choose the samples. In some cases, for example, blue noise values are used to determine which pixels to sample in the image grid. In contrast to traditional techniques that sample pixels along horizontal and vertical axes, the embodiments described herein sample pixels along a diagonal relative to the horizontal and vertical axes. Moreover, the sampled pixels along the diagonal are selected using blue noise values or some other noise pattern to reduce the creation of artifacts when generating the blur effect. These embodiments will be described in greater detail below with regard to computing architecture 100 of FIG. 1, method 200 of FIG. 2, and the embodiments depicted in FIGS. 3-11.

FIG. 1 illustrates a computing environment 100 that includes a computer system 101. The computer system 101 includes software modules, embedded hardware components such as processors, or a combination of hardware and software. The computer system 101 is substantially any type of computing system including a local computing system or a distributed (e.g., cloud) computing system. In some cases, the computer system 101 includes at least one processor 102 and at least some system memory 103. The computer system 101 includes program modules for performing a variety of different functions. The program modules are hardware-based, software-based, or include a combination of hardware and software. Each program module uses computing hardware and/or software to perform specified functions, including those described herein below.

The computer system 101 also includes a communications module 104 that is configured to communicate with other computer systems. The communications module 104 includes any wired or wireless communication means that can receive and/or transmit data to or from other computer systems. These communication means include hardware interfaces including Ethernet adapters, WIFI adapters, hardware radios including, for example, a hardware-based receiver 105, a hardware-based transmitter 106, or a combined hardware-based transceiver capable of both receiving and transmitting data. The radios are cellular radios, Bluetooth radios, global positioning system (GPS) radios, or other types of radios. The communications module 104 is configured to interact with databases, mobile computing devices (such as mobile phones or tablets), embedded or other types of computing systems.

The computer system 101 further includes a determining module 107. The determining module is configured to access an image 114 (or series of images in a movie or television show, or an instance of a GUI, etc.) and determine that a blur effect is to be applied to the image.
The sliding window management module 108 defines and manages a sliding window 116 within which pixel data samples are taken by the pixel sampling module 109. The sliding window 116 is configured to operate in substantially any shape or size, and may cover a single pixel or the entire image or anything in between. For example, image 114 includes a plurality of pixels laid out in a grid. The grid includes multiple columns and rows, where the columns are arranged along a vertical axis 117, and the rows are arranged along a horizontal axis 118. The sliding window 116 specifies an area within which pixels will be sampled to create the blur effect, thereby limiting the number of pixels that are sampleable. This sliding window 116 may change in size dynamically and may change continually over time. In some cases, the sliding window changes with each sampled pixel.

The pixel sampling module 109 samples pixels within the sliding window 116. For example, it samples pixel 115 and provides that pixel data to the convolution module 110. The convolution module 110 convolves (e.g., performs a convolution algorithm on) the sampled pixels, the pixels surrounding the sampled pixels, or both, depending on the configuration. The convolution module 110 performs the convolution in either a single pass or in multiple passes. In some cases, a single-pass convolution convolves the sampled pixels or the pixels around the sampled pixels (collectively referred to herein as a "sampled pixel area") after taking one set of samples along a diagonal relative to the vertical and horizontal axes 117/118. In other cases, the convolution module performs a multi-pass convolution in which sampled pixel areas along other diagonals are further convolved in subsequent passes. Such multi-pass configurations reduce the number of pixels that need to be sampled, thereby also reducing the computing resources needed to generate the blur effect. Once the image has been blurred according to the convolution(s), the presentation module 111 presents the blurred image 119 on a display 120. These embodiments will now be described further with regard to method 200 of FIG. 2.
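Because the sliding window's characteristics can change with every sampled pixel, it can be pictured as a small mutable record that the sampling loop updates as it goes. The sketch below is one hypothetical Python rendering of that idea; the class name, field names, and the jitter rule are illustrative assumptions, not structures from the patent.

```python
from dataclasses import dataclass

@dataclass
class SlidingWindow:
    """Hypothetical per-pixel sampling window (all names illustrative)."""
    size: int         # boundary size, in pixels
    shape: str        # e.g., "circle" or "square"
    placement: tuple  # (row, col) center of the window
    rotation: float   # orientation of the sampling diagonal, in degrees

    def update_for_pixel(self, row: int, col: int, noise: float) -> None:
        # Any characteristic may change at each sampled pixel; here the
        # window recenters on the pixel and jitters its rotation by a
        # noise value in [0, 1), one plausible per-pixel policy.
        self.placement = (row, col)
        self.rotation = (self.rotation + 90.0 * noise) % 360.0
```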
FIG. 2 is a flow diagram of an exemplary computer-implemented method 200 for selectively blurring pixels in an image. The steps shown in FIG. 2 are performed by any suitable computer-executable code and/or computing system, including the system illustrated in FIG. 1. In one example, each of the steps shown in FIG. 2 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below.

As illustrated in FIG. 2, at step 210, method 200 includes determining that at least a portion of an image is to be blurred. The image includes multiple pixels arranged along horizontal and vertical axes. For instance, in one example, determining module 107 of FIG. 1 determines that image 114 is to be blurred. In some cases, the blurring applies to some of the image 114, and in other cases, the blurring applies to all of the image 114. Moreover, the blurring may apply to a single image or to a sequence of images (e.g., in a movie). (Herein, it is assumed that any mention of applying blurring to a single image could also be applied to multiple sequential images, as in a movie or video.) The blurring is applied in substantially any shape or pattern including a round pattern, an oval pattern, a square pattern, a rectangular pattern, an amorphous or hand-drawn pattern, or some other type of pattern (e.g., based on an image). In some cases, an outside user (e.g., 112) provides input 113 indicating where the blurring is to be applied, to which images, and in which patterns.

In some cases, the amount of blurring applied to the image is dependent on the computer system or other electronic device applying the blurring. A higher quality, more visually convincing blur effect is typically the result of additional computer processing, requiring additional central processing unit (CPU) time, memory usage, graphics processing unit (GPU) time, data storage, network bandwidth, data accessing time (e.g., time spent accessing stored data values such as lookup table values in random access memory (RAM) or in some other type of memory), or other processing resources. A lesser quality, but still adequate, blurring effect can be applied to the image on a lower-resource computing system, or on a device that has fewer processing resources currently available. In general, more samples are taken in a higher-quality blur, and fewer samples are taken in a lower-quality blur.

Method 200, at step 220, next includes identifying a boundary size for a sliding window within which pixel values are to be sampled from the digital image. Thus, when a blur effect is to be generated for image 114, the sliding window management module 108 identifies a boundary size (or shape or other characteristics) for sliding window 116. Then, samples are taken from the image 114 within the boundaries of the sliding window 116. The image 114 is a digital image and, as such, includes many thousands or millions of pixels. Each pixel includes a color value. The combined grid of pixel color values represents an underlying image or the "content" of the image. It is this content (or at least a portion thereof) that is to be blurred by the blurring effect. In order to generate the blurring effect, pixel values are sampled from within the sliding window 116. As noted above, the sliding window 116 may be any shape or size, and may change shape or size after the sampling of each pixel. Indeed, the embodiments described herein allow pixel-by-pixel control over each pixel value that is sampled and subsequently used in determining an appropriate convolution.

In step 230 of method 200, the pixel sampling module 109 samples from pixels that lie within the sliding window 116. More specifically, the sampled pixels (e.g., 115) lie on an axis that is diagonal relative to the horizontal axis 118 of the image and/or the vertical axis 117 of the image. It will be understood that the phrases "sampling pixels" and "sampling pixel values" are used interchangeably herein, and both phrases are intended to mean that pixel values are identified and stored for those pixels that are sampled by the pixel sampling module 109. As noted above, merely sampling pixels along the vertical or horizontal axis of the image will result in ghosting or other artifacts when the blurring effect is applied. These artifacts are undesirable and distract from the viewer's overall experience. The embodiments described herein sample pixels on a diagonal relative to the horizontal or vertical axes. Sampling the pixels in this manner reduces or eliminates the ghosting and other artifacts seen in traditional systems.
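A minimal sketch of what diagonal, noise-driven sampling could look like for a single output pixel follows, assuming a grayscale numpy image. The function name, parameters, and the uniform-random stand-in for the patent's noise pattern are all illustrative assumptions.

```python
import numpy as np

def sample_diagonal(image, row, col, window, noise, angle_deg=45.0):
    """Average pixels chosen along a diagonal through (row, col), with
    positions jittered by `noise` values in [0, 1). A sketch only."""
    h, w = image.shape
    theta = np.deg2rad(angle_deg)
    dr, dc = np.sin(theta), np.cos(theta)  # unit step along the diagonal
    half = window / 2.0
    total = 0.0
    for n in noise:
        # Each noise value picks a pseudo-random signed distance along
        # the diagonal, staying inside the sliding-window boundary.
        t = (2.0 * n - 1.0) * half
        r = int(np.clip(round(row + t * dr), 0, h - 1))
        c = int(np.clip(round(col + t * dc), 0, w - 1))
        total += image[r, c]
    return total / len(noise)

# Example: blur one pixel of a random grayscale image using 8 samples.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(sample_diagonal(img, 32, 32, window=17, noise=rng.random(8)))
```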
FIG. 3 illustrates an embodiment in which pixels are sampled at a diagonal relative to the horizontal or vertical axes. FIG. 3 illustrates a digital image 301 that includes four sampled pixels 303. These sampled pixels are all within the sliding window 302, and are all on a diagonal relative to the horizontal and vertical axes of the pixel grid. This drawing, it should be noted, is a simplification of a real-life sampling, but it remains illustrative. In a real-world image, the image would likely include millions of pixels. The sampled pixels do not need to be on a perfectly straight diagonal line as shown in FIG. 3. Rather, the sampled pixels are sampled in clusters, where the cluster forms a generally diagonal line relative to the horizontal or vertical axes. In some cases, the pixels sampled along the diagonal angle within the sliding window are selected according to a noise pattern. The noise pattern ensures that the pixels that are sampled are chosen in a pseudo-random manner. In certain cases, some types of noise patterns are better than others at selecting sample pixels that will lead to an optimal blur effect. This concept will be explored further below.

Method 200 of FIG. 2 next includes, at step 240, performing an initial convolution pass on pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels. The convolution module 110 of computer system 101 in FIG. 1 performs a convolution on the area that is to be blurred. This convolution occurs in a single pass, or occurs in multiple passes. Each pass convolves some or all of the pixels in the specified area. The convolution incorporates the pixel values of the sampled pixels and convolves those pixel values with a kernel or other set of values (e.g., a filter) that will create the blurring effect. In some cases, the blurring effect extends beyond the sliding window 116 in which the pixels are sampled, and in other cases, the blurring effect stays within the bounds of the sliding window. Once the convolution pass or passes have been performed, the blurred image 119 is prepared for presentation on a display 120 (step 250 of method 200). The viewer (e.g., 112) then sees the blurred image 119 that includes the blur effect generated by the convolution.

In some cases, the convolution is performed in two passes; in other cases, it is performed in more than two passes. In one of these cases, the initial pass samples and convolves one portion of the image, while a subsequent pass samples and convolves a different portion of the image. The two different portions of the image either have some overlap or are entirely separate. In one case, for example, the initial convolution pass is performed at a specified diagonal angle, and the subsequent convolution pass is performed at an opposite diagonal angle that is opposite to the first diagonal angle. For instance, as shown in FIG. 4, image 401 includes sampled pixels 403A and 403B that are within the sliding window 402. The sampled pixels 403A were convolved during the initial pass, and the sampled pixels 403B were convolved during the subsequent pass. The sampled pixels 403B were sampled at an angle that is 90 degrees different (e.g., rotated) relative to the sampled pixels 403A. It will be understood here that the pixels are sampled and convolved at substantially any angle, with respect to the image grid or with respect to each other.
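The two-pass idea can be sketched by running one diagonal averaging pass over the whole image and then a second pass at the opposite diagonal. This is an assumed rendering rather than the patent's implementation: a real kernel would be weighted, and the noise source would be a blue noise pattern rather than the uniform values used here.

```python
import numpy as np

def diagonal_pass(image, window, noise, angle_deg):
    """One convolution pass: every pixel becomes the average of a few
    noise-chosen samples along one diagonal. Unweighted, for brevity."""
    h, w = image.shape
    theta = np.deg2rad(angle_deg)
    dr, dc = np.sin(theta), np.cos(theta)
    half = window / 2.0
    out = np.empty_like(image)
    for r in range(h):
        for c in range(w):
            total = 0.0
            for n in noise:
                t = (2.0 * n - 1.0) * half
                rr = int(np.clip(round(r + t * dr), 0, h - 1))
                cc = int(np.clip(round(c + t * dc), 0, w - 1))
                total += image[rr, cc]
            out[r, c] = total / len(noise)
    return out

# Initial pass at +45 degrees, subsequent pass at the opposite diagonal
# (-45 degrees); together they approximate a 2-D blur from few samples.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
blurred = diagonal_pass(diagonal_pass(img, 17, rng.random(8), 45.0),
                        17, rng.random(8), -45.0)
```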
In some cases, the sliding window is rotated to sample pixels at other diagonal angles (e.g., 30 degrees, 45 degrees, 60 degrees, or some other diagonal angle that is different than the initial pass, etc.). This rotation of the sliding window and the sampling further reduces the likelihood of creating artifacts in the blurred image. In some cases, convolutions are performed in multiple passes to reduce the number of samples taken within the sliding window. Each pixel value sample takes CPU time and computing resources. The embodiments described herein attempt to use fewer samples than conventional approaches and still provide a visually appealing blur effect. Taking fewer samples results in less CPU usage and longer battery life in mobile devices. By performing a convolution in multiple passes (e.g., in two, three, or more passes), each pass can involve fewer samples and thereby use fewer computing resources while still maintaining a minimum level of blur effect quality. This allows the embodiments described herein to run capably on low-specification devices that have relatively slow processors, relatively small amounts of memory, etc.

In some cases, the blur effect generated for one image is applied to multiple subsequent images. For example, in a movie, video clip, or other video media item, a blur effect is applied to multiple sequential images in that video item. If the movie includes 24 frames every second, for instance, the blur is applied to hundreds of frames over a period of ten seconds. In other cases, the blur effect may change (either slightly or greatly) in each image of the sequence or at specific places within the sequence (e.g., at a scene change). In still other cases, the blur effect applies until a specified marker is reached or until the blurred object is no longer in the scene. In some cases, machine learning is used to identify the content of the image and detect when an item that is to be blurred is in the scene (i.e., in the image). In such cases, the blur effect is generated in a manner that tracks the item when it is on screen and blurs that item whenever it appears on screen. In other cases, a portion of an object is blurred while the remainder of the object remains sharp. For instance, a car's logo or license plate may be blurred while the rest of the car remains in focus. The car's logo or license plate is thus tracked using object recognition and blurred wherever it appears within the video item.
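A hedged sketch of that per-frame tracked blurring follows. Here `detect_box` stands in for whatever object-recognition step a real system would use, and both helper arguments are hypothetical stand-ins rather than anything named in the patent.

```python
import numpy as np

def blur_tracked_item(frames, detect_box, blur_fn):
    """Blur a tracked item (e.g., a logo) wherever it appears in a
    sequence. `detect_box` returns None or (top, left, bottom, right)."""
    out = []
    for frame in frames:
        box = detect_box(frame)
        if box is not None:
            t, l, b, r = box
            frame = frame.copy()
            frame[t:b, l:r] = blur_fn(frame[t:b, l:r])  # blur only the item
        out.append(frame)
    return out

# Usage with stand-ins: a fixed bounding box and a crude mean "blur".
frames = [np.random.default_rng(i).random((48, 48)) for i in range(3)]
result = blur_tracked_item(frames,
                           detect_box=lambda f: (10, 10, 20, 30),
                           blur_fn=lambda roi: np.full_like(roi, roi.mean()))
```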
The blurring effect is applied to images differently in different situations. For example, as shown in FIG. 5, a blur effect 507 is applied to a video media item 505 that includes multiple sequential images 506. The blur effect is applied (e.g., by convolution module 110 of FIG. 1) according to one of a variety of inputs or masks (e.g., 501). These mask inputs define the shape and size of the intended blur 507, indicate to which images the blur is to be applied, indicate how the blur is to be changed from image to image, indicate the quality of the blur, indicate the number of passes, or provide other characteristics of the blur. In some cases, the input mask 501 is a gradient 502 that is to be applied to specified pixels in the sequential images 506. In other cases, the input mask 501 is an image 503 whose data is convolved with one or more of the existing sequential images 506 to create the blur effect 507. In still other cases, the input mask 501 includes instructions 504 (e.g., from user 112 of FIG. 1) indicating how, where, and to what degree the blur effect 507 is to be applied to the video media item 505. In one example, the instructions indicate that the top half or bottom half of the image is to be blurred. In other cases, a square blur effect, a circular blur effect, or a rectangular blur effect is applied. Any of the inputs, including the gradient 502, the image 503, or the procedural instructions 504, may provide an indication of how and where the blur effect 507 is to be generated and applied. This process is repeated for each image in the video media item. Some images receive no blur effect, while others receive one or more different blur effects.

In some cases, each image is analyzed and blurred, while, in other cases, images are analyzed but are not blurred, thereby preserving resources. For instance, if a blur effect is the same or substantially the same between two or more images, the system will omit generation of a unique blur for that subsequent frame. In some cases, the computer system 101 of FIG. 1 determines that the amount of change in a blur effect between two or more frames is below a specified threshold. When the amount of change is below the threshold, the computer system 101 does not recompute the blur effect. However, when the amount of change (e.g., as indicated by the mask inputs 501) is above the threshold, the computer system 101 will recompute the blur effect. In some cases, the threshold is raised or lowered dynamically based on the amount of available computing resources. Thus, in such cases, the blur effect 507 is generated and applied dynamically in a manner that provides a sufficient blur effect while still preserving computing resources on the computer system 101.
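The skip-recompute decision might be structured as below. The direction of the adjustment (raising the threshold under load so that more frames reuse the previous blur) is an assumption; the text only says the threshold moves with available resources, and the numbers are invented for illustration.

```python
def should_recompute(change_amount, available_resources, base_threshold=0.05):
    """Recompute the blur only when the inter-frame change in the mask
    inputs exceeds a resource-dependent threshold. A sketch: values in
    (0, 1], where available_resources = 1.0 means a fully idle system."""
    threshold = base_threshold / max(available_resources, 1e-6)
    return change_amount > threshold

print(should_recompute(0.08, available_resources=1.0))   # True: 0.08 > 0.05
print(should_recompute(0.08, available_resources=0.25))  # False: threshold
                                                         # raised to 0.2
```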
As with the blur effect, the number of samples taken prior to generating a blur is also dynamically controlled according to user specification, device specification, or current computing resources. FIG. 6, for example, illustrates an embodiment in which pixels are sampled from within a sliding window. The number of pixels sampled, and the size and/or shape of the sliding window, changes based on a variety of inputs and circumstances. In some cases, for example, the number of pixels that are to be sampled within a given sliding window is chosen by a user. In such cases, for instance, user 601 provides a user input indicating the characteristics of the sliding window (e.g., size, shape, placement, rotation, diagonal angle, etc.) and/or indicating how many pixels are to be sampled within the sliding window. In cases where the sliding window is larger, more pixels are sampled. In cases where the sliding window is smaller, fewer pixels are sampled. Thus, if the user selects a mid-size sliding window 606, more pixels will be sampled than if the user had chosen the smaller window 605. However, fewer pixels will be sampled than if the user 601 had chosen the larger sliding window 607. Thus, at least in some cases, the number of pixels sampled is dependent on the size of the sliding window. In other cases, the user simply specifies the number of pixels that are to be sampled, or specifies a computing resource threshold. This computing resource threshold provides an upper limit on computing resources used for sampling and generating a blur effect, thus preserving at least some computing resources for other purposes.

In some cases, the number of pixels that are to be sampled within the sliding window is selected based on electronic device specifications and/or available processing resources on the electronic device. In FIG. 6, for example, device characteristics 602 including device specifications or available processing resources are used to determine how many pixels are sampled for a given image and further specify characteristics of the sliding window. The device specifications or characteristics indicate, for example, the CPU's number of cores and/or clock speed, the total amount of random access memory, the amount of data storage on the device, the types of radios and bandwidth available on those radios, the GPU's number of cores, memory, and/or clock speed, or other computing resources' specifications.

In some cases, the computer system 101 of FIG. 1 or some other computer system is designed to run performance metrics on known devices (e.g., mobile phones, tablets, gaming systems, streaming devices, televisions, wearable devices, etc.). These performance metrics provide an indication of how well the device performs at receiving and transmitting data, encoding or decoding data, encrypting or decrypting data, generating blur or other effects, playing back movies or videos at a higher or lower rate, playing back videos at lower and higher resolutions, holding a steady number of frames per second, or provide other performance metrics. These performance metrics are then stored for each device, as they are gathered over time. In some embodiments, the computer system 101 or some other computer system accesses these stored performance metrics to predict how well movie or video playback will work, or how well blur effects will be generated and applied on that electronic device. Then, without ever testing actual performance on a given device, the computer system will use its knowledge of previously tested systems and determine how well blur (or other) effects will be generated on the new device. In the case of blur effect generation and application, for instance, the computer system 101 determines that a minimum quality blur effect will be generated and applied to an image using X number of samples on that device. Thus, without even testing a new device, the computer system 101 determines that blur effects may be applied, but only with a certain number of samples X. The blur effect quality is thus increased or decreased to a certain level that fits the new device without having previously tested the new device.

In such cases, or in cases where the blur effect is to be applied using a device that has been tested and has known processing resource constraints, the number of pixels that are to be sampled within the sliding window is dynamically adapted based on currently available processing resources. Thus, regardless of what the electronic device's specifications are, or how well the device fared in previous performance tests, the number of samples taken for any given image (or the size or radius or shape of the sliding window) varies based on the amount of currently available processing resources (e.g., 603 of FIG. 6). For example, even if a user input 601 specified that Y number of samples were to be taken on a given image, or even if the device's specifications (on the device generating the blur effect) indicated that a superior quality blur effect was possible, the currently available processing resources 603 may indicate that the number of samples (or the size of the sliding window) is to be reduced.
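One way to combine the three inputs just described (user request, device rating, current load) into a per-window sample count is sketched below. The clamping policy, the floor of four samples, and the numbers are illustrative assumptions, not values from the patent.

```python
def choose_sample_count(device_max_samples, user_requested,
                        cpu_idle_fraction, floor=4):
    """Pick how many pixels to sample per sliding window right now.
    Caps the user's request by the device's tested capability, then
    scales by currently idle CPU. All inputs are illustrative."""
    budget = min(user_requested, device_max_samples)
    return max(floor, int(budget * cpu_idle_fraction))

# A user asks for 64 samples on a device rated for 32; with the CPU
# half busy, the system falls back to 16 samples per window.
print(choose_sample_count(32, 64, cpu_idle_fraction=0.5))  # 16
```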
The computer system 101 determines, in some cases for example, that CPU and memory resources are being diverted to other tasks. The computer system 101 then determines that the pixel sampling module 109 can only sample Z number of pixels and then dynamically reduces the number of samples taken for image 114 (and for subsequent images) to Z. If the computer system's available computing resources 603 increase because another task finished, the number of samples Z is dynamically increased to a new value that will provide an optimum quality blur effect. Thus, even if a user specifies a desire for a very high-quality blur effect, the computer system 101 will dynamically reduce the quality of the blur effect (if the currently available computing resources call for it) by dynamically reducing the number of samples taken for each image.

In some cases, the computer system 101 determines how many pixels to sample prior to accessing the image and then samples the determined number of pixels for that image. As such, the computer system determines, on a pixel-per-pixel basis, whether that pixel is to be sampled for that image. Similarly, the characteristics of the sliding window 116 are also determined or adjusted with the sampling of each pixel. Thus, both the number of samples and the size, shape, or other characteristics of the sliding window are also determined on a pixel-by-pixel basis. This dynamism in generating blur and other effects ensures that the viewer sees the highest possible quality blur, while working within the operating constraints of the electronic device.

In addition to the method described above in conjunction with FIG. 2, a corresponding system includes at least one physical processor and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to determine that at least a portion of an image is to be blurred, where the image includes multiple pixels arranged along at least one of a horizontal axis or a vertical axis, identify a boundary size for a sliding window within which pixel values are to be sampled from the image, and sample, from pixels that lie on an axis that is diagonal relative to at least one of the horizontal axis of the image or the vertical axis of the image, various pixel values from within the boundary of the sliding window. The pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern. The physical processor is further configured to perform an initial convolution pass on pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels and then present the image, at least some of which is blurred as a result of the initial convolution pass.

FIGS. 7A and 7B illustrate embodiments in which pixels are sampled at different diagonal angles depending on the form or structure of the underlying content. In some embodiments, the physical processor (e.g., 102 of FIG. 1) of computer system 101 is configured to identify various portions of content within the image 114. The content is identified using machine learning or similar content acquisition techniques. The computer system then determines that the identified content has various prominent angles. For example, as shown in FIG. 7A, the content of image 701A comprises a cross 702. The cross 702 includes multiple straight lines along the vertical and horizontal axes.
If samples were to be taken in the traditional manner along the vertical and horizontal axes of the image grid, the blur effect generated from the samples would include ghosting and other artifacts. In the embodiments herein, the computer system 101 identifies the content in the image 701A and, based on the content, alters the sampling of pixels (e.g., 703). The alterations ensure that the sampling occurs along an axis that is diagonal relative to the horizontal and/or vertical axes of the image. In this manner, the altered sampling avoids the prominent horizontal and vertical angles of the cross 702 (or other image content). In a similar fashion, if the underlying image content changes, such that the underlying content includes predominantly diagonal lines (e.g., rotated cross 704), the computer system still identifies the content and the predominant lines and determines how to appropriately alter the sampling so that samples are taken along lines that are diagonal to the existing diagonal lines (e.g., samples 705). By avoiding samples taken along the predominant lines, image artifacts generated during convolution are circumvented.

In cases where the image includes text (e.g., on a user interface), many of the text characters have vertical and horizontal edges. In such cases, the diagonal pixel samples within the sliding window are taken along a 30-degree angle or a 45-degree angle relative to the horizontal edges of the sliding window. In some cases, the sliding window is rotated, and samples are again taken but at a different angle. After the sample is taken (for that pixel or for the whole image), the sliding window is rotated (or moved side to side) again, creating a "wiggle" effect as the sliding window circles or jitters around a pixel or set of pixels. In this manner, the sliding window includes and excludes different pixel samples as the sliding window continually changes. In some cases, the amount, direction, and type of change applied to the sliding window depends on which angles and shapes are determined to be the most prominent in the image content. The wiggling and other changes to the sliding window improve the sampling of pixels and thereby improve the subsequent blur effect.

As noted above, the sampling (and the blur effect quality) is affected by the number of samples taken for each image. The blur effect quality is also affected by which pixels are sampled. As discussed earlier, simply sampling along horizontal or vertical image grid lines results in suboptimal quality, with artifacts and other visual anomalies. In some embodiments, the computer system 101 of FIG. 1 samples pixels according to a noise pattern. The noise pattern, in some cases, is a white noise pattern that omits no frequencies, and in other cases, the noise pattern is a blue noise filter or other noise filter that omits (or includes only) certain frequencies. In some cases, the noise values are precomputed and are stored in a lookup table. The computer system 101 then consults this lookup table to access the noise values. The noise values are then used when determining which pixels to sample. Instead of sampling pixels (purely) randomly, or purely along a straight line, optimal blur effects are generated by sampling the pixels along a diagonal, but in a pseudo-random fashion. This pseudorandom fashion is a noise pattern or other similar pattern that yields optimal pixel sampling.
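A rough sketch of that content-aware angle selection follows. Estimating prominent angles from gradient orientations is one possible stand-in; the text leaves the detection method open (e.g., machine learning), and the candidate angles are illustrative.

```python
import numpy as np

def pick_sampling_angle(image, candidates=(30.0, 45.0, 60.0)):
    """Choose a sampling diagonal that avoids the content's prominent
    angles, here estimated from a weighted histogram of edge
    orientations. A sketch under assumed detection logic."""
    gy, gx = np.gradient(image.astype(np.float64))
    # Gradient direction is perpendicular to an edge, so rotate by 90
    # degrees to get the edge's own orientation in [0, 180).
    edge_angles = (np.rad2deg(np.arctan2(gy, gx)) + 90.0) % 180.0
    hist, edges = np.histogram(edge_angles, bins=18, range=(0.0, 180.0),
                               weights=np.hypot(gx, gy))
    prominent = (edges[hist.argmax()] + edges[hist.argmax() + 1]) / 2.0
    def circular_dist(a):
        return min(abs(a - prominent), 180.0 - abs(a - prominent))
    # Sample along whichever candidate diagonal lies farthest from the
    # dominant content angle (e.g., avoid 0/90 degrees for text).
    return max(candidates, key=circular_dist)

img = np.zeros((32, 32)); img[16, :] = 1.0  # one strong horizontal line
print(pick_sampling_angle(img))             # farthest candidate: 60.0
```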
FIG. 8 illustrates an embodiment in which a computer system 803 implements blue noise values in an efficient manner. The computer system 803 (which is the same as or different than computer system 101 of FIG. 1) receives or otherwise accesses media item 801 that has a single image or a set of images 802. The data accessing module 805 accesses noise values stored in data store 807, which is local or remote to computer system 803. In some cases, the data accessing module 805 accesses blue noise values 809 in lookup table 808. In other cases, the data accessing module 805 accesses white noise values 810 or some other noise or other pseudorandom values in lookup table 808. The sampling module 806 of computer system 803 then selects random numbers from the (blue) noise distribution indicating which pixels are to be sampled. This blue noise distribution provides a balanced sampling result when attempting to create an optimal blurring effect from a limited number of pixel samples. In some cases, these accessed (blue) noise values are implemented for multiple samplings and multiple convolutions before new noise values are accessed. In other cases, new noise values are accessed for each new pixel sampling. This, however, will be taxing on the computer system's resources. Accordingly, in order to conserve computing resources, noise values (e.g., 809) are used for multiple pixel samplings and across multiple convolutions. In this manner, computing resources are further preserved on a mobile (potentially lower specification) device, while still producing a blurring that is aesthetically pleasing to the viewer.
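The lookup-table reuse pattern described for FIG. 8 might be structured as below. The class, the reuse count of four convolutions, and the shuffled-uniform stand-in for a true blue noise table are all illustrative assumptions.

```python
import numpy as np

class NoiseLUT:
    """Sketch of a precomputed noise lookup table whose values are
    reused across several convolutions before the cursor advances.
    A real table would hold blue noise; this one fakes it with a
    shuffled uniform sequence for illustration."""

    def __init__(self, size=4096, reuse_for=4, seed=0):
        rng = np.random.default_rng(seed)
        self.table = rng.permutation(np.linspace(0.0, 1.0, size))
        self.reuse_for = reuse_for  # convolutions per batch of values
        self.cursor = 0
        self.uses = 0

    def next_batch(self, count):
        if self.uses >= self.reuse_for:  # advance only after N convolutions
            self.cursor = (self.cursor + count) % len(self.table)
            self.uses = 0
        self.uses += 1
        idx = (self.cursor + np.arange(count)) % len(self.table)
        return self.table[idx]

lut = NoiseLUT()
passes = [lut.next_batch(8) for _ in range(6)]  # the first four batches are
                                                # identical, then fresh values
```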
In some cases, after the blurring effect has been applied to the image(s), the computer system 803 is configured to transition back to the original, unblurred image. This transition back to the original image, including the removal of the image blurring, may occur over a specified amount of time and may occur slowly or quickly to emphasize or deemphasize the transition.

In some examples, the above-described method is encoded as computer-readable instructions on a computer-readable medium. For example, a computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to determine that at least a portion of an image is to be blurred, where the image includes multiple pixels arranged along at least one of a horizontal axis or a vertical axis, identify a boundary size for a sliding window within which pixel values are to be sampled from the image, and sample, from pixels that lie on an axis that is diagonal relative to at least one of the horizontal axis of the image or the vertical axis of the image, various pixel values from within the boundary of the sliding window. The pixels sampled along the diagonal angle within the sliding window are selected according to a specified noise pattern. The computing device is further configured to perform an initial convolution pass on pixels surrounding the sampled pixels to blur at least some of the pixels surrounding the sampled pixels and then present the image, at least some of which is blurred as a result of the initial convolution pass.

In this manner, the embodiments described above allow blurring effects to be applied to images in a way that can be performed on substantially any device including low-specification mobile devices. The embodiments allow for dynamic variation in the choice of which pixels to sample, how many pixels to sample, how big the sampling window is, and in many other variables. Each of these selections may then change throughout the blurring process according to computing resources that are available (or not available) at the time. This allows the computing system to continually generate optimal blurring effects, regardless of any changes in the computing environment.

The following will provide, with reference to FIG. 9, detailed descriptions of exemplary ecosystems in which content is provisioned to end nodes and in which requests for content are steered to specific end nodes. The discussion corresponding to FIGS. 10 and 11 presents an overview of an exemplary distribution infrastructure and an exemplary content player used during playback sessions, respectively. These exemplary ecosystems and distribution infrastructures are implemented in any of the embodiments described above with reference to FIGS. 1-8.

FIG. 9 is a block diagram of a content distribution ecosystem 900 that includes a distribution infrastructure 910 in communication with a content player 920. In some embodiments, distribution infrastructure 910 is configured to encode data at a specific data rate and to transfer the encoded data to content player 920. Content player 920 is configured to receive the encoded data via distribution infrastructure 910 and to decode the data for playback to a user. The data provided by distribution infrastructure 910 includes, for example, audio, video, text, images, animations, interactive content, haptic data, virtual or augmented reality data, location data, gaming data, or any other type of data that is provided via streaming.

Distribution infrastructure 910 generally represents any services, hardware, software, or other infrastructure components configured to deliver content to end users. For example, distribution infrastructure 910 includes content aggregation systems, media transcoding and packaging services, network components, and/or a variety of other types of hardware and software. In some cases, distribution infrastructure 910 is implemented as a highly complex distribution system, a single media server or device, or anything in between. In some examples, regardless of size or complexity, distribution infrastructure 910 includes at least one physical processor 912 and at least one memory device 914. One or more modules 916 are stored or loaded into memory 914 to enable adaptive streaming, as discussed herein.

Content player 920 generally represents any type or form of device or system capable of playing audio and/or video content that has been provided over distribution infrastructure 910. Examples of content player 920 include, without limitation, mobile phones, tablets, laptop computers, desktop computers, televisions, set-top boxes, digital media players, virtual reality headsets, augmented reality glasses, and/or any other type or form of device capable of rendering digital content. As with distribution infrastructure 910, content player 920 includes a physical processor 922, memory 924, and one or more modules 926. Some or all of the adaptive streaming processes described herein are performed or enabled by modules 926, and in some examples, modules 916 of distribution infrastructure 910 coordinate with modules 926 of content player 920 to provide adaptive streaming of multimedia content.

In certain embodiments, one or more of modules 916 and/or 926 in FIG. 9 represent one or more software applications or programs that, when executed by a computing device, cause the computing device to perform one or more tasks.
For example, and as will be described in greater detail below, one or more of modules 916 and 926 represent modules stored and configured to run on one or more general-purpose computing devices. One or more of modules 916 and 926 in FIG. 9 also represent all or portions of one or more special-purpose computers configured to perform one or more tasks. In addition, one or more of the modules, processes, algorithms, or steps described herein transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein receive audio data to be encoded, transform the audio data by encoding it, output a result of the encoding for use in an adaptive audio bit-rate system, transmit the result of the transformation to a content player, and render the transformed data to an end user for consumption. Additionally or alternatively, one or more of the modules recited herein transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device. Physical processors 912 and 922 generally represent any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, physical processors 912 and 922 access and/or modify one or more of modules 916 and 926, respectively. Additionally or alternatively, physical processors 912 and 922 execute one or more of modules 916 and 926 to facilitate adaptive streaming of multimedia content. Examples of physical processors 912 and 922 include, without limitation, microprocessors, microcontrollers, central processing units (CPUs), field-programmable gate arrays (FPGAs) that implement softcore processors, application-specific integrated circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, and/or any other suitable physical processor. Memory 914 and 924 generally represent any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, memory 914 and/or 924 stores, loads, and/or maintains one or more of modules 916 and 926. Examples of memory 914 and/or 924 include, without limitation, random access memory (RAM), read only memory (ROM), flash memory, hard disk drives (HDDs), solid-state drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, and/or any other suitable memory device or system. FIG. 10 is a block diagram of exemplary components of content distribution infrastructure 910 according to certain embodiments. Distribution infrastructure 910 includes storage 1010, services 1020, and a network 1030. Storage 1010 generally represents any device, set of devices, and/or systems capable of storing content for delivery to end users. Storage 1010 includes a central repository with devices capable of storing terabytes or petabytes of data and/or includes distributed storage systems (e.g., appliances that mirror or cache content at Internet interconnect locations to provide faster access to the mirrored content within certain regions). Storage 1010 is also configured in any other suitable manner. As shown, storage 1010 may store a variety of different items including content 1012, user data 1014, and/or log data 1016. 
Content 1012 includes television shows, movies, video games, user-generated content, and/or any other suitable type or form of content. User data 1014 includes personally identifiable information (PII), payment information, preference settings, language and accessibility settings, and/or any other information associated with a particular user or content player. Log data 1016 includes viewing history information, network throughput information, and/or any other metrics associated with a user's connection to or interactions with distribution infrastructure 910. Services 1020 include personalization services 1022, transcoding services 1024, and/or packaging services 1026. Personalization services 1022 personalize recommendations, content streams, and/or other aspects of a user's experience with distribution infrastructure 910. Transcoding services 1024 compress media at different bitrates which, as described in greater detail below, enable real-time switching between different encodings. Packaging services 1026 package encoded video before deploying it to a delivery network, such as network 1030, for streaming. Network 1030 generally represents any medium or architecture capable of facilitating communication or data transfer. Network 1030 facilitates communication or data transfer using wireless and/or wired connections. Examples of network 1030 include, without limitation, an intranet, a wide area network (WAN), a local area network (LAN), a personal area network (PAN), the Internet, power line communications (PLC), a cellular network (e.g., a global system for mobile communications (GSM) network), portions of one or more of the same, variations or combinations of one or more of the same, and/or any other suitable network. For example, as shown in FIG. 10, network 1030 includes an Internet backbone 1032, an internet service provider 1034, and/or a local network 1036. As discussed in greater detail below, bandwidth limitations and bottlenecks within one or more of these network segments trigger video and/or audio bit rate adjustments. FIG. 11 is a block diagram of an exemplary implementation of content player 920 of FIG. 9. Content player 920 generally represents any type or form of computing device capable of reading computer-executable instructions. Content player 920 includes, without limitation, laptops, tablets, desktops, servers, cellular phones, multimedia players, embedded systems, wearable devices (e.g., smart watches, smart glasses, etc.), smart vehicles, gaming consoles, internet-of-things (IoT) devices such as smart appliances, variations or combinations of one or more of the same, and/or any other suitable computing device. As shown in FIG. 11, in addition to processor 922 and memory 924, content player 920 includes a communication infrastructure 1102 and a communication interface 1122 coupled to a network connection 1124. Content player 920 also includes a graphics interface 1126 coupled to a graphics device 1128, an input interface 1134 coupled to an input device 1136, and a storage interface 1138 coupled to a storage device 1140. Communication infrastructure 1102 generally represents any type or form of infrastructure capable of facilitating communication between one or more components of a computing device.
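Returning briefly to the transcoding services described above, the sketch below illustrates, in TypeScript, how an encoding ladder might support the real-time bit-rate switching that bandwidth bottlenecks trigger. The ladder rungs, the headroom factor, and the file names are assumptions for illustration, not values from this disclosure.

```typescript
// One title is transcoded into several encodings ("ladder rungs"), sorted by
// ascending bit rate.
interface Encoding {
  bitrateKbps: number;
  resolution: string;
  url: string;
}

const ladder: Encoding[] = [
  { bitrateKbps: 235,  resolution: "320x240",   url: "title_235.mp4"  },
  { bitrateKbps: 1050, resolution: "640x480",   url: "title_1050.mp4" },
  { bitrateKbps: 3000, resolution: "1280x720",  url: "title_3000.mp4" },
  { bitrateKbps: 5800, resolution: "1920x1080", url: "title_5800.mp4" },
];

// Pick the highest rung that fits the measured throughput, leaving headroom
// so a bottleneck in any network segment does not immediately stall playback.
function selectEncoding(rungs: Encoding[], throughputKbps: number): Encoding {
  const headroom = 0.8; // assumption: reserve 20% of measured bandwidth
  const fitting = rungs.filter((e) => e.bitrateKbps <= throughputKbps * headroom);
  return fitting.length > 0 ? fitting[fitting.length - 1] : rungs[0];
}
```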
Examples of communication infrastructure 1102 include, without limitation, any type or form of communication bus (e.g., a peripheral component interconnect (PCI) bus, PCI Express (PCIe) bus, a memory bus, a frontside bus, an integrated drive electronics (IDE) bus, a control or register bus, a host bus, etc.). As noted, memory 924 generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or other computer-readable instructions. In some examples, memory 924 stores and/or loads an operating system 1108 for execution by processor 922. In one example, operating system 1108 includes and/or represents software that manages computer hardware and software resources and/or provides common services to computer programs and/or applications on content player 920. Operating system 1108 performs various system management functions, such as managing hardware components (e.g., graphics interface 1126, audio interface 1130, input interface 1134, and/or storage interface 1138). Operating system 1108 also provides process and memory management models for playback application 1110. The modules of playback application 1110 include, for example, a content buffer 1112, an audio decoder 1118, and a video decoder 1120. Playback application 1110 is configured to retrieve digital content via communication interface 1122 and play the digital content through graphics interface 1126. Graphics interface 1126 is configured to transmit a rendered video signal to graphics device 1128. In normal operation, playback application 1110 receives a request from a user to play a specific title or specific content. Playback application 1110 then identifies one or more encoded video and audio streams associated with the requested title. After playback application 1110 has located the encoded streams associated with the requested title, playback application 1110 downloads, from distribution infrastructure 910, sequence header indices associated with each of those encoded streams. A sequence header index associated with encoded content includes information related to the encoded sequence of data included in the encoded content. In one embodiment, playback application 1110 begins downloading the content associated with the requested title by downloading sequence data encoded to the lowest audio and/or video playback bitrates to minimize startup time for playback. The requested digital content file is then downloaded into content buffer 1112, which is configured to serve as a first-in, first-out queue. In one embodiment, each unit of downloaded data includes a unit of video data or a unit of audio data. As units of video data associated with the requested digital content file are downloaded to the content player 920, the units of video data are pushed into the content buffer 1112. Similarly, as units of audio data associated with the requested digital content file are downloaded to the content player 920, the units of audio data are pushed into the content buffer 1112. In one embodiment, the units of video data are stored in video buffer 1116 within content buffer 1112 and the units of audio data are stored in audio buffer 1114 of content buffer 1112. A video decoder 1120 reads units of video data from video buffer 1116 and outputs the units of video data in a sequence of video frames corresponding in duration to a fixed span of playback time.
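The first-in, first-out behavior just described can be sketched as follows in TypeScript; the class and field names are assumptions, since the text above describes the buffers functionally rather than as an API.

```typescript
// Content buffer as a FIFO queue with separate video and audio sub-buffers.
class FifoBuffer<T> {
  private items: T[] = [];
  push(unit: T): void { this.items.push(unit); }        // enqueue a downloaded unit
  read(): T | undefined { return this.items.shift(); }  // reading de-queues the unit
  get length(): number { return this.items.length; }
}

interface VideoUnit { frameData: Uint8Array; }
interface AudioUnit { sampleData: Uint8Array; }

class ContentBuffer {
  readonly videoBuffer = new FifoBuffer<VideoUnit>(); // cf. video buffer 1116
  readonly audioBuffer = new FifoBuffer<AudioUnit>(); // cf. audio buffer 1114
}

// As units arrive, they are pushed into the matching sub-buffer; the decoders
// later read (and thereby de-queue) them in download order.
const buffer = new ContentBuffer();
buffer.videoBuffer.push({ frameData: new Uint8Array(0) });
const nextFrame = buffer.videoBuffer.read();
```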
Reading a unit of video data from video buffer 1116 effectively de-queues the unit of video data from video buffer 1116. The sequence of video frames is then rendered by graphics interface 1126 and transmitted to graphics device 1128 to be displayed to a user. An audio decoder 1118 reads units of audio data from audio buffer 1114 and outputs the units of audio data as a sequence of audio samples, generally synchronized in time with a sequence of decoded video frames. In one embodiment, the sequence of audio samples is transmitted to audio interface 1130, which converts the sequence of audio samples into an electrical audio signal. The electrical audio signal is then transmitted to a speaker of audio device 1132, which, in response, generates an acoustic output. In situations where the bandwidth of distribution infrastructure 910 is limited and/or variable, playback application 1110 downloads and buffers consecutive portions of video data and/or audio data from video encodings with different bit rates based on a variety of factors (e.g., scene complexity, audio complexity, network bandwidth, device capabilities, etc.). In some embodiments, video playback quality is prioritized over audio playback quality. Audio playback and video playback quality are also balanced with each other, and in some embodiments audio playback quality is prioritized over video playback quality. Graphics interface 1126 is configured to generate frames of video data and transmit the frames of video data to graphics device 1128. In one embodiment, graphics interface 1126 is included as part of an integrated circuit, along with processor 922. Alternatively, graphics interface 1126 is configured as a hardware accelerator that is distinct from (i.e., is not integrated within) a chipset that includes processor 922. Graphics interface 1126 generally represents any type or form of device configured to forward images for display on graphics device 1128. For example, graphics device 1128 is fabricated using liquid crystal display (LCD) technology, cathode-ray tube technology, and light-emitting diode (LED) display technology (either organic or inorganic). In some embodiments, graphics device 1128 also includes a virtual reality display and/or an augmented reality display. Graphics device 1128 includes any technically feasible means for generating an image for display. In other words, graphics device 1128 generally represents any type or form of device capable of visually displaying information forwarded by graphics interface 1126. As illustrated in FIG. 11, content player 920 also includes at least one input device 1136 coupled to communication infrastructure 1102 via input interface 1134. Input device 1136 generally represents any type or form of computing device capable of providing input, either computer or human generated, to content player 920. Examples of input device 1136 include, without limitation, a keyboard, a pointing device, a speech recognition device, a touch screen, a wearable device (e.g., a glove, a watch, etc.), a controller, variations or combinations of one or more of the same, and/or any other type or form of electronic input mechanism. Content player 920 also includes a storage device 1140 coupled to communication infrastructure 1102 via a storage interface 1138. Storage device 1140 generally represents any type or form of storage device or medium capable of storing data and/or other computer-readable instructions.
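As a small illustration of the audio/video prioritization trade-off mentioned above, a bandwidth budget might be split between the two streams as sketched below; the split ratios are assumptions, not values from this disclosure.

```typescript
// Divide a shared bandwidth budget between video and audio, favoring video
// by default, or favoring audio when audio quality is prioritized.
function splitBandwidth(
  totalKbps: number,
  prioritizeVideo: boolean
): { videoKbps: number; audioKbps: number } {
  const videoShare = prioritizeVideo ? 0.9 : 0.7; // assumed ratios
  return {
    videoKbps: Math.floor(totalKbps * videoShare),
    audioKbps: Math.floor(totalKbps * (1 - videoShare)),
  };
}
```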
For example, storage device 1140 is a magnetic disk drive, a solid-state drive, an optical disk drive, a flash drive, or the like. Storage interface 1138 generally represents any type or form of interface or device for transferring data between storage device 1140 and other components of content player 920. Many other devices or subsystems are included in or connected to content player 920. Conversely, one or more of the components and devices illustrated in FIG. 11 need not be present to practice the embodiments described and/or illustrated herein. The devices and subsystems referenced above are also interconnected in different ways from that shown in FIG. 11. Content player 920 is also employed in any number of software, firmware, and/or hardware configurations. For example, one or more of the example embodiments disclosed herein are encoded as a computer program (also referred to as computer software, software applications, computer-readable instructions, or computer control logic) on a computer-readable medium. The term “computer-readable medium,” as used herein, refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, etc.), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other digital storage systems. A computer-readable medium containing a computer program is loaded into content player 920. All or a portion of the computer program stored on the computer-readable medium is then stored in memory 924 and/or storage device 1140. When executed by processor 922, a computer program loaded into memory 924 causes processor 922 to perform and/or be a means for performing the functions of one or more of the example embodiments described and/or illustrated herein. Additionally or alternatively, one or more of the example embodiments described and/or illustrated herein are implemented in firmware and/or hardware. For example, content player 920 is configured as an Application Specific Integrated Circuit (ASIC) adapted to implement one or more of the example embodiments disclosed herein. As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor. In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory. 
In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor. Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks. In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive data to be transformed, transform the data, output a result of the transformation to sample one or more pixels, use the result of the transformation to blur the pixels, and store the result of the transformation after presenting the blurred pixels. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device. In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems. The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein are shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed. 
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure. Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.” 16904554 netflix, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:32AM Apr 27th, 2022 08:32AM Netflix Consumer Services General Retailers
nasdaq:nflx Netflix Apr 26th, 2022 12:00AM Oct 30th, 2019 12:00AM https://www.uspto.gov?id=US11317158-20220426 Video playback in an online streaming environment A computer-implemented method of displaying video content includes, based on an input to transition playback of a video content item from a first media player that is instantiated in a user interface to a second media player that is instantiated in the user interface, determining a current value of a first state descriptor associated with the first media player; setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor; and after setting the value of the second state descriptor, causing the second media player to begin playback of the video content item, wherein the second media player begins playing the video content item based on the value of the second state descriptor. 11317158 1. A method, comprising: based on user input to transition between playback of first video content in a first media player included in a plurality of media players concurrently displayed in a user interface and playback of second video content in a second media player included in the plurality of media players, determining a current value of a first state descriptor associated with the first media player; setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor; and after setting the value of the second state descriptor, causing the second media player to begin the playback of the second video content, wherein the second media player begins the playback of the second video content based on the value of the second state descriptor. 2. The method of claim 1, further comprising, after setting the value of the second state descriptor to match the current value of the first state descriptor, causing the first media player to stop playing the first video content. 3. The method of claim 1, wherein the first playback state descriptor is associated with a first attribute of a current playback state of the first media player and the second playback state descriptor is associated with a second attribute of a current playback state of the second media player. 4. The method of claim 3, wherein the first attribute and the second attribute correspond to a same characteristic of playback of a video content item. 5. The method of claim 3, wherein the first attribute of the first media player comprises one of a mute setting, a volume level, and a current play time associated with playback of the first video content on the first media player or the second attribute of the second media player comprises one of a mute setting, a volume level, and a current play time associated with playback of the second video content on the second media player. 6. The method of claim 3, wherein the first attribute is included in a first array of attributes that are each associated with the current playback state of the first media player and the second attribute is included in a second array of attributes that are each associated with the current playback state of the second media player. 7. The method of claim 1, wherein the user input is received via the user interface. 8. The method of claim 1, wherein determining the current value of the first state descriptor associated with the first media player is further based on a predetermined value for a third state descriptor associated with the second media player. 9. 
The method of claim 8, further comprising, prior to determining the current value of the first state descriptor associated with the first media player, determining that a current value for the third state descriptor equals the predetermined value. 10. The method of claim 1, wherein the first playback state descriptor is associated with an attribute of a current playback state of the first video content and the second playback state descriptor is associated with the attribute of the current playback state of the second video content. 11. The method of claim 1, wherein the first state descriptor is associated with a volume level of the first media player and the second state descriptor is associated with a volume level of the second media player. 12. The method of claim 1, further comprising causing at least a portion of the first media player and at least a portion of the second media player to be displayed concurrently by a same display device. 13. A non-transitory computer readable medium storing instructions that, when executed by a processor, cause the processor to perform the steps of: based on user input to transition between playback of first video content in a first media player included in a plurality of media players concurrently displayed in a user interface and playback of second video content in a second media player included in the plurality of media players, determining a current value of a first state descriptor associated with the first media player; setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor; and after setting the value of the second state descriptor, causing the second media player to begin the playback of the second video content, wherein the second media player begins the playback of the second video content based on the value of the second state descriptor. 14. The non-transitory computer readable medium of claim 13, further comprising, after setting the value of the second state descriptor to match the current value of the first state descriptor, causing the first media player to stop playing the first video content. 15. The non-transitory computer readable medium of claim 13, wherein the first playback state descriptor is associated with a first attribute of a current playback state of the first media player and the second playback state descriptor is associated with a second attribute of a current playback state of the second media player. 16. The non-transitory computer readable medium of claim 15, wherein the first attribute and the second attribute correspond to a same characteristic of playback of a video content item. 17. The non-transitory computer readable medium of claim 13, wherein the first state descriptor is associated with a volume level of the first media player and the second state descriptor is associated with a volume level of the second media player. 18.
A system, comprising: a memory that stores instructions; and a processor that is coupled to the memory and is configured to perform the steps of, upon executing the instructions: based on user input to transition between playback of first video content in a first media player included in a plurality of media players concurrently displayed in a user interface and playback of second video content in a second media player included in the plurality of media players, determining a current value of a first state descriptor associated with the first media player; setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor; and after setting the value of the second state descriptor, causing the second media player to begin the playback of the second video content, wherein the second media player begins the playback of the second video content based on the value of the second state descriptor. 19. The system of claim 18, wherein the first playback state descriptor is associated with a first attribute of a current playback state of the first media player and the second playback state descriptor is associated with a second attribute of a current playback state of the second media player. 19 CROSS-REFERENCE TO RELATED APPLICATION This application claims priority benefit of the U.S. Provisional Patent Application titled, “CONTENT PREVIEW INTERFACE,” filed on Nov. 2, 2018 and having Ser. No. 62/755,326. The subject matter of this related application is hereby incorporated herein by reference. BACKGROUND Field of the Various Embodiments The present disclosure relates generally to playing streaming video and, more specifically, to video playback in an online streaming environment. Description of the Related Art With on-demand video streaming becoming one of the most popular forms of media consumption, the number of titles available to the on-demand viewer has increased dramatically. However, despite the wide variety of titles now available, on-demand viewers frequently have difficulty locating content that they find interesting. For example, by simply scanning a list of possible video titles to view, the on-demand viewer is unlikely to make a satisfactory selection; even if the list of titles is sorted by genre, selection of a particular title in this way is little more than a random process unless the subject matter of that particular title is already known to the on-demand viewer. Similarly, reading a plot summary for each title option being considered is an ineffective way for the on-demand viewer to locate content of interest; searching for a suitable video title in this way is very time-consuming and generally requires the user to put significant effort into researching each and every title that sounds at all interesting. In light of the above, video streaming services currently facilitate the selection process for a prospective on-demand viewer by presenting to the viewer a static representative image for each available title. Each static image serves the same function for the associated title as the cover art of a book, video tape, or digital versatile disk (DVD), promoting the title while visually providing some insight into the subject matter of the title. For example, a static image can quickly convey one or more of the genre, subject matter, starring actors, or other attributes of a title. Additional textual information, such as a summary sentence or a tag line, may also be included in the static image.
Generally, a plurality of such static images is presented in an array, so that the on-demand viewer can visually browse through a large number of possible video titles quickly. One drawback of relying on an array of such images for enabling video selection is that a static, box-art-like image cannot convey much detailed information about a particular video title. As a result, a prospective viewer is less likely to be drawn in by such a selection process and select a video title. By contrast, trailer videos and other summary videos are well known to give significant insight into the subject matter of a video title in a short time. Consequently, summary videos are more likely to successfully gain the interest of the prospective viewer and influence the viewer to select the video title associated with the trailer. However, summary videos are data-intensive, and downloading of such videos can overload or slow the streaming connection employed by the prospective viewer. In addition, the various user interactions that are enabled by streaming video to a computer can further complicate the streaming process. For example, requesting preview videos for multiple available titles can overload the download process and impact other processes running in parallel on the computer. As the foregoing illustrates, what is needed in the art are systems and methods for more effectively displaying video previews. SUMMARY A method of displaying video content includes, based on an input to transition playback of a video content item from a first media player that is instantiated in a user interface to a second media player that is instantiated in the user interface, determining a current value of a first state descriptor associated with the first media player; setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor; and after setting the value of the second state descriptor, causing the second media player to begin playback of the video content item, wherein the second media player begins playing the video content item based on the value of the second state descriptor. At least one technological improvement of the disclosed embodiments is that coordination of video playback is enabled between multiple UI components that can each perform video playback simultaneously. Consequently, downloading multiple video data streams when transitioning video playback from one UI component to another UI component can be avoided. In addition, multiple UI components are prevented from simultaneously performing playback of video content. BRIEF DESCRIPTION OF THE DRAWINGS So that the manner in which the above recited features of the various embodiments can be understood in detail, a more particular description of the inventive concepts, briefly summarized above, may be had by reference to various embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of the inventive concepts and are therefore not to be considered limiting of scope in any way, and that there are other equally effective embodiments. FIG. 1 illustrates a network infrastructure, according to various embodiments; FIG. 2 is a more detailed illustration of the content server of FIG. 1, according to various embodiments; FIG. 3 is a more detailed illustration of the control server of FIG. 1, according to various embodiments; FIG. 4 is a more detailed illustration of the endpoint device of FIG.
1, according to various embodiments; FIG. 5 illustrates a graphical user interface (GUI) generated by the interface engine of FIG. 4, according to various embodiments; FIGS. 6A-6G illustrate the GUI of FIG. 5 at various points in a process of displaying preview videos of available content items, according to various embodiments; FIG. 7 is a block diagram of a preview playback application configured to perform various embodiments; FIG. 8 is a schematic illustration of the playback state database of FIG. 7, according to an embodiment; FIG. 9 sets forth a flowchart of method steps for updating a playback state database, according to various embodiments; and FIG. 10 sets forth a flowchart of method steps for transitioning playback of video content from a first user interface component to a second user interface component, according to various embodiments. DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the various embodiments. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details. Overview Trailer videos and other summary videos give insight into the subject matter of a video title in a short time. Thus, summary videos are more likely to successfully gain the interest of the prospective viewer and influence the viewer to select the video title associated with the trailer. However, summary videos are data-intensive, and downloading of such videos can overload or slow the streaming connection employed by the prospective viewer. Further, the media players integrated into a user interface for streaming videos generally operate independently from each other, where each media player is unaware of the playback state of other media players associated with the user interface. Consequently, coordination between such media players can be problematic; a user requesting a video preview for multiple video titles can result in multiple videos being played concurrently. In addition, when playback of a preview video transitions from a first media player to a second media player (for example, from a lower resolution media player to a higher resolution media player), the second media player begins playing the preview video from the beginning, which can be frustrating to the user. The disclosed techniques enable a smooth transition between the playback of a preview video in one media player implemented in a streaming video user interface and another media player implemented in the user interface. As a result, a user is more likely to be drawn in by such a selection process and select a video title. In various embodiments, playback state descriptors are employed to track certain attributes of each user interface (UI) component (such as a media player) instantiated in a user interface. These attributes include the current play time of a video being played by the UI component, a sound volume level for a video being played by the UI component, a mute state of the UI component, and the like. The current values of such playback state descriptors for multiple UI components are stored in a client-side database. When transitioning playback of video content from a first UI component (such as a first media player) to a second UI component (such as a second media player), a user interface engine determines a current value of a playback state descriptor of the first UI component from the database.
The user interface engine then sets the corresponding playback state descriptor of the second UI component to match the current value of the playback state descriptor of the first UI component and begins playback of the video content with the second UI component using the current value of the playback state descriptor of the first UI component. As a result, the second UI component begins playback of the video content in a playback state that is the same as the playback state of the first UI component when playback of the video content is transitioned from the first UI component. For example, in one embodiment, the second UI component begins playback of the video content at the point in time at which the first UI component stopped playback of the video content. In another embodiment, the second UI component begins playback of the video content at the same sound level employed by the first UI component when performing playback of the video content. Advantageously, coordination of video playback is enabled between multiple UI components that can each perform video playback simultaneously. Consequently, multiple UI components are prevented from simultaneously performing playback of video content. Further, with only a single video being played back at a given time, downloading multiple video data streams is avoided when transitioning video playback from one UI component to another UI component. System Configuration FIG. 1 illustrates a network infrastructure 100, according to various embodiments. As shown, the network infrastructure 100 includes content servers 110, control server 120, and endpoint devices 115, each of which is connected via a communications network 105. Network infrastructure 100 is configured to distribute content to content servers 110, and such content is then distributed on-demand to endpoint devices 115. Each endpoint device 115 communicates with one or more content servers 110 (also referred to as “caches” or “nodes”) via the network 105 to download content, such as textual data, graphical data, audio data, video data, and other types of data. The downloadable content, also referred to herein as a “file,” is then presented to a user of one or more endpoint devices 115. In various embodiments, the endpoint devices 115 may include computer systems, set top boxes, mobile computers, smartphones, tablets, console and handheld video game systems, digital video recorders (DVRs), DVD players, connected digital TVs, dedicated media streaming devices (e.g., the Roku® set-top box), and/or any other technically feasible computing platform that has network connectivity and is capable of presenting content, such as text, images, video, and/or audio content, to a user. Each content server 110 may include a web-server, database, and server application 217 (described below) configured to communicate with the control server 120 to determine the location and availability of various files that are managed by the control server 120. Each content server 110 may further communicate with cloud services 130 and one or more other content servers 110 in order to “fill” each content server 110 with copies of various files. In addition, content servers 110 may respond to requests for files received from endpoint devices 115. The files may be distributed from the content server 110 or via a broader content distribution network. In some embodiments, the content servers 110 enable users to authenticate (e.g., using a username and password) in order to access files stored on the content servers 110.
Although only a single control server 120 is shown in FIG. 1, in various embodiments multiple control servers 120 may be implemented to track and manage files. In various embodiments, the cloud services 130 may include an online storage service (e.g., Amazon® Simple Storage Service, Google® Cloud Storage, etc.) in which a catalog of files, including thousands or millions of files, is stored and accessed in order to fill the content servers 110. Cloud services 130 also may provide compute or other processing services. Although only a single cloud services 130 is shown in FIG. 1, in various embodiments multiple cloud services 130 may be implemented. FIG. 2 is a more detailed illustration of content server 110 of FIG. 1, according to various embodiments. As shown, the content server 110 includes, without limitation, a central processing unit (CPU) 204, a system disk 206, an input/output (I/O) devices interface 208, a network interface 210, an interconnect 212, and a system memory 214. The CPU 204 is configured to retrieve and execute programming instructions, such as server application 217, stored in the system memory 214. Similarly, the CPU 204 is configured to store application data (e.g., software libraries) and retrieve application data from the system memory 214. The interconnect 212 is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU 204, the system disk 206, I/O devices interface 208, the network interface 210, and the system memory 214. The I/O devices interface 208 is configured to receive input data from I/O devices 216 and transmit the input data to the CPU 204 via the interconnect 212. For example, I/O devices 216 may include one or more buttons, a keyboard, a mouse, and/or other input devices. The I/O devices interface 208 is further configured to receive output data from the CPU 204 via the interconnect 212 and transmit the output data to the I/O devices 216. The system disk 206 may include one or more hard disk drives, solid state storage devices, or similar storage devices. The system disk 206 is configured to store non-volatile data such as files 218 (e.g., audio files, video files, subtitles, application files, software libraries, etc.). The files 218 can be retrieved by one or more endpoint devices 115 via the network 105. In some embodiments, the network interface 210 is configured to operate in compliance with the Ethernet standard. The system memory 214 includes a server application 217 configured to service requests for files 218 received from endpoint device 115 and other content servers 110. When the server application 217 receives a request for a file 218, the server application 217 retrieves the corresponding file 218 from the system disk 206 and transmits the file 218 to an endpoint device 115 or a content server 110 via the network 105. Files 218 include a plurality of digital visual content items, such as videos and still images. In addition, files 218 may include textual content associated with such digital visual content items, such as movie metadata. In some embodiments, a plurality of files 218 may be associated with a single video title. For example, in one such embodiment, for a specific video title that is available via content server 110 (such as a movie), there are multiple video streaming files associated with that specific video title, where each file is for a different resolution of the specific video title. 
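A record along the following lines could associate one video title with its per-resolution streaming files and, as described next, per-resolution preview files. The layout, title, and paths are illustrative assumptions in TypeScript, not part of this disclosure.

```typescript
// One title, several files 218: a streaming file per resolution, plus
// preview files per resolution (hypothetical paths).
interface TitleFiles {
  title: string;
  streams: { resolution: string; path: string }[];
  previews: { resolution: string; path: string }[];
}

const exampleTitle: TitleFiles = {
  title: "Example Movie",
  streams: [
    { resolution: "1920x1080", path: "files/example_1080p.mp4" },
    { resolution: "1280x720",  path: "files/example_720p.mp4"  },
    { resolution: "640x480",   path: "files/example_480p.mp4"  },
  ],
  previews: [
    { resolution: "1280x720", path: "files/example_preview_720p.mp4" },
    { resolution: "640x480",  path: "files/example_preview_480p.mp4" },
  ],
};
```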
Further, in such embodiments, the multiple video streaming files can further include multiple preview video files that are each configured for a different resolution of a preview video for the specific video title. FIG. 3 is a more detailed illustration of control server 120 of FIG. 1, according to various embodiments. As shown, the control server 120 includes, without limitation, a central processing unit (CPU) 304, a system disk 306, an input/output (I/O) devices interface 308, a network interface 310, an interconnect 312, and a system memory 314. The CPU 304 is configured to retrieve and execute programming instructions, such as control application 317, stored in the system memory 314. Similarly, the CPU 304 is configured to store application data (e.g., software libraries) and retrieve application data from the system memory 314 and a database 318 stored in the system disk 306. The interconnect 312 is configured to facilitate transmission of data between the CPU 304, the system disk 306, I/O devices interface 308, the network interface 310, and the system memory 314. The I/O devices interface 308 is configured to transmit input data and output data between the I/O devices 316 and the CPU 304 via the interconnect 312. The system disk 306 may include one or more hard disk drives, solid state storage devices, and the like. The system disk 306 is configured to store a database 318 of information associated with content servers 110, cloud services 130, and files 218. The system memory 314 includes a control application 317 configured to access information stored in the database 318 and process the information to determine the manner in which specific files 218 will be replicated across content servers 110 included in the network infrastructure 100. The control application 317 may further be configured to receive and analyze performance characteristics associated with one or more of the content servers 110 and/or endpoint devices 115. FIG. 4 is a more detailed illustration of the endpoint device 115 of FIG. 1, according to various embodiments. As shown, the endpoint device 115 may include, without limitation, a CPU 410, a graphics subsystem 412, an I/O device interface 414, a mass storage unit 416, a network interface 418, an interconnect 422, and a memory subsystem 430. In some embodiments, the CPU 410 is configured to retrieve and execute programming instructions stored in the memory subsystem 430. Similarly, the CPU 410 is configured to store and retrieve application data (e.g., software libraries) residing in the memory subsystem 430. The interconnect 422 is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU 410, graphics subsystem 412, I/O devices interface 414, mass storage 416, network interface 418, and memory subsystem 430. In some embodiments, the graphics subsystem 412 is configured to generate frames of video data and transmit the frames of video data to display device 450. In some embodiments, the graphics subsystem 412 is also configured to generate a graphical user interface (GUI) and transmit the GUI to display device 450. In some embodiments, the graphics subsystem 412 may be integrated into an integrated circuit, along with the CPU 410. The display device 450 may comprise any technically feasible means for generating an image for display.
For example, the display device 450 may be fabricated using liquid crystal display (LCD) technology, cathode-ray tube technology, and light-emitting diode (LED) display technology. An input/output (I/O) device interface 414 is configured to receive input data from user I/O devices 452 and transmit the input data to the CPU 410 via the interconnect 422. For example, user I/O devices 452 may comprise one or more buttons or other pointing devices, such as the “up,” “down,” “left,” “right,” and “select” buttons on a television remote or video game console. The I/O device interface 414 also includes an audio output unit configured to generate an electrical audio output signal. User I/O devices 452 include an audio speaker configured to generate an acoustic output in response to the electrical audio output signal. In alternative embodiments, the display device 450 may include the speaker. Examples of suitable devices known in the art that can display video frames and generate an acoustic output include televisions, smartphones, smartwatches, electronic tablets, and the like. A mass storage unit 416, such as a hard disk drive or flash memory storage drive, is configured to store non-volatile data. A network interface 418 is configured to transmit and receive packets of data via the network 105. In some embodiments, the network interface 418 is configured to communicate using the well-known Ethernet standard. The network interface 418 is coupled to the CPU 410 via the interconnect 422. In some embodiments, the memory subsystem 430 includes programming instructions and application data that comprise an operating system 432, an interface engine 434, and a playback application 436. The operating system 432 performs system management functions such as managing hardware devices including the network interface 418, mass storage unit 416, I/O device interface 414, and graphics subsystem 412. The operating system 432 also provides process and memory management models for the interface engine 434 and the playback application 436. The interface engine 434 provides a mechanism, such as a graphical user interface (GUI) based on a window and object metaphor, for user interaction with endpoint device 115. Persons skilled in the art will recognize the various operating systems and user interfaces that are well known in the art and suitable for incorporation into the endpoint device 115. In some embodiments, the playback application 436 is configured to request and receive content from the content servers 110 via the network interface 418. Further, the playback application 436 is configured to interpret the content and present the content via display device 450 and/or user I/O devices 452. User Interface with Embedded Video Players FIG. 5 illustrates a graphical user interface (GUI) 500 generated by interface engine 434 of FIG. 4, according to various embodiments. GUI 500 can be displayed on display device 450 of endpoint device 115, particularly when endpoint device 115 is configured as a laptop computer, desktop computer, or other computing system. As shown, GUI 500 includes various display canvases 510 that are each configured to display information pertaining to a specific video content item, such as a particular video title. In some embodiments, each display canvas 510 includes an instance of a media player. Thus, each display canvas 510 is a fixed or moveable display region that can display a still image or a preview video for a specific video content item.
Examples of video content items include movies, television series, documentaries, specific sporting events, and the like. Display canvases 510 can include different configurations of display regions. For example, display canvases 510 include a single large-presentation canvas, referred to herein as a “billboard canvas” 511, a plurality of array-sized canvases, referred to herein as “title card canvases” 512, or a combination thereof. In various embodiments, billboard canvas 511 is larger than title card canvases 512 and is configured to automatically begin playing a preview video for a predetermined video content item, such as a newly released or otherwise featured video content item. In some embodiments, billboard canvas 511 is sized and positioned more prominently than any of title card canvases 512 to attract the attention of an on-demand user. For example, in some embodiments, billboard canvas 511 is initially or permanently positioned in a top region of GUI 500. Additionally or alternatively, in some embodiments, a billboard canvas 511 can be disposed between arrays of title card canvases 512. In such embodiments, as a user scrolls downward within GUI 500, at least one billboard canvas 511 is visible to the on-demand user. By contrast, title card canvases 512 are typically displayed in an array 530 and are generally configured to enable the on-demand user to visually scan the title card canvases 512 of a large number of available video content items. For example, in the embodiment illustrated in FIG. 5, title card canvases 512 are arranged in multiple rows 531, where each row 531 includes title card canvases 512 of a specific genre or category. In addition to display canvases 510, GUI 500 displays a cursor 501 and, in some embodiments, additional navigation tools, such as a side-bar menu and/or drop-down menus (not shown). In some embodiments, moving cursor 501 to an edge of GUI 500 causes scrolling of the contents of GUI 500 in an appropriate direction. For example, in such embodiments, positioning cursor 501 at or near a bottom edge 502 of GUI 500 can cause display canvases 510 to move upward, revealing additional display canvases 510. FIGS. 6A-6G illustrate GUI 500 at various points in a process of displaying preview videos of available content items, according to various embodiments. In FIG. 6A, an on-demand user has started playback application 436 and GUI 500 is displayed to the user. In the embodiment illustrated in FIGS. 6A-6G, upon start-up of playback application 436, cursor 501 is displayed in a neutral position, i.e., cursor 501 is not positioned within the boundaries of any of title card canvases 512. As a result of GUI 500 being started up, a preview video for a featured video content item begins playing in billboard canvas 511. Because cursor 501 is in a neutral position, no other display actions are implemented in GUI 500 and title card canvases 512 each display a static image associated with a different video content item that is currently available for selection. In FIG. 6B, the on-demand user has moved cursor 501 to a particular title card canvas 612 and hovers cursor 501 over title card canvas 612. Title card canvas 612 is a specific title card canvas associated with a video content item of interest to the on-demand user.
As shown, until cursor 501 has hovered over title card canvas 612 for a predetermined hover time interval (e.g., 1 second), the preview video for the featured video content item continues playing in billboard canvas 511 and a static image (not shown) associated with the video content item of interest continues to be displayed in title card canvas 612. In FIG. 6C, the on-demand user has hovered cursor 501 over title card canvas 612 for the predetermined hover time interval. In response, title card canvas 612 appears to expand to larger video preview canvas 613, which then begins playback of a preview video for the video content item of interest. For example, in some embodiments, a series of animated images is displayed at or in lieu of title card canvas 612, causing the still image displayed by title card canvas 612 to appear to expand in size to the dimensions of video preview canvas 613. Simultaneously, or at approximately the same time, billboard canvas 511 halts or pauses playback of the preview video for the featured video content item. When billboard canvas 511 pauses playback of the preview video for the featured video content item, billboard canvas 511 displays a still image that includes the last frame of the preview video for the featured video content item. When billboard canvas 511 halts playback of the preview video, billboard canvas 511 displays a predetermined still image associated with the featured video content item. In either case, in FIG. 6C, billboard canvas 511 is no longer performing playback of the preview video for the featured video content item. Therefore, according to the embodiment illustrated in FIGS. 6A-6G, even though billboard canvas 511 is configured to automatically play a preview video without user input, when a user input indicates a request for a different canvas to play back a preview video (e.g., by hovering cursor 501 over title card canvas 612), the instance of the media player included in billboard canvas 511 halts or pauses playback. Thus, playback of a single video takes place at one time in GUI 500, and the on-demand viewer is not confused or distracted by having multiple preview videos playing simultaneously. Further, the necessity for multiple streaming videos to be downloaded simultaneously to endpoint device 115 is avoided. In alternative embodiments, in response to the on-demand user hovering cursor 501 over title card canvas 612, title card canvas 612 does not appear to expand to larger video preview canvas 613. Instead, in such embodiments, video preview canvas 613 has the same dimensions as title card canvas 612, and begins playback of the preview video for the video content item of interest in the same portion of GUI 500 previously occupied by title card canvas 612. Therefore, in such embodiments, video preview canvas 613 is instantiated in substantially the same location as title card canvas 612. In yet other alternative embodiments, in response to the on-demand user hovering cursor 501 over title card canvas 612, an instance of a media player already included in or associated with title card canvas 612 begins playback of the preview video for the video content item of interest. In FIG. 6D, the on-demand user has performed an input operation indicating a request for more detailed information regarding the video content item associated with title card canvas 612 or video preview canvas 613. As shown, when the input operation is performed, the preview video for the video content item of interest continues playing in video preview canvas 613.
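The hover-triggered hand-off described above might be driven by a timer along these lines; the 1-second interval echoes the example above, while the callback names are assumptions.

```typescript
// Start a preview only after the cursor has hovered for the predetermined
// interval, and pause the billboard so a single video plays at a time.
const HOVER_INTERVAL_MS = 1000;
let hoverTimer: ReturnType<typeof setTimeout> | undefined;

function onTitleCardHoverStart(
  startPreview: () => void,
  pauseBillboard: () => void
): void {
  hoverTimer = setTimeout(() => {
    pauseBillboard(); // halt or pause the billboard's preview video
    startPreview();   // expand the title card and begin its preview
  }, HOVER_INTERVAL_MS);
}

function onTitleCardHoverEnd(): void {
  if (hoverTimer !== undefined) clearTimeout(hoverTimer); // cursor left early
}
```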
The input operation performed in FIG. 6D can be any suitable input operation compatible with endpoint device 115 and any of the input devices associated therewith. For example, in some embodiments, the input operation is a left click with a computer mouse on a more details icon 601 that is displayed in video preview canvas 613. Alternatively, in some embodiments, the input operation can be one or more of: hovering a portion of cursor 501 over more details icon 601; selection of a more details option from a drop-down menu (not shown), such as a drop-down menu that is displayed in response to a right click with a computer mouse on a portion of video preview canvas 613; depression of a hot key or combination of hot keys on a computer keyboard while cursor 501 hovers over a portion of more details icon 601 and/or video preview canvas 613; and the like. In embodiments in which endpoint device 115 comprises a mobile computing device (such as a smartphone), the input operation can include any input operation compatible with a mobile device interface (such as a screen touch or swipe). In embodiments in which endpoint device 115 comprises a television (such as a smart TV), the input operation can include any input operation compatible with a television interface (such as a TV remote selection or input). In FIG. 6E, in response to the input operation indicating the request for more detailed information, a feature-sized video preview canvas 614 is implemented in GUI 500 that continues playback of the preview video for the video content item of interest. For example, in some embodiments, feature-sized video preview canvas 614 is configured with dimensions, textual content, and/or control icons similar to those of billboard canvas 511. Alternatively, in some embodiments, feature-sized video preview canvas 614 replaces, is overlaid on, and/or otherwise obscures video preview canvas 613. Simultaneously, or at approximately the same time, video preview canvas 613 halts or pauses playback of the preview video for the video content item of interest. Therefore, according to the embodiment illustrated in FIGS. 6A-6G, even though video preview canvas 613 is configured to play back the preview video for the video content item of interest, video preview canvas 613 halts or pauses playback of the preview video for the video content item in response to the input operation indicating the request for more detailed information. Thus, playback of a single video takes place at one time in GUI 500. In FIGS. 6C and 6D, a first instance of the preview video for the video content item of interest is played in a first instance of a media player associated with video preview canvas 613, and in FIG. 6E the first instance of the media player halts or pauses the first instance of the preview video in a first playback state. By contrast, in FIG. 6E, a second instance of a media player associated with feature-sized video preview canvas 614 begins playback of a second instance of the preview video for the video content item. According to various embodiments, the second instance of the preview video begins playback in feature-sized video preview canvas 614 in the first playback state. That is, the second instance of the media player associated with feature-sized video preview canvas 614 begins playback of the second instance of the preview video in the same playback state as the first instance of the preview video when the first instance of the preview video is halted or paused in video preview canvas 613. In FIG.
6F, the on-demand user has performed an input operation indicating an interest in the video content item associated with a display canvas 510 other than feature-sized video preview canvas 614. In some embodiments, the input operation performed in FIG. 6F can be any suitable input operation compatible with endpoint device 115 and any of the input devices associated therewith, including hovering cursor 501 over a particular display canvas 510, selecting a particular display canvas 510 with a left click with a computer mouse, and the like. In the embodiment illustrated in FIG. 6F, the on-demand user has hovered at least a portion of cursor 501 over a portion of billboard canvas 511, which has been partially scrolled out of GUI 500. Thus, the on-demand user has indicated an interest in the video content item associated with billboard canvas 511. As shown, when the input operation is performed, the preview video for the video content item associated with feature-sized video preview canvas 614 continues to be played in feature-sized video preview canvas 614. In FIG. 6G, in response to the input operation indicating an interest in the video content item associated with billboard canvas 511, billboard canvas 511 is repositioned within GUI 500. In addition, the preview video for the featured video content item associated with billboard canvas 511 begins playing in billboard canvas 511. For example, in some embodiments, a media player associated with billboard canvas 511 begins playback of the preview video. According to some embodiments, playback of the preview video for the featured video content item begins in the same playback state as the playback state of the preview video when last paused or halted. Consequently, the on-demand user, having already seen a beginning portion of the preview video, is not forced to re-watch the beginning portion of the preview video. In addition, the audio state of the preview video, e.g., mute state or volume level, is the same as the audio state of the preview video most recently played in GUI 500. In FIG. 6G, the preview video most recently played in GUI 500 is the preview video for the video content item that was being played in feature-sized video preview canvas 614. Simultaneously, or at approximately the same time, feature-sized video preview canvas 614 halts or pauses playback of the preview video for the video content item of interest. Thus, playback of a single video takes place at one time in GUI 500. Generally, the media players implemented in GUI 500 are instantiated by interface engine 434 and include the media player associated with billboard canvas 511, the media player associated with video preview canvas 613, and the media player associated with feature-sized video preview canvas 614, among others. Further, each such media player generally operates independently from the other media players implemented in GUI 500. Consequently, coordination between such media players as described above can be problematic, since each media player is unaware of the playback state of the other media players associated with GUI 500. According to various embodiments, a seamless transition is enabled between playback of a preview video in one media player implemented in GUI 500 and playback in another media player implemented in GUI 500.
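In essence, such a transition is a handoff: pause the outgoing player, copy its playback and audio state onto the incoming player, and start the incoming player. The TypeScript sketch below illustrates that behavior against a minimal, assumed player interface (an HTMLVideoElement would satisfy it); it is not the patent's implementation, which is described next.

interface PlayerLike {
  currentTime: number;  // play time, in seconds
  muted: boolean;       // mute state
  volume: number;       // sound volume level
  pause(): void;
  play(): Promise<void>;
}

async function handoff(from: PlayerLike, to: PlayerLike): Promise<void> {
  from.pause();                       // only a single preview plays at one time
  to.currentTime = from.currentTime;  // resume where the previous player stopped
  to.muted = from.muted;              // carry over the audio state:
  to.volume = from.volume;            // mute state and volume level
  await to.play();
}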
Specifically, when an on-demand user requests playback of a preview video to transition from a first display canvas 510 to a second display canvas 510, a media player associated with the second display canvas 510 performs playback of the preview video based on the playback state of the media player performing playback of the preview video in the first display canvas 510. One such embodiment is illustrated in FIG. 7. Preview Playback Application FIG. 7 is a block diagram of preview playback application 700, according to various embodiments. As shown, preview playback application 700 interacts with interface engine 434 and is implemented as an element of playback application 436 (shown in FIG. 4). Alternatively, in some embodiments, preview playback application 700 is implemented as a client-side application that is separate from playback application 436 and runs in parallel with playback application 436 on endpoint device 115. Preview playback application 700 includes a playback state database 720, preview playback logic 730, and a UI message log 740. Interface engine 434 is configured to instantiate or generate UI components of a GUI, such as canvases 510 of GUI 500, and to destroy such UI components. Further, interface engine 434 is configured to interact with preview playback application 700. More specifically, interface engine 434 is configured to read out data from playback state database 720, for example to modify one or more characteristics of playback by a UI component. In addition, interface engine 434 is configured to send messages to UI message log 740, for example in response to user inputs received via a UI component. Playback state database 720 stores values for playback state descriptors associated with the various display canvases included in a GUI generated by interface engine 434. Specifically, for each display canvas or associated media player that is included in the GUI, playback state database 720 stores a settable value for one or more playback state descriptors. Thus, for GUI 500 of FIGS. 5-6G, playback state database 720 stores playback state descriptor values for each display canvas 510 that currently is playing back or has played back at least a portion of a preview video. One embodiment of playback state database 720 is described below in conjunction with FIG. 8. FIG. 8 is a schematic illustration of playback state database 720, according to an embodiment. Playback state database 720 includes a plurality of descriptor arrays 820, each associated with a different display canvas generated by interface engine 434. As shown, each descriptor array 820 includes a display canvas identification (ID) 821, a plurality of playback state descriptors 822, and a plurality of values 823 that are each associated with a respective playback state descriptor 822. In some embodiments, when playback of a preview video is started in a particular display canvas, a descriptor array 820 is generated for that particular display canvas, media player, or other UI component. In such embodiments, some descriptor arrays 820 can be associated with one type of UI component, such as a video preview canvas 613 in FIG. 6C, other descriptor arrays 820 can be associated with another type of UI component, such as a billboard canvas 511 in FIG. 5, and still other descriptor arrays 820 can be associated with yet another type of UI component, such as a feature-sized video preview canvas 614 in FIG. 6E.
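For illustration, descriptor arrays 820 and playback state database 720 map naturally onto a keyed record of descriptor values. The TypeScript shapes below are one plausible rendering; the descriptor fields mirror the FIG. 8 example described below, while the exact types and names are assumptions.

interface DescriptorArray {
  displayCanvasId: string;  // display canvas ID 821
  descriptors: {            // playback state descriptors 822 with their values 823
    muteState: "ON" | "OFF";
    playPauseState: "Playing" | "Paused" | "Ended";
    soundVolumeLevel: number;  // e.g., 0.55 for 55%
    playTimeSeconds: number;   // e.g., 12.5
  };
}

interface GeneralDescriptorArray {  // general descriptor array 850
  videoMuteState: "ON" | "OFF";
  videoSoundLevel: number;
}

interface PlaybackStateDatabase {
  byCanvas: Map<string, DescriptorArray>;  // one descriptor array per canvas or media player
  general: GeneralDescriptorArray;         // applies to whichever canvas is currently playing
}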
Display canvas ID 821 includes a value that uniquely identifies a particular UI component that is included in a GUI generated by interface engine 434. In some embodiments, display canvas ID 821 includes one or more values indicating a specific type of UI component that is associated with descriptor array 820. In some embodiments, display canvas ID 821 includes one or more values indicating a specific video content item that is being played by the UI component or has most recently been played by the UI component. In some embodiments, display canvas ID 821 may include one or more values representing other information or attributes of the UI component and/or of the specific video content item that is being played by the UI component, such as the location of the UI component in the GUI generated by interface engine 434, metadata associated with the specific video content item, etc. Each playback state descriptor 822 in descriptor array 820 represents an attribute of the playback state of the UI component associated with that descriptor array 820. Thus, by changing a value 823 associated with a particular playback state descriptor 822 in a particular descriptor array 820, a characteristic of playing video content with the UI component associated with the particular descriptor array 820 changes. In the embodiment illustrated in FIG. 8, playback state descriptors 822 include mute state (e.g., ON or OFF), play/pause state (e.g., Playing, Paused, Ended), sound volume level (e.g., 55%), and play time (e.g., 12.5 seconds). Other playback state descriptors 822 for other attributes of the playback state of the UI component associated with that descriptor array 820 can also be included in descriptor array 820. In operation, playback of a first instance of a preview video is halted or paused in a first UI component while the first instance of the preview video is in a first playback state. Simultaneously, or at approximately the same time, playback of a second instance of the preview video in a second UI component can begin, where the second instance of the preview video begins in the first playback state. Consequently, playback of the preview video smoothly transitions from playback by the first UI component to playback by the second UI component. In FIG. 8, for purposes of description, values 823 of playback state descriptors 822 are generally depicted as non-numeric values. In practice, values 823 stored for some or all playback state descriptors can be numeric values stored in a suitable data structure, where a specific meaning is attributed to different numeric values. In some embodiments, playback state database 720 also includes a general descriptor array 850. In such embodiments, general descriptor array 850 includes one or more general playback state descriptors 852 and a plurality of values 853 that are each associated with a respective general playback state descriptor 852. In such embodiments, each general playback state descriptor 852 represents a general attribute of the playback state of any of the UI components currently included in the GUI. For example, in the embodiment illustrated in FIG. 8, general descriptor array 850 includes a video mute state and a video sound level. In embodiments in which playback state database 720 includes general descriptor array 850, the current values of general playback state descriptors 852 are employed by whatever UI component is currently performing playback of a preview video. Returning to FIG.
7, preview playback logic 730 is configured to implement a seamless transition between playback of a preview video in one media player (or other UI component) implemented in a GUI and playback in another media player (or other UI component) implemented in the same GUI. Thus, in some embodiments, preview playback logic 730 receives messages, for example via UI message log 740, from UI components that receive user inputs. Preview playback logic 730 then updates playback state database 720 accordingly. For example, in an embodiment, when a cursor is hovered over a particular display canvas included in a GUI, that particular display canvas sends a message to UI message log 740. In the embodiment, the message sent to UI message log 740 is an input indicating that a media player associated with or included in the particular display canvas is requested to play. In response to such an input, preview playback logic 730 sets the value 823 of an appropriate playback state descriptor 822, such as the playback state descriptor 822 that represents the play/pause state of that particular display canvas. The playback state descriptor 822 that is set is included in the descriptor array 820 for that particular display canvas. In another example, in another embodiment, when a cursor is moved off of a particular display canvas included in a GUI, that particular display canvas sends a message to UI message log 740. In the embodiment, the message sent to UI message log 740 is an input indicating that a media player associated with or included in the display canvas is requested to stop playback of the preview video currently being played in that particular display canvas. In response, preview playback logic 730 sets all of the current values 823 included in the descriptor array 820 for that particular display canvas to reflect the current playback state of that particular display canvas. In this way, the playback state of that particular display canvas persists when the preview video that was being played in that particular display canvas begins to be played back in a later instance of a display canvas. UI message log 740 receives and stores messages from interface engine 434. In some embodiments, such messages originate from interface engine 434. Alternatively or additionally, in some embodiments, such messages originate from UI components included in a GUI generated by interface engine 434. Implementation of Preview Playback Application FIG. 9 sets forth a flowchart of method steps for updating a playback state database, according to various embodiments. Although the method steps are described with respect to the systems of FIGS. 1-8, persons skilled in the art will understand that any system configured to perform the method steps, in any order, falls within the scope of the present disclosure. As shown, a method 900 begins at step 901, in which preview playback application 700 receives a message from interface engine 434. In some embodiments, preview playback application 700 receives the message by determining that UI message log 740 has been updated by interface engine 434. In step 902, based on the message received in step 901, preview playback application 700 determines one or more playback state descriptors to be set. For example, when preview playback application 700 receives a message indicating that a value for a particular playback state descriptor 822 for a particular display canvas should be changed (e.g.
mute state changed to ON), preview playback application 700 determines that that particular playback state descriptor 822, in the descriptor array 820 associated with that particular display canvas, is to be set. In another example, when preview playback application 700 receives a message indicating that a media player associated with or included in a display canvas is requested to stop playback of the preview video currently being played in that particular display canvas, preview playback application 700 determines that all playback state descriptors 822 in the descriptor array 820 associated with the display canvas that has been requested to stop playback are to be set. In some embodiments, in step 902 preview playback application 700 determines that one or more playback state descriptors 822 are to be changed in multiple descriptor arrays 820. For example, in one such embodiment, in step 901 preview playback application 700 receives a message indicating that a value 823 should be changed for a particular playback state descriptor 822 that applies to any instance of the video content currently being played (e.g., current play time). In the embodiment, preview playback application 700 determines that in each descriptor array 820 associated with the video content currently being played, the value 823 for the particular playback state descriptor 822 is to be set. Thus, when activated by a user, any display canvas that is associated with the video content currently being played can start at the current play time stored for that particular video content. In some embodiments, in step 902 preview playback application 700 determines which descriptor arrays 820 included in playback state database 720 are associated with the video content currently being played. In such embodiments, preview playback application 700 makes the determination based on the display canvas ID 821 of each descriptor array 820. In alternative embodiments, in step 902 preview playback application 700 makes the determination via a lookup table or other data structure. In step 903, preview playback application 700 sets the playback state descriptors 822 determined in step 902. For example, when preview playback application 700 has determined in step 902 that a particular playback state descriptor 822 in the descriptor array 820 associated with a particular display canvas is to be set, preview playback application 700 sets the value 823 of that particular playback state descriptor 822 to match an appropriate value (e.g., mute state set to ON). Similarly, when preview playback application 700 determines that all playback state descriptors 822 in a particular descriptor array 820 are to be set, preview playback application 700 sets the values 823 of all playback state descriptors 822 in that particular descriptor array 820 to match values indicating the current playback state of the display canvas associated with that particular descriptor array 820. For example, all playback state descriptors 822 in a particular descriptor array 820 may be set when the display canvas associated therewith has been requested to stop playback. Thus, the attributes of the display canvas that has been requested to stop playback are captured in an associated descriptor array 820 for subsequent application to another display canvas that is instantiated to display the same video content. FIG.
10 sets forth a flowchart of method steps for transitioning playback of video content from a first UI component to a second UI component, according to various embodiments. Although the method steps are described with respect to the systems of FIGS. 1-9, persons skilled in the art will understand that any system configured to perform the method steps, in any order, falls within the scope of the present disclosure. As shown, a method 1000 begins at step 1001, in which interface engine 434 generates or otherwise instantiates GUI 500. For example, in some embodiments, interface engine 434 opens GUI 500 in response to playback application 436 being executed on endpoint device 115. In step 1002, interface engine 434 determines whether an input is received by a UI component of GUI 500 indicating that playback of a preview video for a particular video content item is requested. In some embodiments, such an input is received via an input operation, such as hovering a cursor over a particular display canvas of GUI 500, or selecting a more details icon 601 of GUI 500. If no, method 1000 returns to step 1002; if yes, method 1000 proceeds to step 1003. In step 1003, interface engine 434 determines whether playback of any other preview video has been performed by GUI 500. If no, method 1000 proceeds to step 1021; if yes, method 1000 proceeds to step 1004. In step 1004, interface engine 434 determines whether playback of the preview video for the particular video content item indicated in step 1002 has been performed by a UI component of GUI 500. If yes, method 1000 proceeds to step 1031; if no, method 1000 proceeds to step 1005. In step 1005, interface engine 434 performs a lookup of values 853 for the general playback state descriptors 852 for GUI 500, if applicable. For example, in some embodiments, interface engine 434 performs a lookup of audio state descriptors (e.g., preview video mute state, preview video sound volume level, etc.). In step 1006, if applicable, interface engine 434 generates a media player or other UI component for performing playback of the preview video for the particular video content item requested in step 1002. In some instances, the UI component for performing playback of the preview video already exists. For example, when a user hovers cursor 501 over a title card canvas 612 that has previously displayed at least a portion of the preview video, interface engine 434 does not create another media player or other UI component. In step 1007, interface engine 434 modifies one or more characteristics of playback by the UI component based on values 853 for the general playback state descriptors 852 looked up in step 1005 and/or values 823 for the playback state descriptors 822 looked up in step 1031. For example, the mute state, sound volume level, and/or play time of the UI component are set to match values 823 and 853. As a result, the UI component generated or selected in step 1006 is configured to perform video playback in a specified playback state, such as the playback state of a display canvas that is most recently associated with the video content to be played by the UI component generated or selected in step 1006. In step 1008, when applicable, interface engine 434 causes a UI component that is currently performing playback of other video content to pause or halt playback of the other video content. For example, prior to step 1008, such a UI component may be playing another preview video for a different video content item.
Alternatively, in some embodiments, the UI component currently performing playback of the other video content sends a message to UI message log 740 indicating that the UI component is requested to be stopped. In such embodiments, preview playback logic 730 updates playback state database 720 as described above in conjunction with method 900 of FIG. 9. Further, in some embodiments, preview playback logic 730 causes downloading of video data associated with the other video content to stop or notifies playback application 436 to cause downloading of video data associated with the other video content to stop. In step 1009, interface engine 434 begins playback of the preview video requested in step 1002. When applicable, one or more characteristics of playback by the UI component beginning playback of the preview video have been modified based on values 853 for general playback state descriptors 852 and/or values 823 for the playback state descriptors 822. Consequently, when the preview video requested in step 1002 has been performed previously by a UI component of GUI 500, playback of the preview video begins at the point the previous playback of the preview video ended. Further, in some embodiments, the audio state of the UI component beginning playback of the preview video matches that of the UI component that was previously performing playback in GUI 500. Once playback of the preview video begins, method 1000 returns to step 1002. Step 1021 is performed in response to interface engine 434 determining that playback of no other preview video has been performed by GUI 500. For example, step 1021 is commonly performed shortly after GUI 500 is first opened. In step 1021, interface engine 434 creates a media player or other UI component for performing playback of the preview video for the particular video content item requested in step 1002. In step 1022, interface engine 434 modifies one or more characteristics of playback by the UI component created in step 1021 based on default values for the general playback state descriptors 852 and/or the playback state descriptors 822. Method 1000 then proceeds to step 1009, in which playback of the preview video requested in step 1002 begins. Step 1031 is performed in response to interface engine 434 determining that playback of the preview video indicated in step 1002 has been performed previously by a UI component of GUI 500. In step 1031, interface engine 434 performs a lookup of one or more values 823 for the playback state descriptors 822 of the UI component that has most recently performed playback of the preview video indicated in step 1002. For example, in some embodiments, interface engine 434 performs a lookup of a value 823 for the play time of the UI component that has most recently performed playback of the preview video indicated in step 1002. In sum, various embodiments set forth systems and techniques for transitioning playback of video content from a first UI component to a second UI component in a video streaming user interface. In the embodiments, current values of playback state descriptors for multiple UI components are stored in a client-side database. When transitioning playback of video content from the first UI component to the second UI component, a user interface engine determines a current value of a playback state descriptor of the first UI component from the database.
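Condensed, methods 900 and 1000 amount to: persist a canvas's playback state when it stops, then seed the next canvas from the stored state or from general or default values. The TypeScript sketch below reuses the hypothetical PlaybackStateDatabase and PlayerLike shapes from the earlier sketches; its branching loosely mirrors steps 1004 through 1031 and is not the patent's implementation.

function beginPreviewPlayback(
  db: PlaybackStateDatabase,
  player: PlayerLike,
  canvasId: string,
  playedBefore: boolean,  // step 1004: has this preview already played in the GUI?
  currentlyPlaying?: { player: PlayerLike; canvasId: string },
): void {
  if (playedBefore) {
    // Step 1031: look up the state saved when playback last stopped.
    const saved = db.byCanvas.get(canvasId);
    if (saved) {
      player.currentTime = saved.descriptors.playTimeSeconds;
    }
  }
  // Steps 1005/1007: the audio state follows the general descriptors.
  player.muted = db.general.videoMuteState === "ON";
  player.volume = db.general.videoSoundLevel;

  // Step 1008: pause whatever is currently playing and persist its state
  // (in the patent, method 900 is driven by a message to UI message log 740).
  if (currentlyPlaying) {
    currentlyPlaying.player.pause();
    db.byCanvas.set(currentlyPlaying.canvasId, {
      displayCanvasId: currentlyPlaying.canvasId,
      descriptors: {
        muteState: currentlyPlaying.player.muted ? "ON" : "OFF",
        playPauseState: "Paused",
        soundVolumeLevel: currentlyPlaying.player.volume,
        playTimeSeconds: currentlyPlaying.player.currentTime,
      },
    });
  }

  // Step 1009: begin playback in the requested canvas.
  void player.play();
}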
The user interface engine then sets the corresponding playback state descriptor of the second UI component to match the current value of the playback state descriptor of the first UI component and begins playback of the video content with the second UI component using the current value of the playback state descriptor of the first UI component. At least one technological improvement of the disclosed embodiments is that coordination of video playback is enabled between multiple UI components that can each perform video playback simultaneously. Consequently, downloading multiple video data streams when transitioning video playback from one UI component to another UI component can be avoided. In addition, multiple UI components are prevented from simultaneously performing playback of video content. 1. In some embodiments, a method comprises, based on an input to transition playback of a video content item from a first media player that is instantiated in a user interface to a second media player that is instantiated in the user interface, determining a current value of a first state descriptor associated with the first media player, setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor, and, after setting the value of the second state descriptor, causing the second media player to begin playback of the video content item, where the second media player begins playing the video content item based on the value of the second state descriptor. 2. The method of clause 1, further comprising, after setting the value of the second state descriptor to match the current value of the first state descriptor, causing the first media player to stop playing the video content item. 3. The method of clause 1 or 2, wherein the first playback state descriptor is associated with a first attribute of a current playback state of the first media player and the second playback state descriptor is associated with a second attribute of a current playback state of the second media player. 4. The method of any of clauses 1-3, wherein the first attribute and the second attribute correspond to a same characteristic of playback of a video content item. 5. The method of any of clauses 1-4, wherein the first attribute of the first media player comprises one of a mute setting, a volume level, and a current play time associated with playback of the video content item on the first media player or the second attribute of the second media player comprises one of a mute setting, a volume level, and a current play time associated with playback of the video content item on the second media player. 6. The method of any of clauses 1-5, wherein the first attribute is included in a first array of attributes that are each associated with the current playback state of the first media player and the second attribute is included in a second array of attributes that are each associated with the current playback state of the second media player. 7. The method of any of clauses 1-6, wherein the input to transition playback of the video content item comprises a user input. 8. The method of any of clauses 1-7, wherein the user input is received via the user interface. 9. The method of any of clauses 1-8, wherein the input to transition playback of the video content item comprises a predetermined value for a third state descriptor associated with the second media player. 10. 
The method of any of clauses 1-9, further comprising, prior to determining the current value of the first state descriptor associated with the first media player, determining that a current value for the third state descriptor equals the predetermined value. 11. The method of any of clauses 1-10, wherein the first playback state descriptor is associated with an attribute of a current playback state of the video content item and the second playback state descriptor is associated with the attribute of the current playback state of the video content item. 12. The method of any of clauses 1-11, further comprising instantiating the second media player concurrently with the first media player. 13. The method of any of clauses 1-12, further comprising causing at least a portion of the first media player and at least a portion of the second media player to be displayed concurrently by a same display device. 14. The method of any of clauses 1-13, further comprising generating the second media player based on the input to transition playback of the video content item from the first media player to the second media player. 15. In some embodiments, a non-transitory computer readable medium stores instructions that, when executed by a processor, cause the processor to perform the steps of: based on an input to transition playback of a video content item from a first media player that is instantiated in a user interface to a second media player that is instantiated in the user interface, determining a current value of a first state descriptor associated with the first media player, setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor, and, after setting the value of the second state descriptor, causing the second media player to begin playback of the video content item, wherein the second media player begins playing the video content item based on the value of the second state descriptor. 16. The non-transitory computer readable medium of clause 15, further comprising, after setting the value of the second state descriptor to match the current value of the first state descriptor, causing the first media player to stop playing the video content item. 17. The non-transitory computer readable medium of clause 15 or 16, wherein the first playback state descriptor is associated with a first attribute of a current playback state of the first media player and the second playback state descriptor is associated with a second attribute of a current playback state of the second media player. 18. The non-transitory computer readable medium of any of clauses 15-17, wherein the first attribute and the second attribute correspond to a same characteristic of playback of a video content item. 19.
In some embodiments, a system comprises a memory that stores instructions, and a processor that is coupled to the memory and is configured to perform the steps of, upon executing the instructions: based on an input to transition playback of a video content item from a first media player that is instantiated in a user interface to a second media player that is instantiated in the user interface, determining a current value of a first state descriptor associated with the first media player, setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor, and after setting the value of the second state descriptor, causing the second media player to begin playback of the video content item, wherein the second media player begins playing the video content item based on the value of the second state descriptor. 20. The system of clause 19, wherein the first playback state descriptor is associated with a first attribute of a current playback state of the first media player and the second playback state descriptor is associated with a second attribute of a current playback state of the second media player. Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present disclosure and protection. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” In addition, any hardware and/or software technique, process, function, component, engine, module, or system described in the present disclosure may be implemented as a circuit or set of circuits. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. 
In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine. The instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. 16669150 netflix, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:32AM Apr 27th, 2022 08:32AM Netflix Consumer Services General Retailers
nasdaq:nflx Netflix Apr 5th, 2022 12:00AM Dec 29th, 2016 12:00AM https://www.uspto.gov?id=US11297138-20220405 Techniques for dynamically benchmarking cloud data store systems In various embodiments, a benchmarking engine automatically tests a data store to assess functionality and/or performance of the data store. The benchmarking engine generates data store operations based on dynamically adjustable configuration data. As the benchmarking engine generates the data store operations, the data store operations execute on the data store. In a complementary fashion, as the data store operations execute on the data store, the benchmarking engine generates statistics based on the results of the executed data store operations. Advantageously, because the benchmarking engine adjusts the number and/or type of data store operations that the benchmarking engine generates based on any changes to the configuration data, the workload that executes on the data store may be fine-tuned as the benchmarking engine executes. 11297138 1. A computer-implemented method, comprising: processing one or more workload generation operations to generate a first plurality of data store operations based on first configuration data, wherein the first configuration data comprises a workload traffic pattern for the first plurality of data store operations, and wherein the workload traffic pattern specifies at least one of a temporal proximity of the first plurality of data store operations or a spatial proximity of data within a data store; executing at least one of the data store operations included in the first plurality of data store operations on the data store to obtain first statistics that are associated with a performance of the data store; and while continuing to process the one or more workload generation operations: receiving second configuration data; modifying the one or more workload generation operations to generate a second plurality of data store operations based on the second configuration data; executing at least one of the data store operations included in the second plurality of data store operations on the data store to obtain second statistics that are associated with the performance of the data store; and displaying or transmitting for further processing at least one of the first statistics and the second statistics. 2. The computer-implemented method of claim 1, further comprising receiving an end command and, in response, ceasing to process the one or more workload generation operations. 3. The computer-implemented method of claim 1, further comprising, prior to processing the one or more workload generation operations: receiving a first command that specifies the data store; receiving a second command that specifies a driver; and establishing a connection to the data store through the driver. 4. The computer-implemented method of claim 3, wherein the driver comprises a driver application that is written in a programming language or a dynamic plugin that is associated with a script. 5. The computer-implemented method of claim 1, wherein executing the at least one of the data store operations included in the first plurality of data store operations comprises: assigning the at least one of the data store operations to at least one thread included in a thread pool to generate at least one configured thread; and causing the data store to execute the at least one configured thread. 6. 
The computer-implemented method of claim 5, further comprising, prior to modifying the one or more workload generation operations, modifying a number of threads included in the thread pool based on the second configuration data. 7. The computer-implemented method of claim 1, further comprising, while continuing to process the one or more workload generation operations: receiving at least one subsequent configuration data; and for each subsequent configuration data included in the at least one subsequent configuration data: modifying the one or more workload generation operations to generate a subsequent plurality of data store operations based on the subsequent configuration data, executing at least one of the data store operations included in the subsequent plurality of data store operations on the data store to obtain subsequent statistics that are associated with the performance of the data store, and displaying or transmitting for further processing the subsequent statistics. 8. The computer-implemented method of claim 1, wherein the first configuration data further includes at least one of a rate of read operations, a rate of write operations, a number of threads executing the read operations, or a number of threads executing the write operations. 9. The computer-implemented method of claim 8, wherein the second configuration data includes at least one of an updated rate of read operations, an updated rate of write operations, and an updated number of threads. 10. A non-transitory computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform the steps of: establishing a connection to a data store through a driver; generating a first workload based on first configuration data, wherein the first configuration data comprises a workload traffic pattern for a first plurality of data store operations, and wherein the workload traffic pattern specifies at least one of a temporal proximity of the first plurality of data store operations or a spatial proximity of data within the data store; causing the first workload to execute on the data store to obtain first statistics that are associated with a performance of the data store and the first configuration data; and while remaining connected to the data store: generating a second workload based on second configuration data, causing the second workload to execute on the data store to obtain second statistics that are associated with the performance of the data store and the second configuration data; and displaying or transmitting for further processing at least one of the first statistics and the second statistics. 11. The non-transitory computer-readable storage medium of claim 10, wherein generating the first workload comprises processing one or more workload generation operations to generate the first plurality of data store operations based on the first configuration data; and generating the second workload comprises modifying the one or more workload generation operations to generate a second plurality of data store operations based on the second configuration data. 12. The non-transitory computer-readable storage medium of claim 11, further comprising receiving an end command and, in response, ceasing to process the one or more workload generation operations. 13. 
The non-transitory computer-readable storage medium of claim 10, wherein the first workload comprises the first plurality of data store operations, and causing the first workload to execute on the data store comprises: assigning at least one of the data store operations included in the first plurality of data store operations to at least one thread included in a thread pool to generate at least one configured thread; and causing the data store to execute the at least one configured thread. 14. The non-transitory computer-readable storage medium of claim 13, further comprising, prior to generating the second workload, modifying a number of threads included in the thread pool based on the second configuration data. 15. The non-transitory computer-readable storage medium of claim 10, wherein the first configuration data includes at least one of a rate of read operations or a rate of write operations. 16. The non-transitory computer-readable storage medium of claim 15, wherein the workload traffic pattern comprises a sliding window associated with the first plurality of data store operations. 17. The non-transitory computer-readable storage medium of claim 10, wherein transmitting at least one of the first statistics and the second statistics comprises transmitting at least one of the first statistics and the second statistics to an analysis application for further processing. 18. A system comprising: a memory storing instructions associated with a benchmarking engine; and a processor that is coupled to the memory and, when executing the instructions, is configured to: process one or more workload generation operations to generate a first plurality of data store operations based on first configuration data, wherein the first configuration data comprises a workload traffic pattern for the first plurality of data store operations, and wherein the workload traffic pattern specifies at least one of a temporal proximity of the first plurality of data store operations or a spatial proximity of data within a data store; assign at least one of the data store operations included in the first plurality of data store operations to at least a first thread included in a thread pool to generate at least a first configured thread; cause the data store to execute the at least a first configured thread to obtain first statistics that are associated with a performance of the data store; and while continuing to process the one or more workload generation operations: receive second configuration data, modify at least one of the workload generation operations and the thread pool based on the second configuration data, assign at least one of the data store operations included in the second plurality of data store operations to at least a second thread included in the thread pool to generate at least a second configured thread; and cause the data store to execute the at least a second configured thread to obtain second statistics that are associated with the performance of the data store; and display or transmit for further processing at least one of the first statistics and the second statistics. 19. The system of claim 18, wherein the processor is further configured to receive an end command and, in response, cease to process the workload generation operations. 20. The system of claim 18, wherein the first configuration data further includes at least one of a rate of read operations, a rate of write operations, and a number of threads. 21.
The system of claim 18, wherein the processor is further configured to, prior to processing the one or more workload generation operations, generate one or more write operations that store initial data in the data store. 21 CROSS-REFERENCE TO RELATED APPLICATIONS This application claims the priority benefit of the United States Provisional Patent Application having Ser. No. 62/382,209 and filed on Aug. 31, 2016. The subject matter of this related application is hereby incorporated herein by reference. BACKGROUND OF THE INVENTION Field of the Invention Embodiments of the present invention relate generally to computer science and, more specifically, to techniques for dynamically benchmarking cloud data store systems. Description of the Related Art Many software applications rely on external services known as “cloud data stores,” which are systems that execute on cloud computing platforms and are designed to store and manage collections of client data. Examples of cloud data stores include Netflix Dynomite, Apache Cassandra, and Amazon Elastic File System, to name a few. Oftentimes, the overall functionality and performance of applications that rely on cloud data stores correlate to the functionality and performance of the cloud data stores themselves. Thus, to evaluate the applications, the cloud data stores also need to be evaluated. However, executing a complex application or system of applications across a wide range of operating conditions to evaluate the functionality and performance of a cloud data store or to compare multiple cloud data stores for a variety of use cases is prohibitively time consuming. To reduce the time required to evaluate the functionality and performance of a cloud data store, an engineer may implement a benchmarking engine instead of an application or system of applications to test the cloud data store. In operation, the benchmarking engine typically generates different workloads that can be executed on the cloud data store. Workload operations that execute on the cloud data store are referred to herein as “data store operations.” During testing, as various data store operations execute on the cloud data store, the benchmarking engine monitors the performance of the cloud data store. In general, the workloads are designed to emulate loads on the cloud data store for one or more use cases. For example, to emulate the load on a cloud data store while a video streaming service responds to clients during evening hours, the benchmarking engine could generate and execute a workload that is characterized by a high number of read operations per second. In another example, to emulate the load on a cloud data store during a denial of service attack, the benchmarking engine could generate and execute a workload that includes a dramatic spike in a number of read operations per second. One limitation of a typical benchmarking engine is that the workloads cannot be adjusted while the benchmarking engine executes. Thus, the workloads cannot be dynamically updated based on the performance of the cloud data store during any given testing scenario. For example, an engineer could determine that a current workload exceeds the throughput of the cloud data store and, therefore, want to reduce the number of data store operations per second. 
However, with a conventional benchmarking engine, the engineer would not be able to adjust the number of data store operations per second without terminating the benchmarking engine, configuring the benchmarking engine to generate a new workload, and re-executing the benchmarking engine. Re-configuring and re-executing the benchmarking engine to modify the workload can dramatically increase the time required to evaluate the cloud data store. Another limitation of a typical benchmarking engine is that the benchmarking engine usually executes for a predetermined amount of time and then terminates. Thus, evaluating the performance of a cloud data store with respect to an application that rarely terminates is quite difficult. For example, conventional benchmarking engine testing likely would not be able to detect or indicate the presence of a memory leak that incrementally reduces the amount of available application memory over long periods of time. Such memory leaks could be quite problematic because they could cause a long-running application to terminate unexpectedly. Among other things, the architecture of a conventional benchmarking engine typically constrains the length of time that the benchmarking engine can successfully execute without overflow errors, etc. As the foregoing illustrates, what is needed in the art are more effective techniques for benchmarking cloud data stores. SUMMARY OF THE INVENTION One embodiment of the present invention sets forth a computer-implemented method for testing a data store. The method includes processing one or more workload generation operations to generate first data store operations based on first configuration data; executing at least one of the data store operations included in the first data store operations on a data store to obtain first statistics that are associated with a performance of the data store; while continuing to process the one or more workload generation operations, receiving second configuration data, modifying the one or more workload generation operations to generate second data store operations based on the second configuration data, executing at least one of the data store operations included in the second data store operations on the data store to obtain second statistics that are associated with the performance of the data store; and displaying or transmitting for further processing at least one of the first statistics and the second statistics. One advantage of the disclosed techniques is that, unlike conventional testing techniques, the workload generation operations may be modified without terminating the workload generation operations. Consequently, a user may adjust the workload based on statistics that are generated as the data store executes the workload. Fine-tuning the workload as the data store executes the workload can dramatically reduce the time required to evaluate the performance of the data store. BRIEF DESCRIPTION OF THE DRAWINGS So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments. FIG.
1 is a conceptual illustration of a benchmarking system configured to implement one or more aspects of the present invention; FIG. 2 is a more detailed illustration of the operations performed by the benchmarking subsystem of FIG. 1 when testing a cloud data store, according to various embodiments of the present invention; FIG. 3 illustrates an example configuration of the benchmarking interface of FIG. 2, according to various embodiments of the present invention; and FIG. 4 is a flow diagram of method steps for testing a cloud data store, according to various embodiments of the present invention. DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without one or more of these specific details. System Overview FIG. 1 is a conceptual illustration of a benchmarking system 100 configured to implement one or more aspects of the present invention. As shown, the benchmarking system 100 includes, without limitation, a cloud data store 110, a benchmarking cluster 120, and an analysis cluster 190. In alternate embodiments, the benchmarking system 100 may include any number of cloud data stores 110, any number of benchmarking clusters 120, and any number of analysis clusters 190. For explanatory purposes, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed. The cloud data store 110 executes on a cloud computing platform and is designed to store and manage collections of client data. The cloud data store 110 is also commonly referred to as a “data store cluster.” Examples of cloud data stores 110 include Netflix Dynomite, Apache Cassandra, and Amazon Elastic File System, to name a few. As shown, the cloud data store 110 includes, without limitation, any number of data store nodes 112. The data store nodes 112 within the cloud data store 110 are typically interconnected computers or virtual machines, where each computer or virtual machine supplies data store services via a client-server architecture. In alternate embodiments, each of the data store nodes 112 may be any instruction execution system, apparatus, or device capable of executing software applications. The data store nodes 112 may be organized in any technically feasible fashion and across any number of geographical locations. Oftentimes, the overall functionality and performance of applications that rely on cloud data stores correlate to the functionality and performance of the cloud data stores themselves. Thus, to evaluate applications that rely on the cloud data stores, the cloud data stores also need to be evaluated. However, executing a complex application or system of applications across a wide range of operating conditions to evaluate the functionality and performance of a cloud data store or to compare multiple cloud data stores for a variety of use cases is prohibitively time consuming. To reduce the time required to evaluate the functionality and performance of a cloud data store, an engineer may implement a conventional benchmarking engine instead of an application or system of applications to test the cloud data store. In operation, the conventional benchmarking engine typically generates different workloads that can be executed on the cloud data store. 
Workload operations that execute on the cloud data store are referred to herein as “data store operations.” During testing, as various data store operations execute on the cloud data store, the conventional benchmarking engine monitors the performance of the cloud data store. In general, the workloads are designed to emulate loads on the cloud data store for one or more use cases. One limitation of a typical conventional benchmarking engine is that the workloads cannot be adjusted while the conventional benchmarking engine executes. Thus, the workloads cannot be dynamically updated based on the performance of the cloud data store during any given testing scenario. For example, an engineer could want to optimize a number of data store operations per second based on the observed throughput of the cloud data store. However, with a conventional benchmarking engine, the engineer would not be able to adjust the number of data store operations per second without terminating the conventional benchmarking engine, configuring the conventional benchmarking engine to generate a new workload, and re-executing the conventional benchmarking engine. Re-configuring and re-executing the conventional benchmarking engine to modify the workload can dramatically increase the time required to evaluate the cloud data store. Another limitation of a typical conventional benchmarking engine is that the conventional benchmarking engine usually executes for a predetermined amount of time and then terminates. Thus, evaluating the performance of a cloud data store with respect to an application that rarely terminates is quite difficult. For example, conventional benchmarking engine testing likely would not be able to detect or indicate the presence of a memory leak that incrementally reduces the amount of available application memory over long periods of time. Such memory leaks could be quite problematic because they could cause a long-running application to terminate unexpectedly. Efficiently and Flexibly Testing a Cloud Data Store To enable engineers to more efficiently and flexibly test the cloud data store 110, the benchmarking system 100 includes the benchmarking cluster 120. The benchmarking cluster 120 supports a variety of integration standards, enables dynamic re-configuration of workloads, and executes workloads for indeterminate amounts of time. As shown, the benchmarking cluster 120 includes, without limitation, any number of benchmarking nodes 130. The benchmarking nodes 130 within the benchmarking cluster 120 are typically computers or virtual machines, where each computer or virtual machine independently executes an instance of a benchmarking subsystem 140. The benchmarking nodes 130 may be organized in any technically feasible fashion and across any number of geographical locations. As shown for the benchmarking node 130(1), each of the benchmarking nodes 130 includes, without limitation, a processor 132 and a memory 136. In alternate embodiments, each of the benchmarking nodes 130 may be configured with any number (including zero) of processors 132 and memories 136, and the configuration of the benchmarking nodes 130 may vary. In other embodiments, each of the benchmarking nodes 130 may be any instruction execution system, apparatus, or device capable of executing software applications. In operation, the processor 132(1) is the master processor of the benchmarking node 130(1), controlling and coordinating operations of other components included in the benchmarking node 130(1). 
The memory 136(1) stores content, such as software applications and data, for use by the processor 132(1) of the benchmarking node 130(1). As shown, the memory 136(1) includes, without limitation, the instance of the benchmarking subsystem 140 that executes on the processor 132(1). In a complementary fashion, for each of the other benchmarking nodes 130(2)-130(N), the memory 136(x) includes a different instance of the benchmarking subsystem 140 that executes on the processor 132(x). As shown, the benchmarking subsystem 140 includes, without limitation, a benchmarking interface 150, a driver interface 170 that interfaces with a data store driver 180, and a benchmarking engine 160. The benchmarking interface 150 may be any type of interface that enables configuration of the benchmarking subsystem 140 via any number and type of configuration data. In some embodiments, the benchmarking interface 150 comprises a graphical user interface (GUI). In other embodiments, the benchmarking interface 150 comprises an application programming interface (API). For instance, in some embodiments, the benchmarking interface 150 may comprise a Representational State Transfer (REST) API. The REST API may support any number and type of data interchange formats, such as JavaScript Object Notation (JSON), HyperText Markup Language (HTML), and Extensible Markup Language (XML), to name a few. The configuration data includes, without limitation, a data store selection, a driver configuration, workload properties, a workload type, and benchmarking commands. The data store selection specifies the cloud data store 110 that is to be tested. The driver configuration selects the data store driver 180 through which the benchmarking subsystem 140 interfaces with the cloud data store 110. As depicted with a dotted box, the data store driver 180 implements the driver interface 170 to enable the benchmarking subsystem 140 to interact with the cloud data store 110. More specifically, in some embodiments, the data store driver 180 implements the driver interface 170 to enable the benchmarking subsystem 140 to: initialize the cloud data store 110, shut down the cloud data store 110, perform a single read operation on the cloud data store 110, perform a single write operation on the cloud data store 110, get connection information from the cloud data store 110, and run a workflow for a functional test on the cloud data store 110. The workload properties, the workload type, and the benchmarking commands configure the benchmarking engine 160. When configured to test the cloud data store 110, the benchmarking engine 160 generates a workload based on the workload properties and the workload type and causes the workload to execute on the cloud data store 110. For each of the benchmarking nodes 130, the workload properties and the workload type may be specified separately via the benchmarking interface 150. Accordingly, the various instances of the benchmarking engine 160 included in the different instances of the benchmarking subsystems 140 may be configured to generate different workloads. As a general matter, the workload properties may include any number and type of data, and the benchmarking engine 160 may generate the workload based on the workload properties in any technically feasible fashion. 
For instance, in some embodiments, the workload properties include, without limitation: numKeys that specifies the sample space for randomly generated keys, numValues that specifies the sample space for generated values, dataSize that specifies the size of each value, numWriters that specifies the number of threads per benchmarking node 130 that execute write operations on the cloud data store 110, numReaders that specifies the number of threads per benchmarking node 130 that execute read operations on the cloud data store 110, writeEnabled that enables or disables write operations on the cloud data store 110, readEnabled that enables or disables read operations on the cloud data store 110, writeRateLimit that specifies the number of write operations per second on the cloud data store 110, readRateLimit that specifies the number of read operations per second on the cloud data store 110, and userVariableDataSize that enables or disables the ability of the payload to be randomly generated. In general, each workload type specifies a different pluggable workload traffic pattern. For instance, in some embodiments, the workload type specifies a random traffic pattern, a sliding window traffic pattern, or a sliding window flip traffic pattern. The sliding window traffic pattern specifies a workload that concurrently exercises data that is repetitive inside a window, thereby providing a combination of temporally local data and spatially local data. For example, the window could be designed to exercise both a caching layer provided by the cloud data store 110 and the Input/Output Operations Per Second (IOPS) of a disk managed by the cloud data store 110. The workload generation commands may configure any number of the instances of the benchmarking engine 160 to start, pause, and/or finish generating and executing the workload in any technically feasible fashion. For example, if a “run writes” command and a single benchmarking node 130 are selected, then the instance of the benchmarking engine 160 associated with the selected benchmarking node 130 executes write operations on the cloud data store 110. By contrast, if “run writes” and “run reads” commands and multiple benchmarking nodes 130 are selected, then multiple instances of the benchmarking engines 160 independently and substantially in parallel execute write and read operations on the cloud data store 110. As the cloud data store 110 executes data store operations, the cloud data store 110 transmits results to the benchmarking engine 160. For example, the result of a read operation could be client data and the result of a write operation could be an acknowledgment. The benchmarking engine 160 generates statistics (not shown in FIG. 1) based on the results and transmits the statistics to the benchmarking interface 150 for display purposes. The statistics may include any number and type of data that provide insight into the performance of the cloud data store 110, and the benchmarking engine 160 may generate the statistics in any technically feasible fashion. The benchmarking engine 160 also transmits the statistics to the analysis cluster 190. In general, the benchmarking subsystem 140 provides plugin functionality that enables the benchmarking engine 160 to interface with any number of compatible analysis clusters 190. The analysis cluster 190 may be any number and type of software applications (e.g., external time series database, monitoring system, etc.) that provides analysis and/or monitoring services. 
For example, the analysis cluster 190 could include, without limitation, a Netflix Servo interface that exposes and publishes metrics in Java, and a Netflix Atlas backend that manages dimensional time series data. After the analysis cluster 190 receives the statistics, the analysis cluster 190 generates any number of metrics based on the statistics. In some alternate embodiments, the benchmarking engine 160 may transmit the statistics to the analysis cluster 190, but may not transmit the statistics to the benchmarking interface 150. In other alternate embodiments, the benchmarking engine 160 may transmit the statistics to the benchmarking interface 150, but may not transmit the statistics to the analysis cluster 190. In such embodiments, the benchmarking system 100 may not include the analysis cluster 190. In various embodiments, the cloud data store 110 may transmit data store statistics to the analysis cluster 190 instead of or in addition to the statistics that the benchmarking engine 160 transmits to the analysis cluster 190. Notably, while the benchmarking engine 160 generates and executes the workload based on “current” workload properties, the benchmarking engine 160 may be dynamically reconfigured via the benchmarking interface 150. More specifically, the benchmarking engine 160 may receive “new” workload properties via the benchmarking interface 150. In response, the benchmarking engine 160 generates and executes the workload based on the new workload properties instead of the current workload properties without ceasing to generate and execute the workload. Accordingly, the new workload properties become the current workload properties. In some embodiments, all of the workload properties may be dynamically configured. In other embodiments, one or more of the workload properties may be dynamically configured, while the remaining workload properties are statically configured prior to connecting to the cloud data store 110 and/or generating and executing the workload. For example, in some embodiments, the writeRateLimit and the readRateLimit may be dynamically configured, while the remaining workload properties are statically configured prior to configuring the driver connection. In alternate embodiments, the workload type may be dynamically configured. In operation, the benchmarking engine 160 continues to generate and execute the workload based on the current workload properties until the benchmarking engine 160 receives a “pause” or an “end” workload generation command via the benchmarking interface 150. Unless an end workload command is received, the benchmarking node 130 continues to generate and execute the workload for an indeterminate amount of time, thereby efficiently emulating the operating conditions of long-running applications. In alternate embodiments, the benchmarking interface 150 may be configured to generate an “end” workload generation command in any technically feasible fashion. For example, the benchmarking interface 150 could implement a servlet context listener that detects when an application that is associated with the benchmarking interface 150 is terminated. When the servlet context listener detects that the application is terminated, then the benchmarking interface 150 could generate an end workload generation command. In some embodiments, the workload generation commands may include any number and type of additional commands that customize and/or optimize the benchmarking of the cloud data store 110. 
For instance, in some embodiments, the workload generation commands include a “backfill” command. If the benchmarking subsystem 140 receives the backfill command, then the benchmarking subsystem 140 executes one or more write commands on the cloud data store 110 prior to executing the workload. The one or more write commands store initial data in the cloud data store 110. The stored initial data, also commonly referred to as “hot” data, reduces the time required to test the cloud data store 110. Note that the techniques described herein are illustrative rather than restrictive, and may be altered without departing from the broader spirit and scope of the invention. In particular, the functionality provided by the benchmarking subsystem 140, the benchmarking engine 160, the benchmarking interface 150, the driver interface 170, the data store driver 180, the analysis cluster 190, and the cloud data store 110 may be implemented in any number of software applications in any combination. Further, in various embodiments, any number of the techniques disclosed herein may be implemented while other techniques may be omitted in any technically feasible fashion. Many modifications and variations on the functionality provided by the benchmarking subsystem 140, the benchmarking engine 160, the benchmarking interface 150, the driver interface 170, the data store driver 180, the analysis cluster 190, and the cloud data store 110 will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Alternate embodiments include any benchmarking application that enables dynamic reconfiguration of a workload executing on a database and/or generates and executes a workload on the database until explicitly terminated. FIG. 2 is a more detailed illustration of the operations performed by the benchmarking subsystem 140 of FIG. 1 when testing the cloud data store 110, according to various embodiments of the present invention. For explanatory purposes only, FIG. 2 depicts a sequence of events involved in testing the cloud data store 110 with circles that are labeled 1 through 13. In alternate embodiments, the number and sequence of events involved in testing the cloud data store 110 may vary. First, as depicted with the circle labeled 1, the benchmarking subsystem 140 receives a data store selection 210 via the benchmarking interface 150. As shown, the data store selection 210 specifies that the cloud data store 110 is to be the target of testing operations. The benchmarking subsystem 140 may enable the specification of the data store selection 210 in any technically feasible fashion. For example, in various embodiments, the benchmarking subsystem 140 could identify a number of available cloud data stores 110 via a discovery process and configure the benchmarking interface 150 to display the available cloud data stores 110 in a selection window. Subsequently, as depicted with the circle labeled 2, the benchmarking subsystem 140 receives a driver configuration 220 that configures the benchmarking subsystem 140 to interface with the cloud data store 110 via the data store driver 180(1). In operation, upon receiving the driver configuration 220 that specifies the data store driver 180(1), the benchmarking subsystem 140 connects to the cloud data store 110 via the data store driver 180(1). 
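To make the driver interface 170 concrete, the following is a minimal Java sketch of the six operations enumerated in connection with FIG. 1 (initialize, shut down, single read, single write, connection information, and a functional-test workflow), together with a toy in-memory driver in the spirit of the “InMemoryTest” driver that appears in FIG. 3. All interface, class, and method names here are illustrative assumptions, not the actual code of the patented system.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical rendering of the driver interface 170; all names are assumptions.
interface DataStoreDriver {
    void init(Map<String, String> connectionProperties); // initialize the cloud data store
    void shutdown();                                     // shut down the cloud data store
    String readSingle(String key);                       // perform a single read operation
    void writeSingle(String key, String value);          // perform a single write operation
    String getConnectionInfo();                          // get connection information
    boolean runWorkflow();                               // run a workflow for a functional test
}

// Toy driver in the spirit of the "InMemoryTest" driver of FIG. 3.
class InMemoryTestDriver implements DataStoreDriver {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    public void init(Map<String, String> connectionProperties) { /* nothing to connect to */ }
    public void shutdown() { store.clear(); }
    public String readSingle(String key) { return store.get(key); }
    public void writeSingle(String key, String value) { store.put(key, value); }
    public String getConnectionInfo() { return "in-memory, " + store.size() + " keys"; }
    public boolean runWorkflow() {                       // trivial functional test
        writeSingle("probe", "ok");
        return "ok".equals(readSingle("probe"));
    }
}
```

Because every driver exposes the same small surface, the benchmarking engine can swap one data store for another, or for a dynamically configured plugin, without changing the workload generation logic.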
As shown, the data store driver 180(1) is included in a driver list 215 that includes, without limitation, any number of the data store drivers 180 and a dynamic plugin 282. As a general matter, each of the data store drivers 180 and the dynamic plugin 282 implement the driver interface 170. The data store drivers 180 are typically written in a programming language and, consequently, are configured statically. Examples of the data store drivers 180 include DataStax Java Driver (Cassandra Query Language), Cassandra Astyanax (Thrift), ElasticSearch API, and Dyno (Jedis support). By contrast, the dynamic plugin 282 is dynamically configured via a script that is written in a scripting language, such as Groovy. In alternate embodiments, the driver list 215 may include any number and type of software applications that implement the driver interface 170. Subsequently, as depicted with the circle numbered 3, the benchmarking engine 160 receives workload properties 230 via the benchmarking interface 150. The benchmarking engine 160 may receive any number and type of workload properties 230 in any technically feasible fashion. Similarly, as depicted with the circle numbered 5, the benchmarking engine 160 receives a workload type 235. As depicted with the circle numbered 6, the benchmarking engine 160 then receives a start workload command 240 that configures the benchmarking engine 160 to generate and execute the workload based on the workload properties 230 and the workload type 235. As part of generating and executing the workload, the benchmarking engine 160 configures a thread pool 260 that includes any number of threads 262 that execute on the cloud data store 110. More specifically, the benchmarking engine 160 performs operations that configure the thread pool 260 based on the workload properties 230. The benchmarking engine 160 may configure the thread pool 260 based on any number and type of the workload properties 230. For instance, in some embodiments, the benchmarking engine 160 configures the thread pool 260 to include a number of the threads 262 that execute read operations on the cloud data store 110 based on the workload property 230 “numReaders.” The benchmarking engine 160 may configure the thread pool 260 and manage the threads 262 in any technically feasible fashion as known in the art. After the benchmarking engine 160 configures the thread pool 260, the benchmarking engine 160 generates data store operations 250 based on the workload properties 230 and the workload type 235. The benchmarking engine 160 may generate the data store operations 250 based on any number and type of the workload properties 230 and the workload type 235. For example, the benchmarking engine 160 could generate read operations at a rate that is specified by the workload property 230 “readRateLimit” and based on a sliding window traffic pattern that is specified by the workload type 235. As depicted with the circle labeled 7, as the benchmarking engine 160 generates each of the data store operations 250, the benchmarking engine 160 assigns the data store operation 250 to one of the threads 262 included in the thread pool 260. The thread 262 then executes the data store operation 250 on the cloud data store 110 via the data store driver 180(1). In alternate embodiments, the benchmarking engine 160 may cause the data store operations 250 to execute on the cloud data store 110 in any technically feasible fashion. 
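As a rough illustration of the thread pool 260 and the rate-limited generation of data store operations 250 just described, the sketch below paces read operations at a hypothetical readRateLimit and hands each one to a fixed pool of numReaders threads. It reuses the hypothetical DataStoreDriver sketched earlier; the property values and all names are assumptions for illustration only.

```java
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class WorkloadSketch {
    public static void main(String[] args) throws InterruptedException {
        int numReaders = 4;       // hypothetical workload property "numReaders"
        int readRateLimit = 100;  // hypothetical "readRateLimit", in reads per second
        int numKeys = 1_000;      // hypothetical "numKeys" key sample space

        DataStoreDriver driver = new InMemoryTestDriver();
        ExecutorService readerPool = Executors.newFixedThreadPool(numReaders);
        ScheduledExecutorService pacer = Executors.newSingleThreadScheduledExecutor();
        Random random = new Random();

        // Generate one read operation per tick and assign it to a reader thread,
        // loosely mirroring how operations 250 are assigned to threads 262.
        pacer.scheduleAtFixedRate(
                () -> readerPool.submit(() -> driver.readSingle("key-" + random.nextInt(numKeys))),
                0, 1_000_000 / readRateLimit, TimeUnit.MICROSECONDS);

        TimeUnit.SECONDS.sleep(5); // run briefly; the real engine runs until an end command
        pacer.shutdownNow();
        readerPool.shutdown();
    }
}
```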
The process of generating the data store operations 250 and causing the data store operations 250 to execute on the cloud data store 110 is also referred to herein as “generating and executing the workload.” As the cloud data store 110 executes the data store operations 250, the cloud data store 110 transmits the results to the benchmarking engine 160. The benchmarking engine 160 receives the results of the data store operations 250 and generates statistics 280. The statistics 280 may include any amount and type of data that measures the functionality and/or performance of the cloud data store 110. As depicted with the circles labeled 8, the benchmarking engine 160 transmits the statistics 280 to the analysis cluster 190 and the benchmarking interface 150. The benchmarking interface 150 then displays the statistics 280. As a general matter, the benchmarking engine 160 is configured to execute multiple operations during the benchmarking process substantially in parallel. For example, the benchmarking engine 160 typically generates the statistics 280 associated with the data store operations 250 that have finished executing on the cloud data store 110 while generating new data store operations 250. Dynamically Re-Configuring a Workload In particular, as depicted with the circle labeled 9, as the benchmarking engine 160 generates and executes the workload based on current workload properties 230 and the workload type 235, the benchmarking engine 160 receives new workload properties 230. As depicted with the circle labeled 10, the benchmarking engine 160 modifies the workload based on the new workload properties 230. More specifically, if the new workload properties 230 are associated with the thread pool 260, then the benchmarking engine 160 re-configures the thread pool 260. For example, the benchmarking engine 160 could increase or decrease the number of threads 262 that are included in the thread pool 260. Further, the benchmarking engine 160 generates subsequent data store operations 250 based on the new workload properties 230 instead of the current workload properties 230. Accordingly, the new workload properties 230 become the current workload properties 230. As depicted with the circle labeled 11, the benchmarking engine 160 continues to assign the data store operations 250 to the threads 262 that execute the data store operations 250 on the cloud data store 110. In a complementary fashion, as depicted with the circle labeled 12, the benchmarking engine 160 continues to generate and transmit the statistics 280 to the analysis cluster 190 and the benchmarking interface 150. Such a process causes the benchmarking interface 150 to dynamically display the statistics 280. The benchmarking subsystem 140 continues to perform testing operations in this fashion until the benchmarking subsystem 140 receives an end workload command 290 (depicted with the circle labeled 13) via the benchmarking interface 150. FIG. 3 illustrates an example configuration of the benchmarking interface 150 of FIG. 2, according to various embodiments of the present invention. As shown, the benchmarking interface 150 includes, without limitation, the data store selection pane 310, the driver configuration pane 320, the workload properties subpane 330, the workload generation pane 340, and the statistics display pane 380. In alternate embodiments, the benchmarking interface 150 may include any number and type of interface widgets (e.g., panes, sliders, buttons, menus, etc.) 
that enable an engineer to configure and execute the benchmarking subsystem 140 to test the cloud data store 110. In yet other alternate embodiments, the benchmarking interface 150 may be replaced with an API. As shown, the data store selection pane 310 identifies the data store selection 210 of the cloud data store 110 “localhost.” The driver configuration pane 320 identifies that the benchmarking subsystem 140 is connected to the cloud data store 110 via the data store driver 180(1) “InMemoryTest.” The workload properties subpane 330 displays the values for the workload properties 230. As shown, the workload properties 230 include “initial settings” and “runtime settings.” The initial settings can be modified before the benchmarking engine 160 connects to the cloud data store 110 via the data store driver 180(1). By contrast, the runtime settings can be modified at any time. The workload generation pane 340 identifies that the benchmarking engine 160 included in the benchmarking node 130 “localhost:8080” is generating and executing a workload on the cloud data store 110. The workload generation pane 340 further identifies that the workload is of the workload type 235 “random.” In a complementary fashion, the statistics display pane 380 displays a selection of the statistics 280 that the benchmarking engine 160 included in the benchmarking node 130 generates based on the results received from the cloud data store 110. FIG. 4 is a flow diagram of method steps for testing a cloud data store, according to various embodiments of the present invention. Although the method steps are described with reference to the systems of FIGS. 1-3, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention. For explanatory purposes only, the context of FIG. 4 is that a single instance of the benchmarking subsystem 140 included in a single benchmarking node 130 is executing the method steps. As a general matter, any number of instances of the benchmarking subsystem 140 included in any number of the benchmarking nodes 130 may execute any number of the method steps independently and substantially in parallel. As shown, a method 400 begins at step 404, where the benchmarking subsystem 140 receives the data store selection 210 via the benchmarking interface 150. The data store selection 210 specifies the cloud data store 110. At step 406, the benchmarking subsystem 140 receives the driver configuration 220 via the benchmarking interface 150. The driver configuration 220 requests that the benchmarking engine 160 interface with the cloud data store 110 via the data store driver 180(1). At step 408, the benchmarking engine 160 connects to the cloud data store 110 via the data store driver 180(1). At step 410, the benchmarking engine 160 receives the workload properties 230 via the benchmarking interface 150, and the benchmarking engine 160 sets “current” workload properties 230 equal to the workload properties 230. At step 412, the benchmarking engine 160 receives the workload type 235 via the benchmarking interface 150. At step 413, the benchmarking engine 160 receives the start workload command 240 via the benchmarking interface 150. At step 414, the benchmarking engine 160 configures the thread pool 260 based on the current workload properties 230. At step 416, the benchmarking engine 160 generates the data store operations 250 based on the current workload properties 230 and the workload type 235. 
As the benchmarking engine 160 generates each of the data store operations 250, the benchmarking engine 160 assigns the data store operation 250 to one of the threads 262 included in the thread pool 260 for execution on the cloud data store 110 via the data store driver 180(1). At step 418, the benchmarking engine 160 receives results of the executed data store operations 250 from the cloud data store 110, generates the statistics 280 based on the results, and transmits the statistics 280 to the analysis cluster 190 and the benchmarking interface 150. The benchmarking interface 150 displays the statistics 280 via the statistics display pane 380. At step 422, the benchmarking engine 160 determines whether the benchmarking engine 160 has received new workload properties 230 via the benchmarking interface 150. If, at step 422, the benchmarking engine 160 determines that the benchmarking engine 160 has received new workload properties 230, then the method 400 proceeds to step 424. At step 424, the benchmarking engine 160 sets the current workload properties 230 equal to the new workload properties 230, and the method 400 returns to step 414, where the benchmarking engine 160 adjusts the workload based on the current workload properties 230. The benchmarking engine 160 continues to cycle through steps 414-424, dynamically adjusting the workload based on new workload properties 230, until the benchmarking engine 160 does not receive any new workload properties 230. If, however, at step 422, the benchmarking engine 160 determines that the benchmarking engine 160 has not received any new workload properties 230, then the method 400 proceeds directly to step 426. At step 426, the benchmarking engine 160 determines whether the benchmarking engine 160 has received the end workload command 290. If, at step 426, the benchmarking engine 160 determines that the benchmarking engine 160 has not received the end workload command 290, then the method 400 returns to step 416, where the benchmarking engine 160 continues to generate the data store operations 250. The benchmarking engine 160 continues to cycle through steps 416-426, generating and executing the workload until the benchmarking engine 160 receives the end workload command 290. If, however, at step 426, the benchmarking engine 160 determines that the benchmarking engine 160 has received the end workload command 290, then the method 400 terminates. In sum, the disclosed techniques may be used to test a cloud data store. A benchmarking subsystem includes, without limitation, a benchmarking interface, a driver interface, and a benchmarking engine. The benchmarking interface enables selection of a cloud data store and a data store driver, dynamic specification of workload properties, and generation and execution of a workload on the cloud data store. The driver interface enables the benchmarking engine to interface with the cloud data store via a compatible data store driver. To test the cloud data store, the benchmarking engine generates data store operations based on the workload properties and a workload type. The benchmarking engine attaches the data store operations to threads that execute on the cloud data store via the data store driver. As the cloud data store executes the data store operations, the benchmarking engine generates statistics based on the results of the executed data store operations. The benchmarking engine transmits the statistics to an analysis cluster that performs any number of analysis operations. 
The benchmarking engine also transmits the statistics to the benchmarking interface for display purposes. Notably, the workload properties may be dynamically updated via the benchmarking interface while the benchmarking engine generates and executes data store operations, without terminating the benchmarking engine. Upon receiving new workload properties, the benchmarking engine generates data store operations based on the new workload properties instead of the previously specified workload properties. As a general matter, the benchmarking engine continues to generate data store operations based on the current workload properties until the benchmarking engine receives an end command via the benchmarking interface. Advantageously, the benchmarking subsystem may be configured to automatically test data stores for use cases that are not efficiently supported by conventional benchmarking engines. Unlike conventional benchmarking engines, because the workload properties may be updated without terminating the benchmarking engine, a user may adjust the workload based on the statistics generated by the benchmarking engine. Fine-tuning the workload as the benchmarking engine executes can dramatically reduce the time required to evaluate the performance of the cloud data store. Further, because the benchmarking engine executes until receiving an end command, the benchmarking engine may be configured to provide statistics for long-running use cases that are not supported by conventional benchmarking engines. Finally, because the benchmarking subsystem implements a variety of flexible interfaces, the benchmarking engine may be integrated with a wide range of cloud data stores, data store drivers, analysis clusters, cloud services, and the like. 1. In some embodiments, a method comprises processing one or more workload generation operations to generate a first plurality of data store operations based on first configuration data; executing at least one of the data store operations included in the first plurality of data store operations on a data store to obtain first statistics that are associated with a performance of the data store; while continuing to process the one or more workload generation operations, receiving second configuration data, modifying the one or more workload generation operations to generate a second plurality of data store operations based on the second configuration data, executing at least one of the data store operations included in the second plurality of data store operations on the data store to obtain second statistics that are associated with the performance of the data store; and displaying or transmitting for further processing at least one of the first statistics and the second statistics. 2. The method of clause 1, further comprising receiving an end command and, in response, ceasing to process the workload generation operations. 3. The method of clauses 1 or 2, further comprising, prior to processing the one or more workload generation operations, receiving a first command that specifies the data store; receiving a second command that specifies a driver; and establishing a connection to the data store through the driver. 4. The method of any of clauses 1-3, wherein the driver comprises a driver application that is written in a programming language or a dynamic plugin that is associated with a script. 5. 
The method of any of clauses 1-4, wherein executing at least one of the data store operations included in the first plurality of data store operations comprises assigning the at least one of the data store operations to at least one thread included in a thread pool to generate at least one configured thread; and causing the data store to execute the at least one configured thread. 6. The method of any of clauses 1-5, further comprising, prior to modifying the one or more workload generation operations, modifying a number of threads included in the thread pool based on the second configuration data. 7. The method of any of clauses 1-6, further comprising, while continuing to process the one or more workload generation operations, receiving at least one subsequent configuration data; and for each subsequent configuration data included in the at least one subsequent configuration data, modifying the one or more workload generation operations to generate a subsequent plurality of data store operations based on the subsequent configuration data, executing at least one of the data store operations included in the subsequent plurality of data store operations on the data store to obtain subsequent statistics that are associated with the performance of the data store, and displaying or transmitting for further processing the subsequent statistics. 8. The method of any of clauses 1-7, wherein the first configuration data includes at least one of a rate of read operations, a rate of write operations, a number of threads, a size of data, and a traffic pattern. 9. The method of any of clauses 1-8, wherein the second configuration data includes at least one of an updated rate of read operations, an updated rate of write operations, and an updated number of threads. 10. In some embodiments, a computer-readable storage medium includes instructions that, when executed by a processor, cause the processor to perform the steps of establishing a connection to a data store through a driver; generating a first workload based on first configuration data; causing the first workload to execute on the data store to obtain first statistics that are associated with a performance of the data store and the first configuration data; while remaining connected to the data store, generating a second workload based on second configuration data, causing the second workload to execute on the data store to obtain second statistics that are associated with the performance of the data store and the second configuration data; and displaying or transmitting for further processing at least one of the first statistics and the second statistics. 11. The computer-readable storage medium of clause 10, wherein generating the first workload comprises processing one or more workload generation operations to generate a first plurality of data store operations based on the first configuration data; and generating the second workload comprises modifying the one or more workload generation operations to generate a second plurality of data store operations based on the second configuration data. 12. The computer-readable storage medium of clauses 10 or 11, further comprising receiving an end command and, in response, ceasing to process the one or more workload generation operations. 13. 
The computer-readable storage medium of any of clauses 10-12, wherein the first workload comprises a plurality of data store operations, and causing the first workload to execute on the data store comprises assigning at least one of the data store operations included in the plurality of data store operations to at least one thread included in a thread pool to generate at least one configured thread; and causing the data store to execute the at least one configured thread. 14. The computer-readable storage medium of any of clauses 10-13, further comprising, prior to generating the second workload, modifying a number of threads included in the thread pool based on the second configuration data. 15. The computer-readable storage medium of any of clauses 10-14, wherein the first configuration data includes at least one of a rate of read operations, a rate of write operations, a number of threads, a size of data, and a traffic pattern. 16. The computer-readable storage medium of any of clauses 10-15, wherein the traffic pattern comprises a sliding window of data that is characterized by at least one of temporally proximate data and spatially proximate data. 17. The computer-readable storage medium of any of clauses 10-16, wherein transmitting at least one of the first statistics and the second statistics comprises transmitting at least one of the first statistics and the second statistics to an analysis application for further processing. 18. In some embodiments, a system comprises a memory storing instructions associated with a benchmarking engine; and a processor that is coupled to the memory and, when executing the instructions, is configured to process one or more workload generation operations to generate a first plurality of data store operations based on first configuration data; assign at least one of the data store operations included in the first plurality of data store operations to at least a first thread included in a thread pool to generate at least a first configured thread; cause the data store to execute the at least a first configured thread to obtain first statistics that are associated with a performance of the data store; while continuing to process the one or more workload generation operations, receive second configuration data, modify at least one of the workload generation operations and the thread pool based on the second configuration data, assign at least one of the data store operations included in the second plurality of data store operations to at least a second thread included in the thread pool to generate at least a second configured thread; and cause the data store to execute the at least a second configured thread to obtain second statistics that are associated with the performance of the data store; and display or transmit for further processing at least one of the first statistics and the second statistics. 19. The system of clause 18, wherein the processor is further configured to receive an end command and, in response, cease to process the workload generation operations. 20. The system of clause 18 or 19, wherein the first configuration data includes at least one of a rate of read operations, a rate of write operations, a number of threads, a size of data, and a traffic pattern. 21. The system of any of clauses 18-20, wherein the processor is further configured to, prior to processing the one or more workload generation operations, generate one or more write operations that store initial data in the data store. 
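The essence of clauses 1, 7, and 9 is that the generation loop consults the current configuration data on every operation, so new configuration can be swapped in while the loop keeps running. The following minimal Java sketch, building on the hypothetical driver above, shows one way that might look; the property names and structure are assumptions, not the claimed implementation.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

class DynamicWorkload {
    // Hypothetical subset of dynamically configurable workload properties.
    record WorkloadProperties(int readRateLimit, boolean readEnabled) {}

    private final AtomicReference<WorkloadProperties> current =
            new AtomicReference<>(new WorkloadProperties(100, true));
    private volatile boolean ended = false;

    // Invoked with second (or subsequent) configuration data, e.g. via a REST call.
    void reconfigure(WorkloadProperties next) { current.set(next); }

    // Invoked when an end command is received.
    void end() { ended = true; }

    void run(DataStoreDriver driver) throws InterruptedException {
        long sequence = 0;
        while (!ended) { // runs indefinitely until an end command, not for a fixed duration
            WorkloadProperties properties = current.get(); // pick up any new configuration
            if (properties.readEnabled()) {
                driver.readSingle("key-" + (sequence++ % 1_000));
            }
            TimeUnit.MICROSECONDS.sleep(1_000_000 / properties.readRateLimit());
        }
    }
}
```

Calling reconfigure with a lower readRateLimit immediately slows the loop without terminating it, which is the behavioral difference from the conventional engines described in the background.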
The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine. The instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. 
In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. 15394448 netflix, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 5th, 2022 04:33PM Apr 5th, 2022 04:33PM Netflix Consumer Services General Retailers
nasdaq:nflx Netflix Feb 10th, 2015 12:00AM Jan 4th, 2013 12:00AM https://www.uspto.gov?id=US08954495-20150210 Proxy application with dynamic filter updating The disclosure describes a proxy server application that supports the dynamic modification of proxy rules implemented by a proxy server. The proxy rules implemented by the proxy server specify network behaviors to be performed at various points during the handling of requests from client applications. A proxy server implements the proxy rules by processing one or more user-generated network traffic filters for managing network traffic. In an embodiment, users generate network traffic filters by creating network traffic filter source code that specify processing steps to be performed by a proxy server relative to network messages the proxy server receives. In an embodiment, user-generated network traffic filters may be added, removed, reordered, or otherwise modified in a proxy server application at runtime in order to respond to current network conditions or to achieve other desired proxy configurations. 8954495 1. A method comprising: in a proxy server that is configured to receive requests directed toward one or more origin servers and to distribute the requests to one or more of the origin servers for processing, loading, from a data repository, one or more first network traffic filters, wherein each of the one or more first network traffic filters comprises an executable unit of computer program code specifying processing criteria and one or more actions; while the proxy server is executing and without ending execution of the proxy server, performing one or more of: loading and initiating operation of one or more second network traffic filters; removing one or more of the first network traffic filters; reordering one or more of the first network traffic filters; receiving, at the proxy server, a network message; for a particular network traffic filter of the one or more first network traffic filters, wherein the particular network traffic filter comprises particular processing criteria and one or more particular actions: determining whether the network message satisfies the particular processing criteria; in response to determining that the network message satisfies the particular processing criteria, causing the one or more particular actions to be performed; wherein the method is performed on one or more computing devices. 2. The method of claim 1, wherein the one or more first network traffic filters comprise one or more of: a first network traffic filter chain comprising one or more pre-processing network traffic filters that are configured to process the requests before the requests are distributed to the one or more of the origin servers; a second network traffic filter chain comprising one or more dispatch network traffic filters that are configured to distribute the requests to the one or more of the origin servers; and a third network traffic filter chain comprising one or more post-processing network traffic filters that are configured to process responses returned by the one or more of the origin servers before the response is sent to a requesting client. 3. 
The method of claim 2, wherein the one or more first network traffic filters further comprise one or more of: a fourth network traffic filter chain comprising one or more static network traffic filters that are configured to process requests and return response messages without distributing the requests to the one or more of the origin servers; and a fifth network traffic filter chain comprising one or more error network traffic filters that are configured to process errors generated by one or more other network traffic filters. 4. The method of claim 1, wherein the one or more particular actions include one or more of: validating the network message, authenticating the network message, modifying the network message, caching the network message, storing information associated with the network message, sending the network message to one or more second network elements, causing the sending or delivery of the network message to be delayed, modifying application behavior, replying to the network message. 5. The method of claim 1, wherein the determining whether the network message satisfies the particular processing criteria includes examining one or more of: a header associated with the network message, a message body associated with the network message, contextual data generated by one or more of the first network traffic filters and second network traffic filters. 6. The method of claim 1, wherein the determining whether the network message satisfies the particular specified criteria includes determining one or more of: a type of device that generated the network message, a network address associated with the network message, a geographic location associated with a client generating the network message, a user associated with the network message, a resource requested by the network message. 7. The method of claim 1, wherein determining whether the network message satisfies the particular specified criteria is based at least in part on one or more of: random sampling, algorithmic sampling. 8. The method of claim 1, wherein the proxy server loading one or more first network traffic filters further comprises: the proxy server receiving one or more execution order values, wherein each execution order value determines an order to evaluate a particular network traffic filter relative to the other first network traffic filters; the proxy server ordering the first network traffic filters according to the received execution order values. 9. The method of claim 1, wherein the proxy server loading, from the data repository, one or more first network traffic filters further comprises loading one or more network traffic filter source code files. 10. The method of claim 9, wherein each of the one or more network traffic filter source code files specifies one or more of: a filter type, an execution order value, processing criteria, and one or more processing actions. 11. 
A non-transitory computer-readable data storage medium storing one or more sequences of instructions which when executed cause one or more processors to perform, in a proxy server that is configured to receive requests directed toward one or more origin servers and to distribute the requests to one or more of the origin servers for processing: loading, from a data repository, one or more first network traffic filters, wherein each of the one or more first network traffic filters comprises an executable unit of computer program code specifying processing criteria and one or more actions; while the proxy server is executing and without ending execution of the proxy server, performing one or more of: loading and initiating operation of one or more second network traffic filters; removing one or more of the first network traffic filters; reordering one or more of the first network traffic filters; receiving, at the proxy server, a network message; for a particular network traffic filter of the one or more first network traffic filters, wherein the particular network traffic filter comprises particular processing criteria and one or more particular actions: determining whether the network message satisfies the particular processing criteria; in response to determining that the network message satisfies the particular processing criteria, causing the one or more particular actions to be performed. 12. The non-transitory computer-readable data storage medium of claim 11, wherein the one or more first network traffic filters comprise one or more of: a first network traffic filter chain comprising one or more pre-processing network traffic filters that are configured to process the requests before the requests are distributed to the one or more of the origin servers; a second network traffic filter chain comprising one or more dispatch network traffic filters that are configured to distribute the requests to the one or more of the origin servers; and a third network traffic filter chain comprising one or more post-processing network traffic filters that are configured to process responses returned by the one or more of the origin servers before the response is sent to the requesting client. 13. The non-transitory computer-readable data storage medium of claim 12, wherein the one or more first network traffic filters further comprise one or more of: a fourth network traffic filter chain comprising one or more static network traffic filters that are configured to process requests and return response messages without distributing the requests to the one or more of the origin servers; and a fifth network traffic filter chain comprising one or more error network traffic filters that are configured to process errors generated by one or more other network traffic filters. 14. The non-transitory computer-readable data storage medium of claim 11, wherein the one or more particular actions include one or more of: validating the network message, authenticating the network message, modifying the network message, caching the network message, storing information associated with the network message, sending the network message to one or more second network elements, causing the sending or delivery of the network message to be delayed. 15. 
The non-transitory computer-readable data storage medium of claim 11, wherein the determining whether the network message satisfies the particular processing criteria includes examining one or more of: a header associated with the network message, a message body associated with the network message, contextual data generated by one or more of the first network traffic filters and second network traffic filters. 16. The non-transitory computer-readable data storage medium of claim 11, wherein the determining whether the network message satisfies the particular specified criteria includes determining one or more of: a type of device that generated the network message, a network address associated with the network message, a resource requested by the network message, a geographic location associated with a client generating the network message, a user associated with the network message. 17. The non-transitory computer-readable data storage medium of claim 11, wherein determining whether the network message satisfies the particular specified criteria is based at least in part on one or more of: random sampling, algorithmic sampling. 18. The non-transitory computer-readable data storage medium of claim 11, further comprising instructions which, when executed by the one or more processors, cause the one or more processors to perform: the proxy server receiving one or more execution order values, wherein each execution order value determines an order to evaluate a particular network traffic filter relative to the other first network traffic filters; the proxy server ordering the first network traffic filters according to the received execution order values. 19. The non-transitory computer-readable data storage medium of claim 18, wherein the proxy server loading, from the data repository, one or more first network traffic filters further comprises loading one or more network traffic filter source code files. 20. The non-transitory computer-readable data storage medium of claim 18, wherein each of the one or more network traffic filter source code files specifies one or more of: a filter type, an execution order value, processing criteria, and one or more processing actions. TECHNICAL FIELD The present disclosure generally relates to the use of proxy servers in computer networks. The disclosure relates more specifically to a proxy server application that provides for dynamic updating of defined network behaviors implemented by a proxy server. BACKGROUND The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. The servers that provide some of the most popular web-based services in networked computing may often attract network traffic from thousands of client device types that in total generate millions, or even billions, of network requests on a daily basis. In order to efficiently handle network traffic levels of these scales, among other reasons, web-based application providers commonly utilize proxy servers in web-based application network infrastructures. In general, a proxy server acts as an intermediary between requesting clients and the origin servers that process the client requests.
In this manner, proxy servers provide a centralized point of ingress and egress for network traffic in a web-based application network infrastructure and enable the implementation of various network policies or functions at the proxy in order to reduce processing demands on the origin servers, manage the flow of network traffic, and gain insights into system behavior. Examples of policies or functions include caching, diagnosing error conditions, load balancing, authentication, and authorization. Certain proxy servers are primarily implemented as application software that runs on a server and are generally configured for specific situations. However, existing proxy server applications have a number of disadvantages. For example, existing proxy server applications provide primarily for the specification of statically defined network behaviors that are configurable in only a limited number of ways defined by the application. Further, even minor modifications to existing proxy server applications typically require the redeployment or rebooting of the entire proxy server application to any proxy servers running the application. These factors and others often complicate the challenge of responding to the ever-changing network conditions in web-based application environments that often call for timely modifications to be made to proxy server configurations in order to protect back-end systems, combat rogue clients, diagnose problems, modify application behavior, and otherwise ensure the accessibility of web-based services. BRIEF DESCRIPTION OF THE DRAWINGS In the drawings: FIG. 1 illustrates a proxy server in a computer network; FIG. 2A illustrates a first example proxy server application arrangement in a local area network; FIG. 2B illustrates a second example proxy server application arrangement in a local area network; FIG. 3A illustrates an example of an architecture for a proxy server application; FIG. 3B illustrates examples of a proxy server application routing network requests in a network, according to an embodiment; FIG. 4 illustrates an example network traffic filter source code file; FIG. 5 illustrates an example processing flow of a network request by a proxy server application; FIG. 6 illustrates a method of processing network messages by a proxy server; FIG. 7 illustrates a computer system upon which an embodiment may be implemented. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. Embodiments are described herein according to the following outline: 1.0 General Overview 2.0 Structural and Functional Overview 3.0 Proxy Server Application Overview 3.1 Network Traffic Filters 3.2 Overview of Example Operation 4.0 Implementation Mechanisms—Hardware Overview 5.0 Other Aspects of Disclosure 1.0 General Overview In one embodiment, the disclosure describes a proxy server application that supports the dynamic modification of defined proxy rules implemented by a proxy server.
In general, in an embodiment a proxy server is configured to receive requests from clients directed toward one or more origin servers, dispatch the requests to origin servers for processing, receive responses from the origin servers, and send the responses back to the requesting clients. The proxy rules implemented by the proxy server specify desired network behaviors to be performed at various points during the handling of client requests by the proxy server. In an embodiment, a proxy server implements particular proxy rules by processing one or more network traffic filters. In this context, a network traffic filter refers to an executable unit of computer program code that performs one or more processing steps relative to a network message received by a proxy server. Network traffic filters may comprise virtually any processing actions to be performed by a proxy server in order to respond to current network conditions or otherwise achieve desired proxy configurations. In an embodiment, a proxy server processes network traffic filters as part of one or more network traffic filter chains, with each filter chain including one or more individual network traffic filters and corresponding to a particular point or other event during the handling of a network message received by the proxy server. In an embodiment, users may generate network traffic filters by specifying the attributes of the network traffic filters in one or more filter source code files. In an embodiment, user-created filter source code files may specify one or more of the following network traffic filter attributes: an associated filter chain, a processing order relative to other filters, one or more filter processing actions, criteria for performance of the processing actions, and other functions. In an embodiment, users may create and distribute, or publish, new and/or updated filter source code files to a centralized data repository that stores and makes the filter source code files available for use by one or more proxy server applications. In an embodiment, a proxy server application enables dynamic updating of proxy rules by periodically retrieving published filter source code files from a data repository and loading the filter source code files at runtime to be processed by the proxy server application as one or more network traffic filter objects, also referred to herein as network traffic filters. “Dynamic updating,” in this context, includes loading a new network traffic filter into the proxy server application, or removing a particular network traffic filter in the proxy server application, or changing the processing order or other functionality of an existing network traffic filter in the proxy server application, or moving a particular network traffic filter from a first filter chain or set of network traffic filters to a different filter chain or set of network traffic filters in the proxy application, at runtime without pausing, stopping or restarting the proxy server application and without a full deployment of the proxy server application. 2.0 Structural and Functional Overview FIG. 1 illustrates a proxy server 106 in a network. In the example of FIG. 1, computer clients 102 are coupled directly or indirectly through one or more networks 104 to a proxy server 106. The proxy server 106, which may comprise a computer or a process hosted on or executed on other elements of FIG. 1, is coupled directly or indirectly through one or more networks 108 to one or more origin servers 112. 
In this arrangement, the proxy server 106 intermediates communication between clients 102 and applications residing on the distributed set of origin servers 112, which may be interconnected by a wide area network and/or local area network. There may be a multitude of proxy servers 106 in other embodiments, but one proxy server 106 is shown in FIG. 1 for the purposes of illustrating a clear example. In an embodiment, clients 102 generally include any computing devices capable of requesting services over a network and include, for example, personal computers, smartphones, tablet computers, processor-equipped televisions, set-top boxes, game consoles, digital video recorders, etc. Networks 104, 108 each comprises a computer network, such as, for example, a local area network (LAN), wide area network (WAN), or internetwork such as the Internet. Networks 104, 108 may contain additional network elements such as routers. In the context of a proxy arrangement, typically network 104 is the Internet or other WAN and network 108 is a LAN, but the present disclosure is not limited to this network arrangement and other network arrangements including proxy server 106 are within the scope of the present disclosure. In an embodiment, origin servers 112 comprise one or more computing devices that host, execute, or otherwise implement one or more applications that provide various resources or services over a network to requesting clients. For example, origin servers 112 may comprise web servers, database servers, media content streaming servers, or any other types of application servers. In general, origin servers 112 respond to incoming requests from clients for some resource or service by processing the request and providing a response to the requesting client. In an embodiment, proxy server 106 is a network element, computer program, or some combination thereof, that acts as an intermediary for requests from clients 102 seeking resources from one or more origin servers 112. In the network arrangement illustrated in FIG. 1, proxy server 106 may be referred to as a “reverse” proxy server due to its proximity to the origin servers 112 in the network arrangement. In an embodiment, proxy server 106 is a Hypertext Transfer Protocol (HTTP) proxy and proxies HTTP requests from clients 102 directed toward origin servers 112; however, the proxy server application described herein is generally applicable to any network traffic protocol and is not limited to proxying HTTP requests. In operation, as one of clients 102 sends a request for a resource or service provided by an origin server 112, the request traverses through the proxy server 106. In an embodiment, in response to receiving a network request, a proxy server application running on proxy server 106 performs one or more processing steps, as described in further detail herein, and dispatches the request on to one or more of origin servers 112. In response to receiving a dispatched request from proxy server 106, an origin server 112 processes the request and generates a response message that is returned to proxy server 106. In an embodiment, in response to receiving a response message from an origin server 112, the proxy server application running on proxy server 106 may perform one or more additional processing steps before sending the response message back to the requesting client of clients 102. In various embodiments, a proxy server application may be utilized in network architectures as a separate server tier or embedded in an existing service. FIGS.
2A, 2B illustrate example proxy server application arrangements. Referring first to FIG. 2A, a proxy server 210A comprises a stand-alone computer tier in a local area network 206A. In FIG. 2A, service 208A may comprise an application program hosted on a computer that receives requests from clients 202A over a network 204A and communicates with origin servers 212A in order to process client requests. For example, service 208A may be an application programming interface (API) service that receives requests from clients 202A and sends additional requests to origin servers 212A in order to service the requests from clients 202A. In this arrangement, proxy server 210A may be arranged between the service 208A and origin servers 212A. In an embodiment, proxy server 210A comprises a proxy server application and intermediates requests between service 208A and origin servers 212A, performing one or more processing steps on network messages that traverse through proxy server 210A. In another embodiment, FIG. 2B illustrates an example of a proxy server application embedded in a service 208B. For example, service 208B may receive requests from clients 202B over network 204B and, similarly to service 208A described above, service 208B may reroute the requests or send additional requests to origin servers 212B in order to process requests from clients 202B. In an embodiment, instead of adding an additional standalone proxy server tier between service 208B and origin servers 212B, a proxy server application 210B may be embedded in service 208B. In the example of FIG. 2B, requests sent from service 208B to origin servers 212B are intercepted and processed by the embedded proxy server application 210B in a similar manner as if proxy server application 210B were installed on a separate server tier. The proxy server application deployment arrangements illustrated in FIGS. 2A, 2B are provided as examples, and the techniques described herein are not limited to these network arrangements. 3.0 Proxy Server Application Overview FIG. 3A illustrates a computer system 300 that includes an example architecture for a proxy server application that enables dynamic updating of network traffic filters. The computer system 300 includes a file publisher 304, file repository 306, and a proxy server application 308. In an embodiment, proxy server application 308 comprises a file manager 310, a loader 312, proxy application processor 314, and network traffic filter chains 316A-316C. Each of the file publisher 304, file repository 306, proxy server application 308, file manager 310, loader 312, proxy application processor 314, and network traffic filter chains 316A-316C may be implemented, in various embodiments, as one or more computer programs, code segments, scripts, configuration files, firmware, hardware logic, special-purpose computers, or a combination thereof. Referring now to FIG. 3A, a file publisher 304 is configured to manage the storage of user-generated filter source code files 302 in a file repository 306. In an embodiment, the user-generated filter source code files 302 comprise program code and other logic that form the basis of network traffic filters used by a proxy server application 308. For example, a filter source code file may be a user-generated scripting language source file comprising program code specifying one or more desired proxy rules. In an embodiment, users may create the filter source code files 302 using a dynamic scripting language such as the Groovy programming language.
The Groovy programming language is based on open source software currently available online at the Internet domain groovy.codehaus.org. As described in further detail below, use of a dynamic scripting language such as Groovy enables user-generated program code to be dynamically loaded into a proxy server application at runtime for execution by the proxy server application. The Groovy programming language is provided only as an example, and the present disclosure is not limited to any particular programming language. A user may create a filter source code file locally on the user's computer and desire that the filter source code file be made available to a proxy server application 308 in order for the filter source code file to be loaded as an active network traffic filter for use by the proxy server application. In an embodiment, a user may cause a filter source code file to be made available to proxy server application 308 by storing the filter source code file in a file repository 306 via a file publisher 304. For example, file publisher 304 may be a command line tool or other application program that enables a user to transmit and store user-generated filter source code files in a file repository 306. Other techniques for publishing user-generated filter source code files to a file repository 306 may be used according to the requirements of the implementation at hand and the present disclosure is not limited to any particular technique by which filter source code files are stored in a file repository 306. In an embodiment, file repository 306 stores the user-generated filter source code files and causes stored filter source code files to be available to a proxy server application 308. In general, a file repository 306 provides a centralized storage location of user-generated filter source code files that is accessible to any of the proxy servers that are configured to implement the proxy rules defined by the filter source code files in the repository. File repository 306 may be implemented using a data storage mechanism such as, for example, a database system or a commonly mounted file system. A proxy server application 308 may access file repository 306 over a network, or the file repository may be stored locally on a proxy server hosting the proxy server application. In some embodiments, file repository 306 is implemented using the Apache Cassandra distributed database management system. Apache Cassandra is open source software maintained by the Apache Software Foundation and currently available online at the Internet domain cassandra.apache.org. In FIG. 3A, proxy server application 308 retrieves and loads user-generated filter source code files for use from a file repository 306 using a file manager 310 and a loader 312. In an embodiment, file manager 310 may poll file repository 306 and periodically determine whether the repository contains new filter source code files or updated versions of filter source code files currently loaded by proxy server application 308. In an embodiment, in response to determining that one or more new and/or updated filter source code files are available in file repository 306, file manager 310 retrieves any new and/or updated filter source code files from file repository 306 and stores the filter source code files locally in one or more directories on a proxy server hosting proxy server application 308. In an embodiment, new and/or updated filter source code files retrieved from file repository 306 by file manager 310 are sent to a loader 312.
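To make the publish-and-poll flow described above concrete, the following is a minimal Groovy sketch of a file manager that periodically polls a repository and mirrors new or changed filter source code files into a local directory. The FilterRepository interface, its fetchAll method, and the class and field names are illustrative assumptions for this sketch only; they are not components defined by the present disclosure.

    import java.util.concurrent.Executors
    import java.util.concurrent.TimeUnit

    // Hypothetical repository abstraction: maps filter file names to source text.
    interface FilterRepository {
        Map<String, String> fetchAll()
    }

    class FilterFileManager {
        FilterRepository repository   // e.g., backed by a database or mounted file system
        File localDirectory           // local mirror that the loader watches

        // Poll the repository on a fixed schedule without blocking the proxy.
        void startPolling(long intervalSeconds) {
            Executors.newSingleThreadScheduledExecutor().scheduleWithFixedDelay(
                    { checkForChanges() }, 0, intervalSeconds, TimeUnit.SECONDS)
        }

        void checkForChanges() {
            repository.fetchAll().each { String name, String source ->
                File local = new File(localDirectory, name)
                // Rewrite the local copy only when the file is new or its
                // contents changed, which signals the loader to recompile it.
                if (!local.exists() || local.text != source) {
                    local.text = source
                }
            }
        }
    }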
In an embodiment, file manager 310, loader 312, or another process may poll the directories storing the filter source code files on the proxy server for changes and push any new and/or updated filter source code files to the loader 312. In an embodiment, filter source code files pushed to the loader 312 are dynamically compiled and loaded as program objects that may be accessed by proxy application processor 314 at runtime without ending or restarting execution of the proxy server application 308. In this context, dynamic compilation of the filter source code files includes translating, at runtime, the filter source code files into a binary form that is stored in computer memory. In an embodiment, dynamic loading may comprise a virtual machine creating program objects, representing network traffic filters, in computer memory from the compiled binary form of the filter source code files. For example, proxy server application 308 may be running in a Java virtual machine (JVM) or other similar virtual machine framework that enables dynamic compilation and class loading. For example, filter source code files 302 may be coded in a scripting language or other programming language capable of being dynamically compiled into classes and loaded into a JVM. In an embodiment, the resulting program objects created by the loader 312 correspond to one or more network traffic filters to be processed by proxy application processor 314. The dynamic loading of user-generated filter source code files by loader 312 into active network traffic filters enables new and/or updated network traffic filters to be injected into proxy server application 308 without requiring a static binary version of proxy server application 308 to be re-built and re-deployed to a proxy server hosting the proxy server application. In an embodiment, the program objects loaded by loader 312 may be cached in memory. The cached program objects may be used, for example, in the event it is desired to roll back to a previous version of a network traffic filter based on the program objects. For example, a user may determine that the most recent update to a particular network traffic filter is operating improperly and in response, the user may cause proxy server application 308 to revert to an older cached version of the program object corresponding to the particular network traffic filter until the issue is resolved. In an embodiment, a proxy application processor 314 processes the network traffic filters loaded by loader 312 in response to a proxy server hosting proxy server application 308 receiving network messages or the occurrence of other network message processing events. Processing network traffic filters, in this context, may comprise executing the network traffic filters and providing, as input, information about one or more network messages or events. For example, in response to proxy server application 308 receiving a client request network message, proxy application processor 314 may process one or more particular loaded network traffic filters associated with the processing of client requests. As another example, another set of loaded network traffic filters may be processed in response to receiving a response message from an origin server, or during other points of handling received network messages. In an embodiment, proxy application processor 314 processes loaded network traffic filters according to one or more network traffic filter chains. 
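Before turning to filter chains, the dynamic compilation and loading step just described can be sketched as follows. The fragment uses GroovyClassLoader, which compiles a Groovy source file into a JVM class at runtime; the loadFilter method and the version cache are illustrative assumptions rather than components named by this disclosure.

    // parseClass() compiles the source to bytecode and defines the class in the
    // running JVM, so no rebuild or redeployment of the application is needed.
    GroovyClassLoader gcl = new GroovyClassLoader()

    Object loadFilter(GroovyClassLoader loader, File sourceFile) {
        Class filterClass = loader.parseClass(sourceFile)
        return filterClass.newInstance()
    }

    // Previously loaded filter objects may be cached so that a misbehaving
    // update can be rolled back to an older version.
    Map<String, List<Object>> versionCache = [:].withDefault { [] }
    Object filter = loadFilter(gcl, new File('filters/ExampleFilter.groovy'))
    versionCache['ExampleFilter'] << filter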
A network traffic filter chain may comprise one or more individual network traffic filters, organized in a serial sequence, and corresponding to a particular processing phase or event during the handling of network messages by proxy server application 308. In an embodiment, proxy application processor 314 includes logic that determines, in response to the occurrence of particular network message processing phases or events, which one or more particular filter chains to process and an order in which to process the network traffic filters within the particular filter chains. Referring to FIG. 3A, network traffic filter chains 316A-316C illustrate three example network traffic filter chains loaded by proxy server application 308, with each filter chain comprising a number of network traffic filters. For example, filter chain 316A may correspond to a “pre-processing” phase for received network requests, and may comprise network traffic filters that perform processing steps on network requests received by proxy server application 308 before the requests are dispatched to an origin server. Network traffic filter chain 316B may correspond, for example, to a dispatch phase of handling a network message and may include one or more network traffic filters that implement logic for dispatching received network request messages to one or more origin servers. Network traffic filter chain 316C may correspond to a “post-processing” phase of handling a network message and include one or more network traffic filters that process response network messages received by proxy server application 308 before sending the response to the requesting client. The network traffic filter chains illustrated in FIG. 3A are provided only as examples and fewer or more network traffic filter chains may be defined in a proxy server application than those illustrated in FIG. 3A. For example, other possible network traffic filter chains include an “error” network traffic filter chain that includes network traffic filters configured to respond to errors that occur in one or more of the other network traffic filters. Another example network traffic filter chain is a “static” filter chain that performs processing on a network request and returns a response to a requesting client without proxying the request to an origin server. In an embodiment, network traffic filters in a first filter chain may trigger the execution of one or more second filter chains during processing of the first filter chain. In this manner, arbitrary hierarchies of filter chains may be defined. In an embodiment, network traffic filters currently loaded by proxy server application 308 may also be unloaded from the proxy server application. For example, if it is discovered that a particular loaded network traffic filter does not operate as intended or is no longer desired, a user may cause the particular network traffic filter to be unloaded from the proxy server application 308. After unloading a particular network traffic filter, a proxy server application 308 no longer includes the particular network traffic filter in its processing of network messages. In an embodiment, a user may cause a proxy server application 308 to unload a particular network traffic filter by causing the filter source code file representing the particular network traffic filter to be removed from file repository 306.
For example, a user may use file publisher 304 or another mechanism to remove a particular filter source code file or otherwise indicate to a proxy server application 308 that a particular network traffic filter is no longer intended for use by proxy server application 308. In an embodiment, during the polling of file repository 306 by file manager 310, the file manager may detect that the filter source code files representing one or more currently loaded network traffic filters are no longer available in the repository. In response to determining that the filter source code files are no longer available in the repository, file manager 310 may remove the corresponding filter source code files stored on the proxy server hosting proxy server application 308 and further cause the one or more particular network traffic filters to be unloaded and no longer processed by proxy server application 308. 3.1 Network Traffic Filters In an embodiment, network traffic filters may encode or define one or more proxy rules to be implemented by a proxy server and specified criteria for the execution of those proxy rules. The proxy rules defined by network traffic filters may implement a wide variety of processing actions relative to network messages received by a proxy including, for example, authenticating and/or validating network requests, modifying the content of a network message, modifying the behavior of applications in a network, and implementing various traffic management and load-balancing policies. In general, network traffic filters are able to modify any aspect of a network message as it traverses through a proxy server hosting a proxy server application. Network traffic filters may also access other environmental variables made accessible to the network traffic filters by a proxy server application. For example, a proxy server application may track information related to network traffic volume levels, status information about the proxy server, or any other information pertaining to current network conditions, and a network traffic filter may use the information to make various processing decisions. In an embodiment, filters may make use of shared contextual data to coordinate decisions that affect application behavior. For example, a first filter could add contextual data to a shared application context and a second filter could examine the data to determine processing actions to perform. As described above, a network traffic filter may be initially specified by a user in a filter source code file comprising logic to be processed by a proxy server application that has loaded the filter source code file. In an embodiment, the logic included in a filter source code file representing a network traffic filter may comprise the specification of a filter type, an execution order value, processing criteria, and one or more processing actions. In general, a proxy server application processes each loaded network traffic filter by determining whether the filter's specified processing criteria are satisfied based on a received network message or other available information as input, and in response to determining that the specified processing criteria are satisfied, causing the one or more specified processing actions to be performed. In an embodiment, if a particular network traffic filter's processing criteria are not satisfied, processing of the particular network traffic filter ends and the proxy server application continues processing any network traffic filters remaining to be processed.
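The evaluation sequence just described, together with the shared contextual data mentioned above, can be sketched in Groovy as follows. The per-request context map and the closure-based shouldFilter/run members are assumptions made for illustration: criteria are checked first, and actions run only when the criteria are satisfied.

    // One filter records contextual data; a later filter examines that data.
    def context = [requestHeaders: ['X-Device-Type': 'settop']]

    def identifyFilter = [
        shouldFilter: { Map ctx -> ctx.requestHeaders != null },
        run:          { Map ctx -> ctx.deviceType = ctx.requestHeaders['X-Device-Type'] }
    ]

    def auditFilter = [
        shouldFilter: { Map ctx -> ctx.deviceType == 'settop' },
        run:          { Map ctx -> ctx.audited = true }   // e.g., store audit info
    ]

    // A filter whose criteria are not satisfied is simply skipped, and
    // processing continues with the remaining filters.
    [identifyFilter, auditFilter].each { filter ->
        if (filter.shouldFilter.call(context)) {
            filter.run.call(context)
        }
    }
    assert context.audited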
In an embodiment, filter types specified in network traffic filter source code files define logical groupings of the loaded network traffic filters with each grouping corresponding to a particular point or event during the handling of a network message. In an embodiment, the groupings of network traffic filters form one or more network traffic filter chains, with the network traffic filters included in a particular filter chain processed as a group in response to the occurrence of the associated network message handling point or event. In an embodiment, a user may add a new or updated network traffic filter to an existing network filter chain by specifying a filter type value representing the existing network filter chain in the corresponding filter source code file. As described above, in an embodiment, one example filter type may be associated with a pre-processing phase of handling a network request. In an embodiment, the processing actions associated with pre-processing network traffic filters may perform one or more processing steps in response to receiving a request message and before the request message is sent to an origin server. For example, processing actions specified in a pre-processing network traffic filter may include logic that authenticates, validates, or throttles received network messages, logs or stores other auditing information about a received network message, or that modifies the contents or other aspects of a network message. In an embodiment, another example filter type may be associated with the phase of dispatching received network requests to one or more origin servers. The processing actions specified in a dispatch network traffic filter may include, for example, logic determining a particular origin server to which a received network request is sent. For example, dispatch network traffic filters may implement load balancing policies by specifying logic that distributes received requests across multiple redundant origin servers. FIG. 3B illustrates examples of a proxy server application routing network requests in a network, according to an embodiment. The network illustrated in FIG. 3B includes two groups of clients, market A clients 320 and market B clients 322, a proxy server 330, and origin servers 332, 334, 336, 338. Market A clients 320 and market B clients 322 may, for example, refer to groupings of client devices that may be grouped based on access to particular services or resources provided by the origin servers, particular geographic regions, client device types, or any other characteristics. In FIG. 3B, client devices associated with either market A clients 320 or market B clients 322 may send requests for resources or services provided by one or more of origin servers 332, 334, 336, and 338, with the requests traversing through a proxy server application running on proxy server 330. In FIG. 3B, the various dashed and solid lines connecting client devices in market A clients 320 and market B clients 322 with proxy server 330 and origin servers 332, 334, 336, 338 represent respective paths that requests from the client devices may follow, according to an embodiment. In an embodiment, the respective request paths may be determined in part based on one or more dispatch network traffic filters processed by proxy server 330. As described above, the dispatch network traffic filters include logic to direct received requests to one or more origin servers depending on various characteristics associated with the request or other information.
For example, client device 324 from market A clients 320 may send a request for a particular resource or service that traverses through proxy server 330. In response to receiving the request, the proxy server application running on proxy server 330 may process one or more network traffic filters, including one or more dispatch network traffic filters. Based on processing the one or more dispatch network traffic filters, the proxy server application may determine that the request received from client device 324 is to be routed to origin server 332, which provides the requested resource or service to market A clients. For example, a dispatch network traffic filter may determine that a request from client device 324 is to be routed to origin server 332 based on one or more characteristics of the request that identify client device 324 as a market A client such as, for example, an Internet Protocol (IP) address, a service or user identifier value, or any other characteristic. Similarly, client requests from client devices associated with market B clients 322 may be routed to origin server 338 which provides a requested resource or service to market B clients, as illustrated by the solid lines connecting the client devices in market B clients 322 to proxy server 330 and origin server 338. As another example, proxy server 330 may receive a request from client device 326 and send the received request to origin server 334. In the example of FIG. 3B, origin server 334 may represent an origin server configured as an alternative origin server to origin server 332. For example, origin server 334 may be configured for the purposes of testing a newer version of provided resources, services, or other components associated with origin server 332. In an embodiment, a dispatch network traffic filter processed by proxy server 330 may direct one or more particular requests received by proxy server 330 to origin server 334 instead of origin server 332 based on, for example, random sampling, particular times of day, or any other conditions or characteristics associated with a request. In this manner, a dispatch network traffic filter may be configured at proxy server 330 to direct a portion of received requests to origin server 334 in order to provide a controlled test of the updated services on origin server 334. In an embodiment, client device 328 represents a client device that may be associated with a particular user or group of users, as illustrated by the depicted customerID value “123.” In an embodiment, proxy server 330 may receive a request from client device 328 and a processed network traffic filter may identify the request as associated with the particular user or group of users. Based on identifying that the request is associated with a particular user or group of users, the dispatch network traffic filter may direct the request to origin server 336 instead of origin server 332 or origin server 334. Origin server 336 may, for example, be configured to isolate requests from particular users for debugging, security, or other analysis purposes. The routing examples described above and illustrated in FIG. 3B are provided only as examples, and dispatch network traffic filters may be configured to implement any arbitrary routing decisions and policies.
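The routing decisions illustrated in FIG. 3B might be expressed in a dispatch filter along the lines of the Groovy sketch below. The origin server names, the customerID value, and the one-percent sampling rate mirror the examples above, while the function shape itself is an illustrative assumption.

    // Hedged sketch of dispatch routing patterned on the FIG. 3B examples.
    String chooseOrigin(Map request) {
        // Requests from a particular user may be isolated for debugging or analysis.
        if (request.customerID == '123') {
            return 'origin-server-336'
        }
        if (request.market == 'A') {
            // Send a small random sample of market A traffic to the alternative
            // test server 334; all other market A traffic goes to server 332.
            return Math.random() < 0.01 ? 'origin-server-334' : 'origin-server-332'
        }
        // Market B requests are routed to the server provisioned for market B.
        return 'origin-server-338'
    }

    assert chooseOrigin([market: 'B']) == 'origin-server-338'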
In an embodiment, another example filter type may be associated with a post-processing phase of handling a network request corresponding to the receipt of a response message from an origin server and before the response is sent back to the requesting client. Example processing actions that may be specified in post-processing network traffic filters include injecting information into the response message headers, modifying the contents of the response, delaying delivery of the response, injecting additional markup in the response message content, measuring processing time by the origin servers, and re-sending network messages to origin servers in response to errors. In an embodiment, network traffic filters comprise processing criteria that determine whether the processing actions specified in the filter are to be performed in response to the particular network message or event. For example, the processing criteria for a network traffic filter may comprise a function that determines, based on one or more characteristics of a received network message, contextual data generated by other network traffic filters, or other event information, whether the specified criteria are satisfied. In an embodiment, processing criteria may be evaluated based on information contained in a network message including, for example, a type of device that generated the network message, a network address associated with the network message, a particular resource requested by the network message, or any other information associated with the network message. In an embodiment, a processing criteria function may return a Boolean value of true or false depending on whether or not the processing criteria are satisfied. In an embodiment, network traffic filters comprise one or more processing actions that perform one or more processing steps relative to a received network message. In an embodiment, a proxy server application is configured to execute the processing actions for a particular network traffic filter in response to the processing criteria for the particular network traffic filter being satisfied. The processing actions specified by a network traffic filter may perform virtually any operation on a network message or other accessible data elements, including validating the network message, authenticating the network message, modifying the network message, caching the network message, storing information associated with the network message, sending the network message to one or more second network elements, delaying transmission of the network message, or other functions. In general, the processing actions defined in a particular network traffic filter relate to a particular phase or event in the handling of network messages associated with the filter type specified in the particular network traffic filter. In an embodiment, network traffic filters may comprise an execution order value that determines an order to evaluate each particular network traffic filter relative to other network traffic filters in the same filter chain. For example, it may be desired that certain network traffic filters in a particular filter chain are processed earlier in the filter chain than others. For example, filters related to authentication in a pre-processing filter chain could be executed first. In an embodiment, the execution order values may be specified as numerical values that define an execution order based on the relative ordering of the numerical values.
For example, network traffic filters specifying smaller numbers relative to other network traffic filters may be processed earlier in a filter chain than those filters specifying larger numbers. Numerical ordering is provided only as an example, however; in other embodiments, other values that define an ordering may be used. FIG. 4 illustrates an example of a network traffic filter source code file 400 comprising various code segments that provide examples of the components of a network traffic filter as described herein. Network traffic filter source code file 400 comprises examples of a filter type specification 402, execution order value specification 404, processing condition function 406, and processing actions function 408. Filter type specification 402 illustrates an example function that returns a value indicating the filter type to be associated with the network traffic filter based on filter source code file 400. In the example, the network traffic filter source code file 400 specifies a filter type of “pre”, indicating that the network traffic filter represented by the filter source code file is to be part of a “pre-processing” filter chain. Execution order value specification 404 illustrates an example function that returns an execution order value. In the example, the function is configured to return a value of “5.” As a result, the network traffic filter based on filter source code file 400 may be processed after network traffic filters in the “pre-processing” filter chain that specify an execution order value that is less than 5, but processed before network traffic filters specifying a value that is greater than 5. Processing condition function 406 illustrates example specified processing criteria that determine whether the filter processing actions are to be performed. In the example of filter source code file 400, processing condition function 406 evaluates whether a “deviceID” parameter associated with a received request matches a particular known device identification string “vendortv.” During processing of a network traffic filter based on filter source code file 400, if a received request message includes a “deviceID” parameter indicating a value of “vendortv”, processing condition function 406 returns a Boolean value of true, and otherwise returns a value of false. Processing actions function 408 illustrates example processing actions to be performed in response to determining that a particular network message satisfies the processing criteria in processing condition function 406. In the example, processing actions function 408 causes a proxy server application processing the network traffic filter represented by filter source code file 400 to suspend execution for a random time period. For example, the processing actions specified in the example processing actions function 408 may be useful in the event that a proxy server is receiving a large number of simultaneous requests, possibly due to a synchronized polling interval, from a particular type of device and throttling of the requests is desired to distribute the polling intervals over a wider period of time. In the example, in response to receiving requests that are determined to be from the particular type of device, the requests may be delayed for a random period of time in order not to bombard the origin servers with numerous requests at once.
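A filter source code file of the kind FIG. 4 describes might look roughly like the Groovy class below. The member names (filterType, filterOrder, shouldFilter, run) and the request parameter accessor are assumptions chosen to mirror components 402, 404, 406, and 408; they are not quoted from the figure.

    class VendorTvThrottleFilter {
        // 402: join the “pre-processing” filter chain.
        String filterType() { return 'pre' }

        // 404: processed after filters with order values less than 5 and
        // before filters with order values greater than 5.
        int filterOrder() { return 5 }

        // 406: processing criteria; true only for the known device type.
        boolean shouldFilter(Map ctx) {
            return ctx.params?.get('deviceID') == 'vendortv'
        }

        // 408: processing action; suspend handling for a random period (up to
        // 20 seconds here) to spread out synchronized polling by these devices.
        void run(Map ctx) {
            Thread.sleep((long) (Math.random() * 20000))
        }
    }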
3.2 Overview of Example Operation FIG. 5 illustrates a flow of a network message processed by a proxy server application of the present disclosure. In FIG. 5, a proxy server (not illustrated) comprising proxy server application 504 is configured to intercept a request 508 sent by a client 502 to an origin server 506. For example, request 508 may be an HTTP request for a resource hosted by origin server 506. As network messages are intercepted by the proxy server, the network messages are processed by proxy server application 504. In FIG. 5, proxy server application 504 receives a request 508 from client 502. In an embodiment, in response to receiving request 508, proxy server application 504 begins processing request 508 using a pre-processing filter chain 510. In an embodiment, proxy server application 504 processes the network traffic filters associated with pre-processing filter chain 510 in an order determined by execution order values specified by the pre-processing network traffic filters. In the example of FIG. 5, the four network traffic filters included in pre-processing filter chain 510 specify execution order values of 1, 3, 4, and 4, respectively, and are processed in that order. In an embodiment, the network traffic filters specifying the same execution order value may be executed in an arbitrary order. In an embodiment, proxy server application 504 processes pre-processing filter chain 510 by determining, for each particular network traffic filter in the filter chain, whether request 508 satisfies the processing criteria associated with the particular network traffic filter. For example, the first network traffic filter may include processing criteria that evaluate access credentials expected in request 508. As another example, the processing criteria for the second network traffic filter may specify particular information expected in one or more headers of request 508. In response to proxy server application 504 determining that the processing criteria associated with a particular network traffic filter are satisfied, proxy server application 504 causes the processing actions associated with the particular network traffic filter to be performed. In response to request 508, for example, any number of the processing actions associated with the network traffic filters in pre-processing filter chain 510 may be performed depending on satisfaction of the processing criteria in each of the filters. In FIG. 5, after each of the network traffic filters in pre-processing filter chain 510 is processed, proxy server application 504 proceeds to process the network traffic filters in a dispatch filter chain 512. In general, the network traffic filters in dispatch filter chain 512 are responsible for dispatching the request 508 to an appropriate origin server. The processing criteria for network traffic filters in dispatch filter chain 512 may be based on one or more characteristics associated with request 508 including an originating network address, user identification included in the request, originating device type identification, characteristics of the data contained in the request, contextual data generated by other network traffic filters, or any other characteristic associated with request 508. As a result of processing request 508 by dispatch filter chain 512, request 508 is sent as request 514 to an origin server 506. Dispatched request 514 may differ from the original request 508 according to any modifications made to request 508 by the network traffic filters in pre-processing filter chain 510 and dispatch filter chain 512. Origin server 506 processes request 514 and sends back response 516 to the proxy server.
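The ordered, serial evaluation of pre-processing filter chain 510 can be sketched as follows, reusing the illustrative filterOrder/shouldFilter/run names from the earlier examples; filters that share an order value, such as the two filters with value 4, may run in either relative order.

    // Process a chain in ascending execution order; sort(false) leaves the
    // original list unmodified.
    void processChain(List filters, Map ctx) {
        filters.sort(false) { it.filterOrder() }.each { filter ->
            if (filter.shouldFilter(ctx)) {
                filter.run(ctx)
            }
        }
    }

    // Example: four filters with order values 1, 3, 4, and 4, as in FIG. 5.
    def filters = [4, 1, 4, 3].collect { int order ->
        new Expando(filterOrder: { -> order },
                    shouldFilter: { Map c -> true },
                    run: { Map c -> (c.ran = (c.ran ?: [])) << order })
    }
    def ctx = [:]
    processChain(filters, ctx)
    assert ctx.ran == [1, 3, 4, 4]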
In FIG. 5, in response to receiving response 516, proxy server application 504 processes network traffic filters in a post-processing filter chain 518. During processing of post-processing filter chain 518, proxy server application 504 similarly evaluates the processing criteria for each of the network traffic filters in post-processing filter chain 518 and may perform one or more post-processing steps relative to response 516. In response to the processing criteria for one or more of the network traffic filters being satisfied, the network traffic filters of post-processing filter chain 518 may modify various aspects of response 516 before sending response 516 back to the requesting client 502. For example, one of the network traffic filters in post-processing filter chain 518 may inject headers into response 516 in order to enable cross-origin resource sharing (CORS). FIG. 6 illustrates a method of processing network messages received by a proxy server comprising a proxy server application as described herein. At block 600, a proxy server loads one or more network traffic filters from a filter repository. At block 602, the proxy server receives a network message. For example, the network message may be a request from a client directed toward an origin server, or a response message from an origin server destined for a client device. At block 604, the proxy server determines one or more network traffic filter chains to be processed in response to receiving the network message. For example, the proxy server may determine that “pre-processing” and “dispatch” filter chains are to be processed in response to receiving a request message from a client. In another example, in response to receiving a response message from an origin server, the proxy server may determine that a “post-processing” network traffic filter chain is to be processed. At block 606, the proxy server determines an order to evaluate the network traffic filters relative to the other network traffic filters in each network traffic filter chain. In an embodiment, the processing order is determined based on the proxy server receiving one or more execution order values. The proxy server processes the network traffic filters in an order that is determined based on the received execution order values. At block 608, the proxy server determines, for the next network traffic filter of a filter chain selected for processing, whether the network message satisfies particular processing criteria associated with the next network traffic filter. For example, the proxy server may determine that a network message satisfies the particular processing criteria based on determining a type of device that generated the network message, determining a network address associated with the network message, or determining a resource requested by the network message. The information contained in the network message for evaluation may be found, for example, in a network message header or in the body of the network message, or based on any other characteristics of the network message. If the network message satisfies the particular processing criteria, at block 610, the proxy server causes one or more particular actions associated with the network traffic filter to be performed. For example, the particular actions may include one or more of: modifying the network message, caching the network message, storing information associated with the network message, sending the network message to one or more second network elements, causing the sending of the network message to be delayed, or triggering the execution of another filter chain.
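One concrete example of such an action, echoing the CORS illustration above, is a post-processing filter that injects a response header before the response is returned to the client. The response and header accessors below reuse the illustrative member names of the earlier sketches and are assumptions, not an API defined herein.

    class CorsResponseFilter {
        String filterType() { return 'post' }
        int filterOrder() { return 10 }
        boolean shouldFilter(Map ctx) { return ctx.response != null }
        void run(Map ctx) {
            // Inject a header enabling cross-origin resource sharing (CORS).
            ctx.response.headers['Access-Control-Allow-Origin'] = '*'
        }
    }

    def ctx = [response: [headers: [:]]]
    new CorsResponseFilter().with { if (shouldFilter(ctx)) run(ctx) }
    assert ctx.response.headers['Access-Control-Allow-Origin'] == '*'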
At block 612, after a particular network traffic filter is processed in either block 608 or block 610, the proxy server determines whether there are more network traffic filters to be processed. In response to determining that there are more network traffic filters to be processed, the processing criteria of the next traffic filter are evaluated at block 608. Otherwise, the proxy server awaits the receipt of further network messages in block 602. 4.0 Implementation Mechanisms—Hardware Overview FIG. 7 is a block diagram that illustrates a computer system 700 upon which an embodiment of the invention may be implemented. Computer system 700 includes a bus 702 or other communication mechanism for communicating information, and a processor 704 coupled with bus 702 for processing information. Computer system 700 also includes a main memory 706, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 702 for storing information and instructions to be executed by processor 704. Main memory 706 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 704. Computer system 700 further includes a read only memory (ROM) 708 or other static storage device coupled to bus 702 for storing static information and instructions for processor 704. A storage device 710, such as a magnetic disk or optical disk, is provided and coupled to bus 702 for storing information and instructions. Computer system 700 may be coupled via bus 702 to a display 712, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704. Another type of user input device is cursor control 716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 704 and for controlling cursor movement on display 712. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. The invention is related to the use of computer system 700 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 700 in response to processor 704 executing one or more sequences of one or more instructions contained in main memory 706. Such instructions may be read into main memory 706 from another machine-readable medium, such as storage device 710. Execution of the sequences of instructions contained in main memory 706 causes processor 704 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term “machine-readable medium” as used herein refers to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using computer system 700, various machine-readable media are involved, for example, in providing instructions to processor 704 for execution. Such a medium may take many forms, including but not limited to storage media and transmission media. Storage media includes both non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 710. Volatile media includes dynamic memory, such as main memory 706. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 702. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. All such media must be tangible to enable the instructions carried by the media to be detected by a physical mechanism that reads the instructions into a machine. Common forms of machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 704 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 700 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 702. Bus 702 carries the data to main memory 706, from which processor 704 retrieves and executes the instructions. The instructions received by main memory 706 may optionally be stored on storage device 710 either before or after execution by processor 704. Computer system 700 also includes a communication interface 718 coupled to bus 702. Communication interface 718 provides a two-way data communication coupling to a network link 720 that is connected to a local network 722. For example, communication interface 718 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 718 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 718 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link 720 typically provides data communication through one or more networks to other data devices. For example, network link 720 may provide a connection through local network 722 to a host computer 724 or to data equipment operated by an Internet Service Provider (ISP) 726.
ISP 726 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 728. Local network 722 and Internet 728 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 720 and through communication interface 718, which carry the digital data to and from computer system 700, are exemplary forms of carrier waves transporting the information. 5.0 Other Aspects of Disclosure In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Aspects of the subject matter described herein are set out in the following numbered clauses: 1. A method comprising: in a proxy server that is configured to receive requests directed toward one or more origin servers and to distribute the requests to one or more of the origin servers for processing, loading, from a data repository, one or more first network traffic filters, wherein each of the one or more first network traffic filters comprises an executable unit of computer program code specifying processing criteria and one or more actions; while the proxy server is executing and without ending execution of the proxy server, performing one or more of: loading and initiating operation of one or more second network traffic filters; removing one or more of the first network traffic filters; reordering one or more of the first network traffic filters; receiving, at the proxy server, a network message; for a particular network traffic filter of the one or more first network traffic filters, wherein the particular network traffic filter comprises particular processing criteria and one or more particular actions: determining whether the network message satisfies the particular processing criteria; in response to determining that the network message satisfies the particular processing criteria, causing the one or more particular actions to be performed; wherein the method is performed on one or more computing devices. 2. 
The method of clause 1, wherein the one or more first network traffic filters comprise: a first network traffic filter chain comprising one or more pre-processing network traffic filters that are configured to process the requests before the requests are distributed to the one or more of the origin servers; a second network traffic filter chain comprising one or more dispatch network traffic filters that are configured to distribute the requests to the one or more of the origin servers; and a third network traffic filter chain comprising one or more post-processing network traffic filters that are configured to process responses returned by the one or more origin servers before the response is sent to the requesting client. 3. The method of any of clauses 1-2, wherein the one or more first network traffic filters further comprise: a fourth network traffic filter chain comprising one or more static network traffic filters that are configured to process requests and return response messages without distributing the requests to the one or more of the origin servers; and a fifth network traffic filter chain comprising one or more error network traffic filters that are configured to process errors generated by one or more other network traffic filters. 4. The method of any of clauses 1-3, wherein the one or more particular actions include one or more of: validating the network message, authenticating the network message, modifying the network message, caching the network message, storing information associated with the network message, sending the network message to one or more second network elements, causing the sending or delivery of the network message to be delayed, modifying application behavior, replying to the network message. 5. The method of any of clauses 1-4, wherein the determining whether the network message satisfies the particular processing criteria includes examining one or more of: a header associated with the network message, a message body associated with the network message, contextual data generated by one or more of the first network traffic filters and second network traffic filters. 6. The method of any of clauses 1-5, wherein the determining whether the network message satisfies the particular specified criteria includes determining one or more of: a type of device that generated the network message, a network address associated with the network message, a resource requested by the network message, a geographic location associated with a client generating the network message, a user associated with the network message. 7. The method of any of clauses 1-6, wherein determining whether the network message satisfies the particular specified criteria is based at least in part on one or more of: random sampling, algorithmic sampling. 8. The method of any of clauses 1-7, wherein the proxy server loading one or more first network traffic filters further comprises: the proxy server receiving one or more execution order values, wherein each execution order value determines an order to evaluate a particular network traffic filter relative to the other first network traffic filters; the proxy server ordering the first network traffic filters according to the received execution order values. 9. The method of any of clauses 1-8, wherein the proxy server loading one or more first network traffic filters comprises loading one or more network traffic filter source code files. 10. 
The method of clause 9, wherein each of the one or more network traffic filter source code files further comprises logic specifying one or more of: a filter type, an execution order value, processing criteria, and one or more processing actions. 11. A non-transitory computer-readable data storage medium storing one or more sequences of instructions which when executed cause one or more processors to perform any of the methods recited in clauses 1-10. 12. A computer program product including instructions which, when implemented on one or more processors, carry out any of the methods recited in clauses 1-10. 13. A computing device having a processor configured to perform any of the methods recited in clauses 1-10. 13734864 Netflix, Inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open 709/203 Mar 25th, 2022 05:26PM Mar 25th, 2022 05:26PM Netflix Consumer Services General Retailers
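Clause 8 of the preceding grant describes ordering the loaded filters according to received execution order values. That step amounts to a sort, as in this minimal Python sketch (the dictionary layout and field names are assumptions made for illustration):

    # Order the first network traffic filters by their execution order values.
    loaded_filters = [
        {"name": "auth", "order": 10},
        {"name": "route", "order": 30},
        {"name": "logging", "order": 20},
    ]
    ordered_filters = sorted(loaded_filters, key=lambda f: f["order"])
    print([f["name"] for f in ordered_filters])  # ['auth', 'logging', 'route']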
nasdaq:nflx Netflix Jul 9th, 2019 12:00AM Feb 12th, 2016 12:00AM https://www.uspto.gov?id=US10348589-20190709 Managing networks and machines that deliver digital content In one embodiment of the present invention, a content delivery network (CDN) monitoring system manages a CDN. The CDN monitoring system includes finite state machines (FSMs), and the current states of the FSMs reflect characteristics and/or behaviors associated with the CDN. In operation, the CDN monitoring system computes actions based on current states and/or metrics associated with the CDN. The actions may cause one or more of the FSMs to execute state transitions. As part of a state transition, the current state of the FSM changes and an event is generated. The event triggers an event handler that may perform any type of management operations, such as generating performance reports and rerouting client requests. Notably, because each current state may be an aggregation of other current states, the CDN monitoring system may be configured to compute current states that accurately represent complex interactions between components within the CDN. 10348589 1. A computer-implemented method, comprising: computing a first action based on a first value of a first metric associated with a component of a content delivery network and a first state that is associated with a first finite state machine (FSM), wherein the first state corresponds to a previously received value of the first metric; executing a first state transition between a second state and a third state based on the first action, wherein the second state and the third state are associated with a second FSM associated with a plurality of components in the CDN; and in response to the first state transition, executing a first event handling operation to optimize one or more operations of the CDN. 2. The method of claim 1, wherein the first FSM is associated with an operational status of a first server, and the second FSM is associated with an operational status of a first server cluster that includes the first server. 3. The method of claim 2, wherein a third FSM is associated with an operational status of a second server that is included in the first server cluster, and further comprising: aggregating the third state and a fourth state that is associated with the third FSM to generate a second action; executing a second state transition between the third state and a fifth state that is associated with the second FSM based on the second action; and in response to the second state transition, executing a second event handling operation. 4. The method of claim 2, wherein the first state is associated with a degraded server and computing the first action comprises: determining that a number of degraded servers included in a first server cluster is greater than a predetermined threshold; and in response, generating a first action that indicates the existence of a degraded server cluster. 5. The method of claim 1, wherein the first metric comprises a disk performance measurement, a processor performance measurement, a network performance measurement, or a configuration parameter. 6. The method of claim 1, further comprising receiving the first metric from a client machine, a network, a cache, or a server machine. 7. The method of claim 1, wherein executing the first event handling operation comprises applying a set of rules that configure a control server associated with the CDN to adjust a distribution of client requests among different server clusters included in the CDN. 8.
The method of claim 1, further comprising performing one or more polling operations to obtain the first metric from a client machine, a network, a cache, or a server machine. 9. A non-transitory computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform the steps of: computing a first action based on a first value of a first metric associated with a component of a content delivery network, a first current state that is associated with a first finite state machine (FSM), and a second current state that is associated with a second FSM, wherein the first FSM is associated with a first operational status of the component of the CDN and the second FSM is associated with a second operational status of the CDN, and wherein the first current state corresponds to a previously received value of the first metric; computing a next state based on the first action and a third current state that is associated with a third FSM associated with a plurality of components in the CDN; determining that the next state does not equal the third current state; and setting the third current state equal to the next state and executing a first event handling operation to optimize one or more operations of the CDN. 10. The computer-readable storage medium of claim 9, wherein the first operational status corresponds to a first metric associated with an internet service provider (ISP) network. 11. The computer-readable storage medium of claim 9, wherein the first operational status corresponds to a first metric associated with a first server, the second operational status corresponds to a second metric associated with a second server, and the third FSM is associated with a third operational status of a server cluster that includes the first server and the second server. 12. The computer-readable storage medium of claim 9, wherein the first current state is associated with a degraded server and computing the first action comprises: determining that a number of degraded servers included in a first server cluster is greater than a predetermined threshold; and in response, generating a first action that indicates the existence of a degraded server cluster. 13. The computer-readable storage medium of claim 9, wherein the first operational status corresponds to a first metric that comprises a disk performance measurement, a processor performance measurement, a network performance measurement, or a configuration parameter. 14. The computer-readable storage medium of claim 13, further comprising receiving the first metric from a client machine, a network, a cache, or a server machine. 15. The computer-readable storage medium of claim 9, wherein executing the first event handling operation comprises transmitting data indicating the third current state to a monitoring machine for display. 16. The computer-readable storage medium of claim 9, wherein executing the first event handling operation comprises adjusting a distribution of client requests among different server clusters included in the CDN. 17.
A system comprising: a memory storing a monitoring engine; and a processor that is coupled to the memory and, when executing the monitoring engine, is configured to: monitor a first component associated with a content delivery network (CDN) via a first metric; compute a first action based on a current value of the first metric and a first state that is associated with a first finite state machine (FSM), wherein the first state corresponds to a previous value of the first metric; execute a first state transition between a first state and a second state based on the first action, wherein the first state and the second state are associated with the first FSM; compute a second action based on the second state and a third state that is associated with a second FSM associated with a plurality of components in the CDN; execute a second state transition between a fourth state and a fifth state based on the second action, wherein the fourth state and the fifth state are associated with a third FSM; and in response to the second state transition, execute a first event handling operation to optimize one or more operations of the CDN. 18. The system of claim 17, wherein the first FSM is associated with an operational status of a first server, the second FSM is associated with an operational status of a second server, and the third FSM is associated with an operational status of a first server cluster that includes the first server and the second server. 19. The system of claim 17, wherein the monitoring engine configures the processor to execute the first event handling operation by transmitting data indicating at least one of the second state and the fifth state to a monitoring machine for display. 20. The system of claim 17, wherein the monitoring engine further configures the processor to perform one or more polling operations to obtain the first metric from a client machine, a network, a cache, or a server machine. 20 CROSS-REFERENCE TO RELATED APPLICATIONS This application claims benefit of the U.S. Provisional Patent Application having Ser. No. 62/180,023 and filed on Jun. 15, 2015. The subject matter of this related application is hereby incorporated herein by reference. BACKGROUND OF THE INVENTION Field of the Invention Embodiments of the present invention relate generally to content delivery networks and distribution and, more specifically, to managing networks and machines that deliver digital content. Description of the Related Art Oftentimes, content delivery networks (CDNs) store multiple copies of digital content for vendors in clusters of servers that are located in different geographic regions. The servers within a server cluster are typically interconnected computers or virtual machines, where each computer or virtual machine manages a storage device and supplies services via a client-server architecture. Generally, in client-server architectures, clients request services and, in response, servers provide services. More specifically, when a client submits a request to access content stored within the CDN, a control server selects a server cluster and directs the request to the selected server cluster for processing. The control server may implement any number and types of routing algorithms and techniques to select the server cluster. For example, to optimize the overall performance of the CDN, some control servers implement server-load balancing techniques, internet service provider (ISP) caching, and so forth.
In general, the control server may be configured to optimize any number and type of criteria, such as geographic proximity to the client, cost, etc. To further optimize the performance of a CDN, ensure availability of the content stored in the CDN, and provide an acceptable overall quality of experience to the clients, many CDN providers track CDN metrics that measure the performance of various components included in the CDN. Such CDN metrics typically include server response time metrics for each server included in the CDN as well as network performance metrics (e.g., latency, packet loss, etc.) for each ISP network included in the CDN. Because a CDN usually includes large numbers of servers deployed in numerous geographical locations that operate on a variety of ISP networks, the number of metrics involved in monitoring CDN performance precludes effective manual analysis of such metrics. Consequently, many CDN providers rely on CDN management software to monitor and analyze the CDN metrics in an attempt to identify any malfunctions that may negatively impact the CDN. One drawback to relying on conventional CDN management software is that the arrangement and interconnection of the components included in CDNs may be complicated. Consequently, conventional CDN management software may not effectively interpret the CDN metrics with respect to the overall configuration of the CDN. For example, suppose that a CDN includes 2000 servers organized into 40 server clusters, where each server cluster includes 50 servers. Further, suppose that, in each of 39 “ok” server clusters, one server malfunctions, and, in the final “degraded” server cluster, 45 of the servers malfunction. In such a scenario, the CDN management software may analyze the server metrics and individually identify the 84 malfunctioning servers, but may fail to identify the degraded server cluster. Consequently, the CDN may continue to route requests to the degraded server cluster, and the time required for the CDN to respond to such requests may become unacceptably long and/or the requests may fail. Accordingly, the availability of the content stored in the CDN and/or the overall quality of experience provided to clients may be negatively impacted. As the foregoing illustrates, what is needed in the art are more effective techniques for managing the operation of machines and networks that deliver digital content. SUMMARY OF THE INVENTION One embodiment of the present invention sets forth a computer-implemented method for managing the operation of a content delivery network (CDN). The method includes computing a first action based on a first state that is associated with a first finite state machine (FSM) and corresponds to a first metric associated with a content delivery network (CDN); executing a first state transition between a second state and a third state based on the first action, wherein the second state and the third state are associated with a second FSM; and in response to the first state transition, executing a first event handling operation. One advantage of the disclosed techniques is that a CDN monitoring subsystem may accurately and efficiently identify malfunctions that involve complex interactions between components included in the CDN. In particular, by configuring the CDN monitoring subsystem to include states that reflect aggregations of other states, the CDN monitoring subsystem may mimic a hierarchical deployment of components.
Such a hierarchy of states enables the CDN monitoring subsystem to interpret the operation of the CDN in a holistic manner. Based on this accurate interpretation of the operation of the CDN, the operations of the components included in the CDN may be adjusted to ensure availability of content stored in the CDN and optimize the overall quality of experience provided to clients. BRIEF DESCRIPTION OF THE DRAWINGS So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments. FIG. 1 is a conceptual illustration of a system configured to implement one or more aspects of the present invention; FIG. 2 is a more detailed illustration of the content delivery network (CDN) monitoring system of FIG. 1, according to various embodiments of the present invention; FIG. 3 illustrates how the content delivery network (CDN) monitoring system of FIG. 2 monitors a server cluster, according to various embodiments of the present invention; FIG. 4 illustrates how the finite state machines (FSMs) of FIG. 2 monitor a server cluster, according to various embodiments of the present invention; and FIG. 5 is a flow diagram of method steps for managing a content delivery network, according to various embodiments of the present invention. DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without one or more of these specific details. System Overview FIG. 1 is a conceptual illustration of a system 100 configured to implement one or more aspects of the present invention. As shown, the system 100 includes a content delivery network (CDN) 110 connected to a variety of client machines. The client machines may include any machine capable of interacting with the CDN 110 in order to request and receive a content asset stored within the CDN 110. Examples of client machines include, without limitation, a desktop computer 102, a laptop 110, a smartphone 120, a smart television 122, a game console 124, a tablet 128, television-connected devices (not shown), handheld devices (not shown), and streaming entertainment devices (not shown). The client machines may interact with the CDN 110 using any type of communication paths and any protocols as known in the art. For example, in some embodiments, a broadband internet service provider (ISP) network (not shown) includes one or more machines and is responsible for managing internet traffic in a geographic region that either is proximate to the geographic location of one or more client machines or includes the geographic location of client machines. In such embodiments, the client machines may communicate with the CDN 110 via the broadband ISP network. The CDN 110 may include any number of computing machine instances 140 configured with any number (including zero) of central processing units (CPUs) 142, graphics processing units (GPUs) 144, memory 146, etc.
In operation, the CPU 142 is the master processor of the computing machine instance 140, controlling and coordinating operations of other components included in the computing machine instance 140. In particular, the CPU 142 issues commands that control the operation of the GPU 144. The GPU 144 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry. In various embodiments, GPU 144 may be integrated with one or more of other elements of the computing machine instance 140. The memory 146 stores content, such as software applications and videos, for use by the CPU 142 and the GPU 144 of the computing machine instance 140. In alternate embodiments, any number of the computing machine instances 140 may be replaced with any instruction execution system, apparatus, or device capable of executing software applications. In some embodiments, the memory 146 may be replaced with any storage device capable of storing software applications, such as a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. Any number of the computing machine instances 140 included in the CDN 110 are configured as control servers 160. The control servers 160, among other things, are responsible for managing the delivery of one or more content assets to client machines in response to one or more requests for such content assets from the client machines. Upon receiving a request for a content asset, the control server 160 selects a server cluster 150 that includes a server 152 that stores the content asset and then routes the request to the selected server cluster 150. As a general matter, the control servers 160 may implement any distributed data store techniques as known in the art for storing content assets across the servers 152 and, subsequently, accessing the stored content assets. Any number of the computing machine instances 140 included in the CDN 110 are configured as the servers 152. Each of the servers 152 manages any number (including zero) of storage devices, also referred to herein as a “disk,” that store content assets. As persons skilled in the art will recognize, the memory required to store all content assets may exceed the memory included in the storage devices managed by a single server 152. Further, the network hardware associated with a single server 152 may not be capable of handling the traffic with all potential clients. To address these storage and traffic limitations, the servers 152 may be organized into any number of the server clusters 150. Collectively, the resources (storage devices, network hardware, etc.) associated with the servers 152 included in a given server cluster 150 are capable of storing all the content assets and handling the traffic with all potential clients. Further, the server cluster 150 is configured to provide redundancy that increases the tolerance of the server cluster 150 to malfunctions associated with individual servers 152. More specifically, if some of the servers 152 included in the server cluster 150 malfunction, then the server cluster 150 may continue to operate acceptably.
To provide additional redundancy and tolerance to malfunctions, each content asset is typically stored in multiple storage devices that are managed by different servers 152 included in different server clusters 150. As a general matter, the storage processes implemented in the CDN 110 typically ensure availability of the content assets irrespective of the individual availability of each of the servers 152 and the server clusters 150. Upon receiving a client request for content assets, the control server 160 may implement any number and types of routing algorithms and techniques to select a particular server cluster 150 and then route the client request to the selected server cluster 150. For example, to optimize the overall quality of experience for clients, some control servers implement server-load balancing techniques, internet service provider (ISP) caching, and so forth. In general, the control server 160 may be configured to optimize any number and type of criteria, such as geographic proximity to the client, cost, etc. To further optimize the performance of a conventional CDN and assure that the CDN responds adequately to client requests, many CDN providers rely on CDN management software. The CDN management software monitors CDN metrics that reflect the performance of various components included in the CDN. Such CDN metrics typically include server response time metrics for each of the servers included in the CDN as well as network performance metrics (e.g., latency, packet loss, etc.) for each ISP network included in the CDN. Based on the CDN metrics, the CDN management software and/or the CDN provider attempt to identify and optionally mitigate any malfunctions that may negatively impact the performance of the CDN, jeopardize the availability of the content assets, and/or reduce the overall quality of experience provided to clients. However, conventional CDN management software analyzes the performance of individual components of the CDN in isolation. Consequently, conventional CDN management software may not effectively identify and/or mitigate malfunctions that involve multiple components of the CDN. For explanatory purposes, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed. Further, a range of “X” like objects are denoted with a parenthetical range (i.e., (0:X−1)). For example, suppose that the CDN 110 includes the forty server clusters 150(1:40) and each of the server clusters 150(1:40) includes fifty of the servers 152. Further, suppose that in each of the thirty-nine server clusters 150(1:39), one of the servers 152 malfunctions, and in the server cluster 150(40), forty-five of the servers 152 malfunction. In such a scenario, the CDN management software may analyze the server metrics, determine that eighty-four of the servers 152 have malfunctioned and, in response, configure a graphical user interface to display a warning. However, the CDN management software may fail to identify the “degraded” server cluster 150(40), and the control servers 160 may continue to route requests to the server cluster 150(40). Consequently, the time required for the CDN 110 to respond to requests that are routed to the server cluster 150(40) may become unacceptably long and/or the requests may fail.
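The arithmetic of the scenario above is worth making concrete: each of the thirty-nine clusters loses one server and the fortieth loses forty-five, so per-server monitoring flags 39 + 45 = 84 servers, yet only a cluster-level aggregate isolates the server cluster 150(40). A minimal Python sketch, assuming a simple majority threshold for marking a cluster degraded (the threshold and the data layout are illustrative assumptions):

    # True marks a functional server; each of forty clusters holds fifty servers.
    clusters = {c: [True] * 50 for c in range(1, 41)}
    for c in range(1, 40):
        clusters[c][0] = False   # one malfunctioning server per "ok" cluster
    for s in range(45):
        clusters[40][s] = False  # forty-five malfunctioning servers in cluster 40

    malfunctioning = sum(row.count(False) for row in clusters.values())
    degraded = [c for c, row in clusters.items()
                if row.count(False) > len(row) // 2]
    print(malfunctioning, degraded)  # 84 [40]

Per-server analysis alone reports the count of 84; only the aggregation step recovers the fact that requests should stop flowing to cluster 40.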
Content Delivery Network (CDN) Monitoring System To enable more effective analysis and/or modification of the operation of the CDN 110 than is provided by conventional CDN management software, one or more of the computing machine instances 140 implements a CDN monitoring subsystem 170. The CDN monitoring subsystem 170 is stored in the memory 146 and executes on the CPU 142. In alternate embodiments, the functionality included in the CDN monitoring subsystem 170 may be distributed across any number of computing machine instances 140. In some alternate embodiments, the CDN monitoring subsystem 170 may be implemented as a stand-alone software application that is not included in the CDN 110. In other alternate embodiments, functionality of the CDN monitoring subsystem 170 may be distributed between any number of software applications, and each software application may or may not be included in the CDN 110. In general, the CDN monitoring subsystem 170 or any number of software applications that implement the functionality of the CDN monitoring subsystem 170 that is described herein may be stored in any storage device and may execute on any instruction execution system, apparatus, or device capable of executing software applications. In operation, the CDN monitoring subsystem 170 implements a multi-stage analysis process that computes current states associated with the CDN 110 at any given point in time. As a general matter, each of the current states reflects some operational status of the CDN 110. The CDN monitoring subsystem 170 may be configured to implement any number of current states and compute each current state in any technically feasible fashion. In particular, the CDN monitoring subsystem 170 may be configured to compute each current state based on system data (e.g., CDN metrics, configuration data, etc.), other current states, or any combination thereof. Because the CDN monitoring subsystem 170 may combine current states to generate aggregated current states, the CDN monitoring subsystem 170 may compute and analyze operational statuses that reflect relationships and interactions between various components associated with the CDN 110. For example, in some embodiments, the CDN monitoring subsystem 170 may be configured to compute current states that reflect the overall health of the servers 152 based on server response time metrics. Further, for each of the server clusters 150, the CDN monitoring subsystem 170 may be configured to compute a current state that reflects the overall health of the server cluster 150 based on the current states of the servers 152 included in the server cluster 150. In this fashion, the CDN monitoring subsystem 170 may be configured to analyze and respond to current states that reflect complex interactions between components included in the CDN 110 at varying granularities. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. For example, the techniques described herein may be applied to monitor and/or manage any set of objects that are interconnected in any fashion via any number of communication paths and are configured to perform any type of content delivery services. FIG. 2 is a more detailed illustration of the content delivery network (CDN) monitoring system 170 of FIG.
1, according to various embodiments of the present invention. As shown, the CDN monitoring system 170 includes, without limitation, a data processing subsystem 230, a state processing subsystem 260, and an event processing subsystem 290. In alternate embodiments, the CDN monitoring system 170 may include any number of subsystems and the functionality described herein may be distributed in any manner between the subsystems. In general, to enable the CDN monitoring system 170 to interpret the operation of the components associated with the CDN 110 in a contextually-relevant manner, the data processing subsystem 230 and the state processing subsystem 260 implement a bi-directional connection. More specifically, the data processing subsystem 230 computes actions 240 based on system data 220 and current states 250, and the state processing subsystem 260 computes “new” current states 250 based on the actions 240 and the current states 250. Because the state processing subsystem 260 feeds back the current states 250 to the data processing subsystem 230, the data processing subsystem 230 may aggregate any number of the current states 250 to compute actions that affect any number of other current states 250. The CDN monitoring subsystem 170 may be configured to include any number of the current states 250, and each of the current states 250 may be configured to monitor the operation of the CDN 110 in any technically feasible fashion. Further, as part of an initialization process, the CDN monitoring subsystem 170 may set the current states 250 to initial states. For example, system engineers may select and define the current states 250 to mirror the deployment of the server clusters 150, the servers 152, and networks 212 associated with the CDN 110. The system engineers may then select the initial states based on a configuration 218 associated with the CDN 110. As shown, the data processing subsystem 230 receives system data 220 from data sources 210. The system data 220 and the data sources 210 may include any number and type of objects that may influence the performance of the CDN 110. For example, the system data 220 may include CDN metrics such as CPU metrics, disk metrics, ISP network performance metrics, and the like. The data sources 210 may include, without limitation, the servers 152, the networks 212, caches 214, clients 216, and a configuration 218 associated with the CDN 110. The data processing subsystem 230 includes, without limitation, an orchestrator 232, any number of pollers 234, and any number of action generators (AG) 236. The orchestrator 232 manages the operations of the other components included in the data processing subsystem 230. For example, the orchestrator 232 may configure one or more of the pollers 234 to request the system data 220 from the data sources 210. In alternate embodiments, the data sources 210 are configured to periodically transmit the system data 220 to the data processing subsystem 230. In general, the data processing subsystem 230 may receive the system data 220 in any technically feasible fashion via any communication paths and in response to any number and type of active and/or passive events. The AGs 236 compute the actions 240 based on the system data 220 and/or the current states 250 and then transmit the actions 240 to the state processing subsystem 260. Each of the actions 240 reflects an operational status of the CDN 110, and any number of the AGs 236 may be configured to compute any number of the actions 240.
For example, in some embodiments, the data processing subsystem 230 may include a single AG 236. For each of the servers 152, the AG 236 may be configured to compute the action 240 associated with the server 152 based on the CPU metrics and the disk metrics associated with the server 152 and included in the system data 220. Further, the AG 236 may be configured to compute the action 240 associated with the server cluster 150 based on the current states 250 associated with the servers 152 included in the server cluster 150. The data processing subsystem 230 conveys the actions 240 to the state processing subsystem 260 in any technically feasible fashion. For example, in some embodiments, the AG 236 may perform write operations to specific locations in the memory 146 to convey the actions 240 to the state processing subsystem 260. In alternate embodiments, the orchestrator 232 may transmit the actions 240 to the state processing subsystem 260 via any communication mechanism and protocol. As shown, the state processing subsystem 260 includes, without limitation, any number of finite state machines (FSM) 270. In general, each of the FSMs 270 is a computational model that is implemented in software, hardware, or any combination thereof, in any technically feasible fashion. For example, in some embodiments, each of the FSMs 270 may be included in a software application that is stored in the memory 146 of the computing machine instance 140 and executes on the CPU 142. At any given time, each of the FSMs 270 is considered to be in the current state 250 of the FSM 270. The current state 250 of a particular FSM 270 is one of any number of states associated with the FSM 270. For example, if a particular FSM 270 is associated with the three states “OK,” “broken,” and “degraded,” then at any given time the current state 250 of the FSM 270 is one of “OK,” “broken,” or “degraded.” In operation, the FSM 270 receives the action 240 associated with the FSM 270 and computes a next state based on the action 240 and the current state 250 of the FSM 270. If the next state does not equal the current state 250, then the FSM 270 executes a state transition from the current state 250 to the next state, and the next state becomes the current state 250 of the FSM 270. As part of executing a state transition, the FSM 270 also generates an event 280 that corresponds to the state transition. By contrast, if the next state equals the current state, then the FSM 270 does not execute a state transition and does not generate the event 280. The process of computing new states and performing a state transition based on whether the new state is different from the current state 250 is referred to herein as “selectively executing state transitions.” Upon receiving the actions 240 from the data processing subsystem 230, the FSMs 270 selectively execute state transitions. The state processing subsystem 260 then feeds back the current states 250 to the data processing subsystem 230 and transmits the events 280 to the event processing subsystem 290. In this fashion, the AGs 236 included in the data processing subsystem 230 and the FSMs 270 included in the state processing subsystem 260 collaborate to generate the events 280. The event processing subsystem 290 includes, without limitation, any number of event handlers 292. Upon receiving a particular event 280, the event handler 292 associated with the event 280 responds to the event 280 based on rules 294 that are implemented in the event handler 292.
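The selective state transition behavior described above is compact enough to sketch directly: an FSM computes a next state from its current state 250 and an incoming action 240, and it transitions and generates an event 280 only when the two states differ. The Python class below is a hedged illustration; the transition-table representation and the on_event callback are assumptions, not the patented implementation.

    class FSM:
        # Sketch of an FSM 270 that selectively executes state transitions.
        def __init__(self, initial_state, transitions, on_event):
            self.current_state = initial_state
            self.transitions = transitions  # maps (state, action) -> next state
            self.on_event = on_event        # stands in for the event hand-off

        def apply(self, action):
            next_state = self.transitions.get((self.current_state, action),
                                              self.current_state)
            if next_state != self.current_state:
                # Execute the state transition and generate an event 280.
                event = (self.current_state, next_state)
                self.current_state = next_state
                self.on_event(event)
            # Otherwise no transition is executed and no event is generated.

    fsm = FSM("server OK",
              {("server OK", "break"): "server broken",
               ("server broken", "repair"): "server OK"},
              on_event=lambda e: print("event 280:", e))
    fsm.apply("break")  # event 280: ('server OK', 'server broken')
    fsm.apply("break")  # next state equals current state, so no event

Here the on_event callback plays the role of the hand-off to the event processing subsystem 290 and its rule-driven event handlers 292.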
The rules 294 may be specified in any technically feasible fashion (e.g., software code, configuration files, etc.) and may cause the event processing subsystem 290 to perform any type of monitoring and/or management functionality. In general, the event processing subsystem 290 may be configured to respond to the events 280 in any technically feasible fashion that either indirectly or directly improves the efficiency of the CDN 110. For example, in some embodiments, upon receiving the event 280 that reflects the operation of the server cluster 150(1), the corresponding event handler 292 may configure the CDN monitoring subsystem 170 to update a graphical user interface with the status of the server cluster 150(1) based on the rules 294. In other embodiments, upon receiving the event 280 that reflects the operation of the server cluster 150(1), the corresponding event handler 292 may configure the control servers 160 included in the CDN 110 to discontinue routing requests to the server cluster 150(1). Managing the Content Delivery Network (CDN) FIG. 3 illustrates how the content delivery network (CDN) monitoring system 170 of FIG. 2 monitors the server cluster 150(1), according to various embodiments of the present invention. For explanatory purposes, the server cluster 150(1) includes the servers 152(1) and 152(2). In alternate embodiments, the CDN 110 may include any number of servers 152 distributed between any number of server clusters 150. In yet other embodiments, the CDN 110 may be associated with any type of objects arranged in any technically feasible fashion. Further, the CDN monitoring system 170 may be configured to assess the operation of any number of these objects based on any number of the current states 250 and any type of the system data 220. The server 152(1) is associated with the system data 220 that includes, without limitation, a CPU metric 312(1) and a disk metric 314(1). Similarly, the server 152(2) is associated with the system data 220 that includes, without limitation, a CPU metric 312(2) and a disk metric 314(2). The CPU metrics 312(1:2) reflect the operation of the CPUs 142 included in the computing machine instances 140 that are configured as the servers 152(1:2). The disk metrics 314(1:2) reflect the operation of disks managed by the servers 152(1:2). In alternate embodiments, the system data 220 may include any number and type of data that reflects the operation of any number of objects associated with the CDN 110. Among other things, the data processing subsystem 230 includes, without limitation, the AGs 236(1:4). The AG 236(1) computes the action 240(1) based on the CPU metric 312(1), the AG 236(2) computes the action 240(2) based on the disk metric 314(1), the AG 236(3) computes the action 240(3) based on the CPU metric 312(2), and the AG 236(4) computes the action 240(4) based on the disk metric 314(2). In alternate embodiments, any number of the AGs 236 may be configured to compute any number of the actions 240 based on the system data 220 and the current states 250 in any combination. For example, in some embodiments, a single AG 236 may be configured to compute one action 240 based on the CPU metric 312(1) and the disk metric 314(1) and another action 240 based on the CPU metric 312(2) and the disk metric 314(2). The state processing subsystem 260 includes, without limitation and among other things, the FSMs 270(1:4) that selectively execute state transitions based on the actions 240(1:4).
More specifically, the FSM 270(1) selectively executes state transitions based on the current state 250(1) of the FSM 270(1) and the action 240(1) that is associated with the CPU metric 312(1). The FSM 270(2) selectively executes state transitions based on the current state 250(2) of the FSM 270(2) and the action 240(2) that is associated with the disk metric 314(1). The FSM 270(3) selectively executes state transitions based on the current state 250(3) of the FSM 270(3) and the action 240(3) that is associated with the CPU metric 312(2). The FSM 270(4) selectively executes state transitions based on the current state 250(4) of the FSM 270(4) and the action 240(4) that is associated with the disk metric 314(2). For explanatory purposes, the current state 250(1) of the FSM 270(1) is depicted as “CPU broken,” reflecting that the CPU metric 312(1) indicates that the CPU 142 included in the server 152(1) is not operating as expected. By contrast, the current state 250(2) of the FSM 270(2) is depicted as “disk OK,” reflecting that the disk metric 314(1) indicates that a disk managed by the server 152(1) is operating as expected. In a similar fashion, the current state 250(3) indicates that the CPU 142 included in the server 152(2) is operating as expected, and the current state 250(4) indicates that the disk managed by the server 152(2) is operating as expected. As part of executing a state transition, the current state of the FSM 270 changes. For example, after the FSM 270(1) executes a state transition from the current state 250 “CPU broken” to a new state “CPU OK,” the current state 250 of the FSM 270(1) is “CPU OK.” Further, although not shown in FIG. 3, as part of executing a state transition, the FSM 270 transmits a corresponding event 280 to the event processing subsystem 290. The FSM 270 may be implemented in any technically feasible fashion in software, hardware, or any combination thereof. Further, the FSM 270 may execute state transitions in any technically feasible fashion and based on any combination of actions 240, the current state 250, and any other type of information that the FSM 270 is configured to process. As described previously herein, the state processing subsystem 260 feeds back the current states 250 to the data processing subsystem 230. In particular, as shown, the state processing subsystem 260 feeds back the current state 250(1) “CPU broken,” the current state 250(2) “disk OK,” the current state 250(3) “CPU OK,” and the current state 250(4) “disk OK” to the data processing subsystem 230. As also shown, the AG 236(5) included in the data processing subsystem 230 aggregates the current state 250(1) and the current state 250(2) to generate the action 240(5) that reflects the operation of the server 152(1). Similarly, the AG 236(6) included in the data processing subsystem 230 aggregates the current state 250(3) and the current state 250(4) to generate the action 240(6) that reflects the operation of the server 152(2). The AGs 236 may compute the actions 240 in any technically feasible fashion. For example, based on the current state 250(1) “CPU broken” and the current state 250(2) “disk OK,” the AG 236(5) may generate the action 240(5) “some server components broken.” The FSM 270(5) included in the state processing subsystem 260 selectively executes state transitions that change the current state 250(5) based on the action 240(5) and the current state 250(5).
Similarly, the FSM 270(6) included in the state processing subsystem 260 selectively executes state transitions that change the current state 250(6) based on the action 240(6) and the current state 250(6). As shown, after processing the actions 240(5) and the actions 240(6), the state processing subsystem 260 feeds back the current state 250(5) “server degraded” that reflects the operation of the server 152(1) to the data processing subsystem 230. In a similar fashion, the state processing subsystem 260 feeds back the current state 250(6) “server OK” that reflects the operation of the server 152(2) to the data processing subsystem 230. As a general matter, the current states 250(5:6) are dynamically updated outputs of the state processing subsystem 260 and dynamically updated inputs to the data processing subsystem 230. Subsequently, the AG 236(7) included in the data processing subsystem 230 aggregates the current state 250(5) and the current state 250(6) to generate the action 240(7) that reflects the operation of the server cluster 150(1). The AG 236(7) may compute the action 240(7) in any technically feasible fashion. For example, because the current state 250(5) is “server degraded” and the current state 250(6) is “server OK,” the AG 236(7) may generate the action 240(7) “most servers broken.” Finally, in response to the action 240(7), the FSM 270(7) included in the state processing subsystem 260 selectively executes state transitions that change the current state 250(7). Accordingly, the current state 250(7) of the FSM 270(7) reflects the operation of the server cluster 150(1) as an aggregation of the operation of the servers 152(1:2) that are included in the server cluster 150(1). For example, because the action 240(7) is “most servers broken,” the FSM 270(7) may transition from the current state 250(7) “OK” to the current state 250(7) “broken.” As FIG. 3 illustrates, the FSMs 270 enable the CDN monitoring subsystem 170 to mirror complex interactions between objects included in the CDN 110. Consequently, the CDN monitoring subsystem 170 may accurately track the current states 250 that reflect the operation of different aspects of CDN 110 with respect to the entire CDN 110. FIG. 4 illustrates how the finite state machines (FSMs) 270 of FIG. 2 monitor the server cluster 150(1), according to various embodiments of the present invention. For explanatory purposes, the server cluster 150(1) includes the servers 152(1:12). In alternate embodiments, the CDN 110 may include any number of servers 152 organized in any fashion. As depicted by lightly shaded boxes, the six servers 152(1), 152(3), 152(4), 152(6), 152(7), and 152(9) are not functional. By contrast, as depicted by unshaded boxes, the five servers 152(2), 152(5), 152(8), 152(10), and 152(11) are functional. Finally, as depicted by a heavily shaded box, the operational status of server 152(12) changes from functional to not functional. As described previously herein, the data processing subsystem 230 generates the actions 240 associated with the server 152(12) based on the operational status of the server 152(12). In particular, the data processing subsystem 230 generates the action 240 “break” associated with the server 152(12). The FSM 270(12) is included in the state processing subsystem 260, and the current state 250(12) of the FSM 270(12) reflects the operational status of the server 152(12).
More precisely, the FSM 270(12) selectively executes state transitions based on the current state 250(12) of the FSM 270(12) and the actions 240 associated with the server 152(12). As depicted with a bold state transition arrow and a bold state bubble, upon receiving the action 240 “break,” the FSM 270(12) executes a state transition from the current state 250(12) “server OK” to the current state 250(12) “server broken.” The AG 236(13) is included in the data processing subsystem 230 and is associated with the server cluster 150(1). As shown in table form, the AG 236(13) computes the action 240 associated with the server cluster 150(1) based on the current states 250(1:12) associated with the servers 152(1:12). More specifically, if twelve of the current states 250(1:12) equal “OK,” then the AG 236(13) computes the action 240 “none broken.” If fewer than twelve but more than seven of the current states 250(1:12) equal “OK,” then the AG 236(13) computes the action 240 “some broken.” If seven or fewer of the current states 250(1:12) equal “OK,” then the AG 236(13) computes the action 240 “most broken.” As depicted in bold, upon receiving the current state 250(12) “server broken,” the AG 236(13) determines that five of the current states 250(1:12) equal “server OK,” and computes the action 240 “most broken.” The FSM 270(13) is included in the state processing subsystem 260, and the current state 250(13) of the FSM 270(13) reflects the operational status of the server cluster 150(1). As depicted with a bold state transition arrow and a bold state bubble, upon receiving the action 240 “most broken” from the AG 236(13), the FSM 270(13) executes a state transition from the current state 250(13) “cluster degraded” to the current state 250(13) “cluster broken.” Notably, because the current states 250(1:12) reflect the operational status of the servers 152(1:12) included in the server cluster 150(1) and the current state 250(13) is an aggregation of the current states 250(1:12), the current state 250(13) accurately reflects the operational status of the server cluster 150(1). In this fashion, the CDN monitoring system 170 indirectly monitors the operational status of the server cluster 150(1) via the operational statuses of the servers 152(1:12). FIG. 5 is a flow diagram of method steps for managing a content delivery network, according to various embodiments of the present invention. Although the method steps are described with reference to the systems of FIGS. 1-4, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention. Please note that, for purposes of discussion only, it is assumed that the CDN monitoring system 170 is configured to monitor the CDN 110 via the finite state machines (FSMs) 270. At any given point in time, the FSM 270(i) is in the current state 250(i). In alternate embodiments, the CDN monitoring system 170 may include any type of devices that store any number of current states 250 that are relevant to the operation of any number, combination, and type of machines and/or networks that deliver digital content. As shown, a method 500 begins at step 502, where the state processing subsystem 260 sets the current states 250 of the finite state machines (FSMs) 270 to initial states. The state processing subsystem 260 may configure the FSMs 270 and determine the initial states in any technically feasible fashion that is consistent with monitoring the operation of the CDN 110.
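The threshold table implemented by the AG 236(13) maps the count of “server OK” states among the twelve servers onto one of three cluster-level actions. A direct Python rendering of that logic (the function and variable names are assumed for illustration):

    def cluster_action(server_states):
        # Thresholds of the AG 236(13): twelve OK yields "none broken",
        # more than seven OK yields "some broken", and seven or fewer OK
        # yields "most broken".
        ok = sum(1 for state in server_states if state == "server OK")
        if ok == len(server_states):
            return "none broken"
        if ok > 7:
            return "some broken"
        return "most broken"

    # Five servers remain OK after the server 152(12) breaks, so the computed
    # action is "most broken", matching the example above.
    states = ["server OK"] * 5 + ["server broken"] * 7
    print(cluster_action(states))  # most broken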
As one example of step 502, system engineers may configure the CDN monitoring subsystem 170 to implement a set of FSMs 270 that mirrors the deployment of the server clusters 150, the servers 152 included in the server clusters 150, and the networks 212 associated with the CDN 110. Further, the state processing subsystem 260 may determine the initial states based on the configuration 218. At step 504, the data processing subsystem 230 receives the system data 220 from the data sources 210 and the current states 250 from the state processing subsystem 260. The data processing subsystem 230 may receive the system data 220 via any number and type of communication paths using any combination of communication protocols. Further, the data processing subsystem 230 may receive the system data 220 in response to any number of active or passive events. For example, in some embodiments, the data processing subsystem 230 includes the pollers 234 that request the system data 220 from the data sources 210. In alternate embodiments, the data sources 210 are configured to periodically transmit the system data 220 to the data processing subsystem 230. The system data 220 and the data sources 210 may include any object that may impact the performance of the CDN 110. For example, the system data 220 may include, without limitation, the CPU metric 312 and the disk metric 314. Among other things, the data sources 210 may include the servers 152, the networks 212, the caches 214, the clients 216, and the configuration 218 associated with the CDN 110. At step 506, the data processing subsystem 230 computes the actions 240 based on the system data 220 and the current states 250. As part of computing each of the actions 240, the data processing subsystem 230 may aggregate any number of the current states 250. For example, the data processing subsystem 230 may determine the action 240 associated with the server cluster 150 based on the current states 250 associated with the servers 152 included in the server cluster 150. In general, the inputs to the data processing subsystem 230 are the system data 220 and the current states 250, and the outputs of the data processing subsystem 230 are the actions 240. At step 508, the data processing subsystem 230 transmits the actions 240 to the state processing subsystem 260 via any communication mechanism and protocol. At step 510, each of the FSMs 270 receives the action 240 associated with the FSM 270 and computes a new state based on the action 240 and the current state 250 of the FSM 270. At step 512, for each of the FSMs 270, if the new state differs from the current state 250 of the FSM 270, then the FSM 270 executes a state transition. As part of a state transition, the current state 250 of the FSM 270 changes and the FSM 270 generates the event 280. At step 514, the state processing subsystem 260 feeds back the current states 250 to the data processing subsystem 230. In this manner, the state processing subsystem 260 configures the data processing subsystem 230 to compute the “new” actions 240 based on the “new” current states 250. Such a feedback process enables the data processing subsystem 230 to compute actions 240 that reflect aggregations of the current states 250 and/or the system data 220. At step 516, the state processing subsystem 260 transmits the events 280 to the event processing subsystem 290. At step 518, the event handlers 292 included in the event processing subsystem 290 respond to the events 280 based on the rules 294.
The rules 294 may cause the event processing subsystem 290 to perform any type of management functionality. For example, in some embodiments, the event processing subsystem 290 may update a graphical user interface with the status of each of the server clusters 150 based on the rules 294. In other embodiments, the event processing subsystem 290 may configure the control servers 160 included in the CDN 110 to discontinue routing requests to a particular server cluster 150 that is associated with the current state 250 "cluster degraded." In general, the event processing subsystem 290 may be configured to respond to the events 280 in any technically feasible fashion that either indirectly or directly improves the efficiency of the CDN 110. In sum, the disclosed techniques may be used to monitor, troubleshoot, and maintain content delivery networks (CDNs). Notably, a CDN monitoring system is configured to compute current states that are relevant to the operation of the CDN. Further, the CDN monitoring system may be configured to compute new current states based on any number of current states. In operation, a data processing subsystem receives system data from data sources and current states from a state processing subsystem via a feedback loop. Action generators (AGs) included in the data processing subsystem operate on the system data and current states and generate actions that reflect changes in the operation of the CDN. Finite state machines (FSMs) included in the state processing subsystem execute state transitions from current states to new states based on the actions and the current states. The state processing subsystem feeds back the new states, now the current states, to the data processing subsystem. In this fashion, the data processing subsystem configures the state processing subsystem via actions and the state processing subsystem configures the data processing subsystem via current states. Further, for each transition, the associated FSM generates an event, and transmits the event to an event processing subsystem. Event handlers in the event processing subsystem process the events based on a set of rules. The event handlers may implement any type of management functionality. For example, some event handlers may provide performance metrics to a graphical user interface. Other event handlers may initiate automated debugging, test, or repair procedures. Advantageously, the state processing subsystem enables the CDN monitoring system to interpret the operation of the CDN in a holistic manner. Consequently, the CDN monitoring system may effectively mitigate any issues that reduce the performance of the CDN, jeopardize the availability of content assets stored in the CDN, and/or negatively impact the overall quality of experience provided to clients. In particular, because each current state may depend on other current states, the CDN monitoring system may be configured to monitor a hierarchy of current states that reflects the deployment of the CDN. For example, the data processing subsystem may compute actions associated with a current state of a particular server cluster based on the current states of servers that are included in the server cluster. Consequently, unlike conventional approaches to CDN management that rely primarily on metrics associated with individual components, the CDN monitoring system may be configured to perform contextually-relevant operations based on the health of the CDN at any granularity.
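As one hedged illustration of rule-driven event handling, the sketch below dispatches events against a list of (predicate, handler) rules. The specific rules and handler bodies are invented examples of the kinds of management functionality described above, such as rerouting traffic or refreshing a dashboard.

```python
# Hypothetical rule table for event handlers: each rule pairs a predicate
# over an event with a management response. Handler bodies are placeholders
# for, e.g., GUI updates or control server reconfiguration.

def on_cluster_broken(event):
    name, old_state, new_state = event
    print(f"reroute traffic away from {name}: {old_state} -> {new_state}")

def on_any_transition(event):
    print(f"dashboard update: {event}")

RULES = [
    (lambda e: e[2] == "cluster broken", on_cluster_broken),
    (lambda e: True, on_any_transition),
]

def dispatch(events):
    """Apply every matching rule to every event."""
    for event in events:
        for predicate, handler in RULES:
            if predicate(event):
                handler(event)

dispatch([("server cluster 150(1)", "cluster degraded", "cluster broken")])
```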
The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable processors or gate arrays. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. 
In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. 15043429 netflix, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 05:26PM Mar 25th, 2022 05:26PM Netflix Consumer Services General Retailers
nasdaq:nflx Netflix Nov 6th, 2018 12:00AM May 8th, 2017 12:00AM https://www.uspto.gov?id=US10123059-20181106 Fast start of streaming digital media playback with deferred license retrieval One embodiment of the present invention sets forth a technique for deferring license retrieval when streaming digital media content. The perceived delay between the time a user selects the protected digital media content and the time when playback of the protected digital media content begins is reduced because retrieval and playback of an unprotected version of a portion of the digital media content starts before the license and protected version of the digital media content is received. The unprotected version includes fast start streams of audio and video data that may be encoded at a lower bit rate than the protected version in order to quickly transfer the fast start streams from the content server to the playback device. 10123059 1. A computer-implemented method, comprising: retrieving one or more intervals of a fast start data stream comprising a sequence of intervals of an unprotected portion of a digital media content title; requesting a license that authorizes playback of a protected data stream comprising a sequence of intervals of a protected portion of the digital media content title; playing at least one interval of the fast start data stream while the license is being acquired; and once the license is acquired, transitioning from an interval of the unprotected portion to an interval of the protected data stream, wherein a starting boundary of the interval of the protected data stream and an ending boundary of the interval of the unprotected portion correspond to substantially the same playback offset within the digital media content title. 2. The computer-implemented method of claim 1, further comprising retrieving one or more of the intervals of the protected data stream. 3. The computer-implemented method of claim 1, further comprising switching from playing the one or more intervals of the fast start data stream to playing a plurality of intervals of the protected data stream. 4. The computer-implemented method of claim 1, further comprising decrypting each interval of the protected data stream as the interval of the protected data stream is retrieved. 5. The computer-implemented method of claim 1, wherein the unprotected portion of the digital media content title comprises a preview clip. 6. The computer-implemented method of claim 1, further comprising, while playing the at least one of the one or more intervals of the fast start data stream, determining that a user intends to play the protected data stream. 7. The computer-implemented method of claim 1, wherein a bit rate associated with the fast start data stream is less than a bit rate associated with the protected data stream. 8. The computer-implemented method of claim 1, wherein the unprotected portion of the digital media content title and the protected portion of the digital media content title include video data and audio data. 9. The computer-implemented method of claim 1, wherein the unprotected portion of the digital media content title comprises a beginning portion of the digital media content title. 10.
A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform the steps of: retrieving one or more intervals of a fast start data stream comprising a sequence of intervals of an unprotected portion of a digital media content title; requesting a license that authorizes playback of a protected data stream comprising a sequence of intervals of a protected portion of the digital media content title; playing at least one interval of the fast start data stream while the license is being acquired; and once the license is acquired, transitioning to the protected data stream by playing an interval of the protected data stream immediately after playing an interval of the unprotected portion, wherein a starting boundary of the interval of the protected data stream and an ending boundary of the interval of the unprotected portion correspond to substantially the same playback offset within the digital media content title. 11. The non-transitory computer-readable medium of claim 10, further comprising retrieving one or more of the intervals of the protected data stream. 12. The non-transitory computer-readable medium of claim 10, further comprising switching from playing the one or more intervals of the fast start data stream to playing a plurality of intervals of the protected data stream. 13. The non-transitory computer-readable medium of claim 10, further comprising decrypting each interval of the protected data stream as the interval of the protected data stream is retrieved. 14. The non-transitory computer-readable medium of claim 10, wherein the unprotected portion of the digital media content title comprises a preview clip. 15. The non-transitory computer-readable medium of claim 10, further comprising, while playing the at least one of the one or more intervals of the fast start data stream, determining that a user intends to play the protected data stream. 16. The non-transitory computer-readable medium of claim 10, wherein a bit rate associated with the fast start data stream is less than a bit rate associated with the protected data stream. 17. The non-transitory computer-readable medium of claim 10, wherein the unprotected portion of the digital media content title and the protected portion of the digital media content title include video data and audio data. 18. The non-transitory computer-readable medium of claim 10, wherein the unprotected portion of the digital media content title comprises a beginning portion of the digital media content title. 19. A system, comprising: a memory storing instructions; and a processor that is coupled to the memory and, when executing the instructions, is configured to: retrieve one or more intervals of an unprotected portion of a digital media content title; request a license that authorizes playback of a protected data stream comprising a sequence of intervals of a protected portion of the digital media content title; play at least one interval of the fast start data stream while the license is being acquired; and once the license is acquired, transition from an interval of the unprotected portion to an interval of the protected data stream, wherein a starting boundary of the interval of the protected data stream and an ending boundary of the interval of the unprotected portion correspond to substantially the same playback offset within the digital media content title. 20.
The system of claim 19, wherein the processor is further configured to retrieve one or more of the intervals of the protected data stream. 21. The system of claim 19, wherein the processor is further configured to switch from playing the one or more intervals of the unprotected portion to playing a plurality of intervals of the protected data stream. 22. The system of claim 19, wherein the processor is further configured to decrypt each interval of the protected data stream as the interval of the protected data stream is retrieved. 23. The system of claim 19, wherein the unprotected portion of the digital media content title comprises a preview clip. 24. The system of claim 19, wherein the processor is further configured to, while playing the at least one of the one or more intervals of the unprotected portion, determine that a user intends to play the protected data stream. 25. The system of claim 19, wherein a bit rate associated with the unprotected portion is less than a bit rate associated with the protected data stream. 26. The system of claim 19, wherein the unprotected portion of the digital media content title and the protected portion of the digital media content title include video data and audio data. 27. The system of claim 19, wherein the unprotected portion of the digital media content title comprises a beginning portion of the digital media content title. 27 CROSS-REFERENCE TO RELATED APPLICATIONS This application is a continuation of United States patent application titled "FAST START OF STREAMING DIGITAL MEDIA PLAYBACK WITH DEFERRED LICENSE RETRIEVAL," filed Jun. 22, 2011 and having Ser. No. 13/166,693. The subject matter of this related application is hereby incorporated herein by reference. BACKGROUND OF THE INVENTION Field of the Invention Embodiments of the present invention relate generally to digital media and, more specifically, to a fast start of streaming digital media content with deferred license retrieval. Description of the Related Art Digital media content distribution systems conventionally include a content server, a content player, and a communications network connecting the content server to the content player. The content server is configured to store digital media content files, which can be downloaded from the content server to the content player. Each digital media content file corresponds to a specific identifying title, such as "Gone with the Wind," which is familiar to a user. The digital media content file typically includes sequential content data, organized according to playback chronology, and may comprise audio data, video data, or a combination thereof. The content player is configured to download and play a digital media content file, in response to a user request selecting the title for playback. The process of playing the digital media content file includes decoding audio and video data into a synchronized audio signal and video signal, which may drive a display system having a speaker subsystem and a video subsystem. Playback typically involves a technique known in the art as "streaming," whereby the content server sequentially transmits the digital media content file to the content player, and the content player plays the digital media content file while the content data comprising the digital media content file is being received. When a user initiates playback of the digital media content for a digital media content title that is protected, there is a delay before the playback of the selected digital media content begins.
The delay is a result of the time needed for the content player to request the selected digital media content and for the content server to locate and transmit the protected digital media content file and the license needed for playback of the protected digital media content file to the content player. For example, a license for DRM (Digital Rights Management) encryption of the digital media content title must be retrieved before starting playback of the protected digital media content files, because the digital media content that is retrieved is protected. Additionally, a minimum amount of video data must be received by the content player before decoding of the video data can begin. Playback of the video data may only begin after a full GOP (group of pictures) has been received and decoded. As the foregoing illustrates, what is needed in the art are improved techniques that minimize the perceived delay between the time a user selects the digital media content and the time when playback of the protected digital media content begins. SUMMARY OF THE INVENTION One embodiment of the present invention sets forth a method for a fast start of streaming digital media content with deferred license retrieval. The method comprises the steps of receiving a playback selection for a digital media content title and retrieving one or more intervals of a fast start data stream comprising a sequence of intervals encoding an unprotected portion of data of the digital media content title. A license is requested for the digital media content title that authorizes playback of a protected data stream comprising a sequence of intervals encoding protected data of the digital media content title. At least one of the one or more intervals of the fast start data stream is played before the license for the digital media content title is acquired. One advantage of the disclosed technique is that the perceived delay between the time a user selects the protected digital media content and the time when playback of the protected digital media content begins is reduced because playback of an unprotected version of a portion of the digital media content starts before the license and protected version of the digital media content is received. The unprotected version of a portion of the digital media content includes fast start streams of audio and video data that may be encoded at a lower bit rate than the protected version in order to quickly transfer the fast start streams from the content server to the playback device. BRIEF DESCRIPTION OF THE DRAWINGS So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments. FIG. 1 illustrates a content distribution system configured to implement one or more aspects of the present invention; FIG. 2 is a more detailed view of the streaming server of FIG. 1, according to one embodiment of the invention; FIG. 3 is an illustration of a transition from fast start streams, generated by the fast start streams generator of FIG. 2, to protected streams after a license is acquired, according to one embodiment of the invention; FIG.
4A is an illustration of the protected streams and the fast start streams encoded using fixed rate audio intervals, according to one embodiment of the invention; FIG. 4B is an illustration of the protected streams and the fast start streams encoded using variable rate intervals, according to one embodiment of the invention; FIG. 5A is a more detailed view of the streaming device of FIG. 1, according to one embodiment of the invention; and FIG. 5B is a flow diagram of method steps for transitioning from playing fast start streams to playing protected streams after a license for protected digital media content is requested and acquired, according to one embodiment of the invention. DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention. FIG. 1 illustrates a content distribution system 100 configured to implement one or more aspects of the invention. As shown, the content distribution system 100 includes a streaming server 102, a communications network 106, a streaming device 108, and output device(s) 104. The content distribution system 100 may include a plurality of communications networks 106, such as routers and switches, configured to facilitate data communication between the streaming server 102 and the streaming device 108. The output device(s) 104 is configured to produce a display image and associated sound and is typically directly coupled to the streaming device 108 by a wired or wireless connection. Persons skilled in the art will recognize that many technically feasible techniques exist for transmitting data between the streaming server 102, the streaming device 108, and the output device(s) 104, including technologies practiced in deploying the well-known internet communications network. The streaming server 102 is a computer system configured to encode video and/or audio streams associated with digital media content files for streaming. The content distribution system 100 may include one or more streaming servers 102, where each streaming server 102 is configured to perform all the functions needed to encode the video and/or audio streams or where each streaming server 102 is configured to perform a particular function needed to encode the video and/or audio streams. The digital media content files including the encoded video and audio streams are retrieved by the streaming device 108 via the communications networks 106 for output to the output device(s) 104. As shown in FIG. 1, audio data 103 and video data 101 represent the encoded audio and video streams that are transmitted from the streaming server 102 to the streaming device 108. The streaming device 108 passes the audio data 103 through to the output device 104 as the audio signal 103, unless the audio data 103 is protected. The audio signal 103 is unchanged when compared with the audio data 103 except to remove any padding bits added by the streaming server 102. When the audio data 103 is protected, i.e., encrypted, the streaming device 108 decrypts the audio data 103 before outputting the decrypted audio data to the output device 104. When the video data 101 is protected, the video data 101 is first decrypted by the streaming device 108.
The decrypted video data may then be uncompressed (if in a compressed format) or decoded into raw frames or PCM (pulse code modulated) intervals and output by the streaming device 108 to the output device(s) 104 as the video signal 105. The output device(s) 104 may include a display device and speaker device for presenting video image frames and generating acoustic output, respectively. The streaming server 102 comprises one or more computer systems configured to serve download requests for digital media content files from the streaming device 108. The digital media content files may reside on a mass storage system accessible to the computer system. The mass storage system may include, without limitation, direct attached storage, network attached file storage, or network attached block-level storage. The digital media content files may be formatted and stored on the mass storage system using any technically feasible technique. A data transfer protocol, such as the well-known hyper-text transfer protocol (HTTP), may be used to download digital media content files from wherever the digital media content files are stored to the streaming device 108. The streaming device 108 may comprise a computer system, a set top box, a mobile device such as a mobile phone, or any other technically feasible computing platform that has network connectivity and is coupled to or includes the output device(s) 104. The streaming device 108 is configured for streaming, i.e., to download units of a video stream encoded to a specific playback bit rate. In one embodiment, the streaming device 108 is configured to switch to downloading subsequent units of a video stream encoded to a different playback bit rate based on prevailing bandwidth conditions within the communications network 106. As bandwidth available within the communications network 106 becomes limited, the streaming device 108 may select a video stream encoded to a lower playback bit rate. As the bandwidth increases, a video stream encoded to a higher playback bit rate may be selected. The audio stream is typically a much lower playback bit rate than the corresponding video stream and is therefore not typically encoded at different playback bit rates. Although, in the above description, the content distribution system 100 is shown with one streaming device 108, persons skilled in the art will recognize that the architecture of FIG. 1 contemplates only an exemplary embodiment of the invention. Other embodiments may include any number of streaming devices 108. Thus, FIG. 1 is in no way intended to limit the scope of the present invention.
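The bit rate switching behavior described above can be summarized in a short sketch. The three-rung bit rate ladder and the selection rule are assumptions for illustration, not values taken from the patent.

```python
# Hedged sketch of adaptive stream selection: pick the highest playback bit
# rate that the prevailing bandwidth can support, falling back to the lowest.

BIT_RATE_LADDER_KBPS = [500, 1500, 3000]  # hypothetical low/medium/high rungs

def select_stream(measured_bandwidth_kbps):
    """Choose the next stream's bit rate from the measured bandwidth."""
    feasible = [r for r in BIT_RATE_LADDER_KBPS if r <= measured_bandwidth_kbps]
    return max(feasible) if feasible else min(BIT_RATE_LADDER_KBPS)

assert select_stream(2000) == 1500  # bandwidth becomes limited -> lower rate
assert select_stream(5000) == 3000  # bandwidth increases -> higher rate
```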
FIG. 2 is a more detailed view of the streaming server 102 of FIG. 1, according to one embodiment of the invention. As shown, the streaming server 102 includes a central processing unit (CPU) 202, a system disk 204, an input/output (I/O) devices interface 206, a network interface 208, an interconnect 210, and a system memory 212. The CPU 202 is configured to retrieve and execute programming instructions stored in the system memory 212. Similarly, the CPU 202 is configured to store application data and retrieve application data from the system memory 212. The interconnect 210 is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU 202, the system disk 204, the I/O devices interface 206, the network interface 208, and the system memory 212. The I/O devices interface 206 is configured to receive input data from I/O devices 222 and transmit the input data to the CPU 202 via the interconnect 210. For example, I/O devices 222 may comprise one or more buttons, a keyboard, and a mouse or other pointing device. The I/O devices interface 206 is also configured to receive output data from the CPU 202 via the interconnect 210 and transmit the output data to the I/O devices 222. The system disk 204, such as a hard disk drive or flash memory storage drive or the like, is configured to store non-volatile data such as encoded video streams. The encoded video streams can then be retrieved by the streaming device 108 via the communications network 106. The network interface 208 is coupled to the CPU 202 via the interconnect 210 and is configured to transmit and receive packets of data via the communications network 106. In one embodiment, the network interface 208 is configured to operate in compliance with the well-known Ethernet standard. The system memory 212 includes software components that include instructions for encoding one or more audio and video streams associated with a specific content title for streaming. As shown, these software components include a fast start streams generator 214, a video stream encoder 216, an audio stream encoder 224, a sequence header index (SHI) generator 218, and a license manager 220. The video stream encoder 216 executes encoding operations for encoding a video stream to a specific playback bit rate such that the encoded video stream complies with a particular video codec standard, such as VC1, and is configured for streaming. In an alternative embodiment, the video stream can be encoded to comply with a different video codec standard such as MPEG or H.264. In operation, for a particular video stream, the video stream encoder 216 encodes the video stream to different constant bitrates to generate multiple encoded video streams, each encoded video stream associated with a different constant bitrate and, thus, having a different quality. An encoded video stream generated by the video stream encoder 216 includes a sequence of groups of pictures (GOPs), each GOP comprising multiple image frames of video data. In practice, a GOP may include multiple scenes or portions of a scene. A GOP typically corresponds to 2.5 seconds or 10 seconds of playback time, although other durations may also be used. A GOP is specific to video data and one or more GOPs are included in an interval. For each interval of video data, there may be a corresponding interval of audio data. The video and audio streams each include a sequence of intervals. The SHI generator 218 generates a sequence header index associated with each encoded video stream. To generate the sequence header index, the SHI generator 218 first searches the encoded video stream for the key frames associated with the different intervals included in the encoded video stream. The key frames can be located by the SHI generator 218 based on the sequence start codes specified in the sequence headers included in the key frames. For the interval associated with each of the identified key frames, the SHI generator 218 defines a switch point within the sequence header index that stores (i) a data packet number that identifies the data packet that includes the key frame associated with the interval and (ii) the playback offset associated with the interval.
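A hedged sketch of the switch point bookkeeping just described: scan an encoded stream for key frames and record, per interval, the data packet number and the playback offset. The packet representation and the fixed interval duration are assumptions for illustration.

```python
# Illustrative SHI construction: one switch point per key frame, storing the
# packet number and a playback offset derived from the interval's position.

def build_sequence_header_index(packets, interval_seconds=3.0):
    """packets: iterable of (packet_number, is_key_frame) in stream order."""
    switch_points = []
    for packet_number, is_key_frame in packets:
        if is_key_frame:
            offset = len(switch_points) * interval_seconds
            switch_points.append({"packet": packet_number, "offset": offset})
    return switch_points

# Key frames in packets 0, 42, and 97 -> switch points at 0 s, 3 s, and 6 s.
shi = build_sequence_header_index([(0, True), (17, False), (42, True), (97, True)])
assert shi[1] == {"packet": 42, "offset": 3.0}
```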
Again, the playback offset associated with the interval is determined based on the location of the interval in the sequence of intervals included in the encoded video stream. The audio stream encoder 224 executes encoding operations for encoding an audio stream to a specific playback bit rate such that the encoded audio stream is configured for streaming and synchronization with the video stream. The sequence header indexes associated with each encoded video stream that are generated by the SHI generator 218 are also associated with the encoded audio stream. The switch points defined by the SHI generator 218 within the sequence header index store (i) a data packet number that identifies the data packet for the audio data corresponding to each interval of the audio and video data and (ii) the playback offset in the audio data associated with each interval of the audio data. The audio stream encoder 224 and the video stream encoder 216 are configured to generate encoded audio and video streams, respectively, that are protected, e.g., encrypted, ciphered, and the like. In contrast, the fast start streams generator 214 is configured to generate encoded audio and video streams that are not protected. In one embodiment, fast start streams may be generated by the fast start streams generator 214 for a preview clip corresponding to a pivotal moment in a movie. When a preview clip is selected for playback, the fast start streams may be quickly retrieved and played by the streaming device 108. The user may initiate transfer of the protected digital media content while viewing the unprotected digital media content for the same title. Alternatively, the fast start streams may encode the beginning intervals of the protected content. The fast start streams may be precomputed by the streaming server 102 or may be generated on-the-fly, i.e., in real-time, by the streaming server 102 when adequate computation resources are available. The license manager 220 serves requests for licenses associated with protected streams, e.g., encrypted digital content files, received from the streaming device 108. In operation, protected streams transmitted by the streaming server 102 to the streaming device 108 must be decrypted before the digital media content can be played. The license associated with the protected streams is stored in the streaming server 102 and is transmitted to the streaming device 108, which in turn uses the license to decrypt the protected streams. In one embodiment, the license manager 220 functionality may be performed by a license server that is separate from the streaming server 102.
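A minimal sketch of the license exchange follows, assuming an in-memory key store and a simple request method; the API shape is invented for illustration, and real DRM exchanges are far more involved.

```python
# Hypothetical license manager: serves license requests for protected streams.
# This only mirrors the request/return flow described above.

class LicenseManager:
    def __init__(self):
        self._licenses = {"title-123": b"decryption-key-bytes"}

    def request_license(self, title_id):
        """Return the license the device uses to decrypt the protected streams."""
        if title_id not in self._licenses:
            raise KeyError(f"no license provisioned for {title_id}")
        return self._licenses[title_id]

license_blob = LicenseManager().request_license("title-123")
```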
FIG. 3 is an illustration of a transition from fast start streams generated by the fast start streams generator 214 of FIG. 2 to protected streams after a license is acquired, according to one embodiment of the invention. The fast start streams only encode a portion of the protected streams for a digital media content title, such as a preview clip or a few seconds or minutes of the beginning of the digital media content title. At the playback start 301, the streaming device 108 retrieves and begins playing an interval 305 from the fast start streams. The streaming device 108 transitions from the fast start streams to the protected streams at any interval boundary after acquiring the license and retrieving, decrypting, and decoding at least one interval of the protected streams. Alternatively, the streaming device 108 may retrieve and play the entire fast start streams before transitioning to the protected streams, regardless of when the license is acquired. In that case, playback of the protected streams begins at the interval that follows the last interval of the fast start streams, assuming that at least one interval of the protected streams is available for playback, i.e., received, decrypted, and decoded. When at least one interval of the protected streams is not available for playback or if the license has not been acquired after all of the fast start stream intervals are played, then the streaming device 108 may be configured to wait, displaying the last image frame encoded in the video data of the fast start streams. As shown in FIG. 3, the streaming device 108 acquires the license at the acquire license 304, which occurs during playback of the interval 305. The interval 305 that is played is a corresponding interval of the fast start streams. If at least one interval of the protected streams has been received and is ready to be played by the streaming device 108 at the end of playback of the interval 305, then the streaming device 108 transitions from playing the fast start streams to the protected streams at the transition 312 and the interval 310 may be a corresponding interval of the protected stream. If at least one interval of the protected streams has not been received or is not ready to be played by the streaming device 108 at the end of the interval 305, then the interval 310 is a corresponding interval of the fast start streams. Similarly, the interval 315 and/or 320 is a corresponding interval of the fast start streams if at least one interval of the protected streams has not been received or is not ready to be played by the streaming device 108 at the end of the interval 310 and/or 315, respectively. When at least one interval of the protected streams corresponding to the interval 310 has been received and is ready to be played by the streaming device 108 at the end of the interval 310, then the streaming device 108 transitions from playing the fast start streams to the protected streams at the transition 314. Note that the transitions 312 and 314 occur at an interval boundary between intervals 305 and 310 and intervals 310 and 315, respectively.
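The boundary decision sketched below mirrors the FIG. 3 behavior: at each interval boundary, prefer a protected interval that is fully available, fall back to the corresponding fast start interval, and otherwise hold the last frame. The function and flag names are assumptions.

```python
# Hedged sketch of the per-boundary source decision for interval i.

def next_interval(i, license_held, protected_ready, fast_start_len):
    """Pick the source of interval i at an interval boundary."""
    if license_held and protected_ready(i):
        return ("protected", i)        # a transition 312/314-style switch
    if i < fast_start_len:
        return ("fast start", i)       # corresponding unprotected interval
    return ("hold last frame", i)      # wait, displaying the last image frame

# License held, but protected intervals only available from index 2 onward:
ready = lambda i: i >= 2
assert next_interval(1, True, ready, 4) == ("fast start", 1)
assert next_interval(2, True, ready, 4) == ("protected", 2)
```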
The audio stream of the protected streams or fast start streams provides the clock track for synchronous playback of the audio and video streams. Therefore, switching from one audio stream to another audio stream, such as switching from the fast start streams to the protected streams, is only possible when the different audio streams are encoded to have the same playback time intervals and the same playback offsets. The streaming device 108 may be configured to switch between different encoded audio streams and between different encoded video streams. When generating the fast start streams and the protected streams, the streaming server 102 may generate multiple encoded video streams associated with the same content title and encoded to different playback bit rates. The encoding process implemented by the streaming server 102 ensures that, across the different encoded video and audio streams, the intervals are associated with the same playback time interval and that corresponding intervals across the different encoded video and audio streams are associated with the same playback offsets. Therefore, each switch point defined in a sequence header included in one of the encoded video streams associated with a specific content title has a corresponding switch point defined in a sequence header included in each of the other encoded video streams associated with the same content title. Similarly, when multiple encoded audio streams are generated, the audio data corresponding to the interval are associated with the same playback time interval and the same playback offsets. The streaming device 108 may switch between different encoded video streams based on the interval boundaries defined by the corresponding sequence header indices. Importantly, in order to properly switch between the different audio streams, the switch points defined by the SHI generator 218 within the sequence header index for the audio streams must match in terms of time duration, bytes, and indices. FIG. 4A is an illustration of the protected streams 300 and the fast start streams 302 encoded using fixed rate audio intervals by the streaming server 102 of FIGS. 1 and 2, according to one embodiment of the invention. The fast start streams 302 only encode a portion of the protected streams 300, such as a preview clip or a few seconds or minutes of the beginning of the digital media content title. For a portion of the intervals of the protected streams 300, the intervals 302(0), 302(1), 302(2), and 302(3) are generated to produce the fast start streams 302. Intervals 302(0), 302(1), 302(2), and 302(3) of the fast start streams 302 correspond to intervals 300(0), 300(1), 300(2), and 300(3) of the protected streams 300. Fast start stream intervals 302(1), 302(2), and 302(3) are subsequent to the interval 302(0) in the fast start streams 302. In one embodiment, the fast start streams generator 214 precomputes the fast start streams 302. In another embodiment, the fast start streams generator 214 computes the fast start streams 302 on-the-fly when the corresponding content is requested by the streaming device 108. Because a fixed rate encoding is performed, each interval is of equal and constant length in terms of bytes and playback duration. Video data and audio data may both be encoded using constant bit rates to generate the fast start streams 302 having different constant bit rates for the same content and to generate the protected streams 300 having different constant bit rates for the same content. Typically, intervals of the fast start streams 302 are encoded using a lower bit rate compared with the corresponding intervals of the protected streams 300 so that the intervals of the fast start streams 302 may be quickly retrieved by the streaming device 108. The streaming device 108 can efficiently transition between the fast start streams 302 and the protected streams 300 by identifying the appropriate switch points in the sequence header indices. When switching between a currently playing encoded audio stream and a different encoded audio stream, the streaming device 108 searches the sequence header index included in the different encoded audio stream to locate the particular switch point specifying the playback offset associated with the next interval to be played. The streaming device 108 can then switch to the new encoded audio stream and download the interval stored in the data packet specified at the particular switch point for playback.
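In code form, the search just described might look like the following. The switch point dictionaries reuse the shape of the earlier SHI sketch; the lookup-by-offset rule is the only behavior taken from the text.

```python
# Hedged sketch of switching streams via switch points: find the switch point
# in the new stream's SHI whose playback offset matches the next interval.

def find_switch_point(sequence_header_index, target_offset):
    for point in sequence_header_index:
        if point["offset"] == target_offset:
            return point
    raise LookupError(f"no switch point at offset {target_offset}")

new_stream_shi = [
    {"packet": 0, "offset": 0.0},
    {"packet": 42, "offset": 3.0},
    {"packet": 97, "offset": 6.0},
]

# Playing the interval at offset 0 s; the next interval starts at 3 s, so
# download the packet named by the 3 s switch point in the new stream.
assert find_switch_point(new_stream_shi, 3.0)["packet"] == 42
```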
For example, for encoded video streams where each interval was associated with a playback time interval of three seconds, if the first interval associated with the playback offset of zero seconds were currently being played, then the next interval to be played would be associated with the playback offset of three seconds. In such a scenario, the streaming device 108 searches the sequence header associated with the new encoded stream for the particular switch point specifying a playback offset of three seconds. Upon locating the particular switch point, the streaming device 108 would download the interval stored in the data packet specified in the switch point for playback. FIG. 4B is an illustration of protected streams 400 and fast start streams 406 encoded using variable bit rate (VBR) intervals, according to one embodiment of the invention. Rather than encoding the video and audio streams at a fixed bit rate, each interval is encoded based on the content for the respective interval. For example, an interval for a scene of low complexity is encoded to a lower bit rate to "save" bits for scenes having a higher complexity. The average bit rate across a VBR video stream is, thus, not reflective of the bit rate of a particular interval within the VBR video stream. The VBR encoded protected streams 400 includes intervals 304(0), 304(1), 304(2), 304(3), and 304(4) corresponding to intervals 300(0), 300(1), 300(2), 300(3), and 300(4) of the protected streams 300, respectively. The fast start streams 406 is the VBR encoding of the fast start streams 302. Intervals 401, 402, 403, and 404 are the VBR encoded intervals 302(0), 302(1), 302(2), and 302(3), respectively. Note that the interval boundaries are not aligned between the protected streams 400 and the fast start streams 406 due to the VBR encoding. Therefore, the switch points between corresponding intervals in the streams are different and the streaming device 108 cannot easily locate corresponding intervals in the different streams. In order to easily switch between the different streams, the switch points defined by the SHI generator 218 within the sequence header index for the streams match in terms of time duration, bytes, and indices. The fast start streams 410 includes VBR encoded intervals that match the intervals in the VBR encoded protected streams 400 in terms of time duration, bytes, and indices. Intervals 411, 412, 413, and 414 are the encoded intervals that correspond to intervals 304(0), 304(1), 304(2), and 304(3) of the VBR encoded protected streams 400, respectively. The intervals 411, 412, 413, and 414 may be generated by including padding 408 compared with the corresponding intervals 401, 402, 403, and 404 to match the length in bytes of the corresponding intervals of the protected streams 400, e.g., intervals 304(0), 304(1), 304(2), and 304(3). The streaming device 108 may easily locate corresponding intervals in the protected streams 400 and the fast start streams 410 in order to switch between the audio data and/or the video data included in the two streams at any interval boundary.
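A hedged sketch of the padding scheme: each fast start interval is padded out to the byte length of the corresponding protected interval so that switch points in the two VBR streams line up. The zero-byte padding value is an assumption.

```python
# Illustrative alignment of VBR fast start intervals to protected intervals.

def pad_to_match(fast_start_intervals, protected_lengths, pad_byte=b"\x00"):
    """Pad each fast start interval to the matching protected byte length."""
    padded = []
    for data, target_len in zip(fast_start_intervals, protected_lengths):
        if len(data) > target_len:
            raise ValueError("fast start interval exceeds protected interval")
        padded.append(data + pad_byte * (target_len - len(data)))
    return padded

aligned = pad_to_match([b"abc", b"de"], [5, 4])
assert [len(x) for x in aligned] == [5, 4]  # boundaries now match in bytes
```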
Prior to initiating playback, the streaming device 108 may measure available bandwidth from the content server and select a digital media content file having a bit rate that can be supported by the measured available bandwidth. To maximize playback quality, a digital media content file with the highest bit rate not exceeding the measured bandwidth is conventionally selected. To the extent the communications network 106 can provide adequate bandwidth to download the selected digital media content file while satisfying bit rate requirements, playback proceeds satisfactorily. In practice, however, available bandwidth in the communications network 106 is constantly changing as different devices connected to the communications network 106 perform independent tasks. To counter the variability of network conditions, adaptive streaming may be implemented where, for each title, multiple video streams having different fixed bit rates exist. As the network conditions vary, the streaming device 108 may switch between video streams according to the network conditions. For example, video data may be downloaded from video streams encoded to higher fixed bit rates when the network conditions are good, and, when the network conditions deteriorate, subsequent video data may be downloaded from video streams encoded to lower fixed bit rates. The bit rate of the audio stream is typically much lower than the bit rate of the video stream, so the audio stream is typically only encoded for a single fixed bit rate. Because the bit rate for a particular interval of a VBR encoded video stream is not fixed, adaptive streaming is best suited for use with fixed bit rate streams. FIG. 5A is a more detailed view of the streaming device 108 of FIG. 1, according to one embodiment of the invention. As shown, the streaming device 108 includes, without limitation, a central processing unit (CPU) 510, a graphics subsystem 512, an input/output (I/O) device interface 514, a network interface 518, an interconnect 520, and a memory subsystem 530. The streaming device 108 may also include a mass storage unit 516. The CPU 510 is configured to retrieve and execute programming instructions stored in the memory subsystem 530. Similarly, the CPU 510 is configured to store and retrieve application data residing in the memory subsystem 530. The interconnect 520 is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU 510, graphics subsystem 512, I/O devices interface 514, mass storage 516, network interface 518, and memory subsystem 530. The graphics subsystem 512 is configured to generate image frames of video data and transmit the frames of video data to display device 550. In one embodiment, the graphics subsystem 512 may be integrated into an integrated circuit, along with the CPU 510. The display device 550 may comprise any technically feasible means for generating an image for display. For example, the display device 550 may be fabricated using liquid crystal display (LCD) technology, cathode-ray technology, or light-emitting diode (LED) display technology (either organic or inorganic). An input/output (I/O) device interface 514 is configured to receive input data from user I/O devices 552 and transmit the input data to the CPU 510 via the interconnect 520. For example, user I/O devices 552 may comprise one or more buttons, a keyboard, and a mouse or other pointing device. The I/O device interface 514 also includes an audio output unit configured to generate an electrical audio output signal. User I/O devices 552 include a speaker configured to generate an acoustic output in response to the electrical audio output signal. In alternative embodiments, the display device 550 may include the speaker. A television is an example of a device known in the art that can display video frames and generate an acoustic output.
A mass storage unit 516, such as a hard disk drive or flash memory storage drive, is configured to store non-volatile data. A network interface 518 is configured to transmit and receive packets of data via the communications network 106. In one embodiment, the network interface 518 is configured to communicate using the well-known Ethernet standard. The network interface 518 is coupled to the CPU 510 via the interconnect 520. The memory subsystem 530 includes programming instructions and data that comprise an operating system 532, user interface 534, and playback application 536. The operating system 532 performs system management functions such as managing hardware devices including the network interface 518, mass storage unit 516, I/O device interface 514, and graphics subsystem 512. The operating system 532 also provides process and memory management models for the user interface 534 and the playback application 536. The user interface 534 provides a specific structure, such as a window and object metaphor, for user interaction with streaming device 108. Persons skilled in the art will recognize the various operating systems and user interfaces that are well-known in the art and suitable for incorporation into the streaming device 108. The playback application 536 is configured to retrieve digital media content, e.g., audio and video streams, from the streaming server 102 via the network interface 518 and play the digital media content through the graphics subsystem 512. The graphics subsystem 512 is configured to transmit a rendered video signal to the display device 550. In normal operation, the playback application 536 receives a request from a user to play a specific digital media content title. The playback application 536 then identifies the different encoded video streams associated with the requested digital media content title, wherein each encoded video stream is encoded to a different playback bit rate. A preview clip may be encoded separately from the requested title or may be indicated by an index into the video and audio streams encoded for the requested title. After the playback application 536 has located the encoded video streams associated with the requested title, the playback application 536 downloads sequence header indices associated with each encoded video stream associated with the requested title from the streaming server 102. As previously described herein, a sequence header index associated with an encoded video stream includes information related to the encoded sequence included in the digital media content file. In one embodiment, the playback application 536 begins downloading the digital media content file associated with the requested title comprising the encoded sequence encoded to the lowest playback bit rate to minimize startup time for playback. For the purposes of discussion only, it is assumed that the digital media content file is associated with the requested title and comprises the encoded sequence encoded to the lowest playback bit rate. The requested digital media content file is downloaded into the content buffer 543, configured to serve as a first-in, first-out queue. In one embodiment, each unit of downloaded data comprises a unit of video data or a unit of audio data. As units of video data associated with the requested digital media content file are downloaded to the streaming device 108, the units of video data are pushed into the content buffer 543.
Similarly, as units of audio data associated with the requested digital media content file are downloaded to the streaming device 108, the units of audio data are pushed into the content buffer 543. In one embodiment, the units of video data are stored in the video buffer 546 within the content buffer 543, and units of audio data are stored in the audio buffer 544, also within the content buffer 543. A video decoder 548 reads units of video data from the video buffer 546, and renders the units of video data into a sequence of video frames corresponding in duration to the fixed span of playback time. Reading a unit of video data from the video buffer 546 effectively de-queues the unit of video data from the video buffer 546 (and from the content buffer 543). When the video data is encrypted, the video decoder 548 decrypts the video data using the license provided by the license manager 220. The sequence of video frames is processed by the graphics subsystem 512 and transmitted to the display device 550. An audio decoder 542 reads units of audio data from the audio buffer 544, and processes the units of audio data into a sequence of audio samples, generally synchronized in time with the sequence of video frames. When the audio data is encrypted, the audio decoder 542 decrypts the audio data using the key provided by the license manager 220. In one embodiment, the sequence of audio samples is transmitted to the I/O device interface 514, which converts the sequence of audio samples into the electrical audio signal. The electrical audio signal is transmitted to the speaker within the user I/O devices 552, which, in response, generates an acoustic output. Given the bandwidth limitations of the communications network 106, the playback application 536 may download consecutive portions of video data from different constant bit rate encoded video streams based on available bandwidth. Other performance factors that may influence the specific encoded stream from which to download the portion of video data include the buffer size of the video buffer 546, the behavior of the end-user viewing the video content, the type of display being generated (high-definition, standard-definition, etc.), and the available lead time. These factors combined with the bandwidth limitations of the communications network 106 may be used to determine a specific encoded video stream from which to download each interval of the video data. The transition component 304 receives content playback information including the content title selection and a request for a license to play the protected version of the content title. Typically, a portion of a content title, such as a preview, may be played without acquiring the license, but the license is needed to play the full digital media content for the title. The transition component 304 determines which streams are retrieved, requests the license, and controls transitions between different streams. The sequence header indexes 538-1, 538-2, and 538-3 are each associated with a respective video or audio stream and are used by the transition component 304 to locate switch points defined by the SHI generator 218 within each stream. The transition component 304 may switch from playing a first audio and/or video stream at an interval boundary to playing a second audio and/or video stream.
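Returning to the buffering path described above, the sketch below stages downloaded units in FIFO audio and video queues. The class shape is an assumption; only the queue discipline and the split into audio and video buffers come from the text.

```python
# Hedged sketch of the content buffer 543 with its video buffer 546 and audio
# buffer 544: units are pushed as they download and de-queued as they decode.
from collections import deque

class ContentBuffer:
    def __init__(self):
        self.video = deque()  # units of video data
        self.audio = deque()  # units of audio data

    def push(self, unit, kind):
        (self.video if kind == "video" else self.audio).append(unit)

    def pop_video(self):
        """Reading a unit effectively de-queues it, as described above."""
        return self.video.popleft()

buf = ContentBuffer()
buf.push(b"GOP-0", "video")
buf.push(b"audio-0", "audio")
assert buf.pop_video() == b"GOP-0"
```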
FIG. 5B is a flow diagram 560 of method steps for playing the fast start streams 302 or the fast start streams 410 and transitioning to the protected streams 300 or 400, respectively, according to one embodiment of the invention. Although the method steps are described in conjunction with the systems of FIGS. 1, 2, and 5A, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention. At step 562, the playback application 536 receives a playback selection indicating the digital media content for which playback has been initiated. The playback selection may be for a preview clip of the digital media content or for playback of the protected digital media content. At step 564, the playback application 536 initiates retrieval of one or more intervals from the fast start streams 302 or 410. At step 566, the playback application 536 requests the license for the digital media content. At step 568, the playback application 536 initiates retrieval of one or more intervals from the protected streams 300 or 400. At step 570, the playback application 536 begins playing the intervals from the fast start streams 302 or 410 in sequence. At step 571, the playback application 536 determines whether the user intent is to quit playing the unprotected digital media content or to switch from playing the unprotected digital media content to playing the protected digital media content. The user intent may be determined by first presenting the user with an interface including "play" and "quit" buttons or symbols that represent "play" and "quit" functions. The user may choose not to provide a response, in which case the playback application 536 proceeds to step 573 from step 571. If, at step 571, the playback application 536 determines that the user intent is to quit, then at step 576 the playback application 536 stops playing the unprotected digital media content. If, at step 571, the user intent cannot be determined because no response has been received from the user, then at step 573 the playback application 536 determines whether the end of the fast start streams 302 or 410 is reached, i.e., whether all of the intervals in the fast start streams 302 or 410 have been played. If, at step 573, the end of the fast start streams 302 or 410 is not reached, then the playback application 536 returns to step 570 to continue playing the fast start streams 302 or 410. Otherwise, the playback application 536 returns to step 571. If, at step 571, the user intent is determined to be to play the protected digital media content, then at step 572 the playback application 536 determines whether the license for the digital media content has been acquired. If, at step 572, the playback application 536 determines that the license for the digital media content has not been acquired, then the playback application 536 waits until the license for the digital media content is acquired before proceeding to step 578. At step 578, the playback application 536 transitions (or switches) from playing the fast start streams 302 or 410 to playing the protected streams 300 or 400. Importantly, the playback application 536 transitions at an interval boundary when at least one interval of the protected streams 300 or 400 is available (received, decrypted, and decoded).
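Pulling these steps together, the bounded simulation below plays fast start intervals until the license is held and a protected interval is ready, then transitions at the next interval boundary. The readiness model, the interval indices, and the ten-interval cap are assumptions for the example.

```python
# Hedged simulation of steps 562-578: fast start playback with deferred
# license retrieval, transitioning once license and protected interval exist.

def run_playback(num_fast_start, license_ready_at, protected_ready_at):
    played, i = [], 0
    while i < 10:                               # bounded for the example
        license_held = i >= license_ready_at    # step 572 check
        protected_ok = i >= protected_ready_at  # received, decrypted, decoded
        if license_held and protected_ok:
            played.append(("protected", i))     # step 578: transition
            break
        if i < num_fast_start:
            played.append(("fast start", i))    # step 570
        else:
            played.append(("hold", i))          # wait on the last image frame
        i += 1
    return played

# License acquired during interval 2; first protected interval ready at 3:
seq = run_playback(num_fast_start=4, license_ready_at=2, protected_ready_at=3)
assert seq == [("fast start", 0), ("fast start", 1),
               ("fast start", 2), ("protected", 3)]
```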
In one embodiment, the playback application 536 may be configured to transition from playing the fast start streams 302 or 410 to playing the protected streams 300 or 400 after the last interval of the fast start streams 302 or 410 has been played, so that all of the intervals in the fast start streams 302 or 410 are played before the protected streams 300 or 400 are played. In one embodiment, the request for the license at step 566 and retrieval of the protected streams 300 or 400 at step 568 may be performed after step 571 in response to determining that the user intent is to play the protected digital media content. In order to select a specific encoded video stream from a set of fixed bit rate encoded video streams representing the same video data, the playback application 536 executing on the streaming device 108 may be configured to dynamically determine the encoding level (high, medium, or low bit rate) of the video stream for the next portion of the video data to be downloaded during playback of a different (previous) portion of the digital media content. One advantage of the disclosed technique is that the perceived delay between the time a user selects the protected digital media content and the time playback of the protected digital media content begins is reduced, because retrieval and playback of the fast start streams start before the license and the protected version of the digital media content are received. The fast start streams of audio and video data may be encoded at a lower bit rate than the protected version in order to quickly transfer the fast start streams from the content server to the playback device. The fast start streams may encode a preview clip of the protected content, and the user may initiate transfer of the protected digital media content while viewing the unprotected digital media content for the same title. Alternatively, the fast start streams may encode the beginning intervals of the protected content. In one embodiment of the invention, the streaming device 108 comprises an embedded computer platform such as a set top box. An alternative embodiment of the invention may be implemented as a program product that is downloaded to a memory within a computer system, for example as executable instructions embedded within an internet web site. In this embodiment, the streaming device 108 comprises the computer system. While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. For example, aspects of the present invention may be implemented in hardware or software or in a combination of hardware and software. One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the present invention. In view of the foregoing, the scope of the present invention is determined by the claims that follow. 15589328 netflix, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 05:26PM Mar 25th, 2022 05:26PM Netflix Consumer Services General Retailers
nasdaq:nflx Netflix Aug 25th, 2020 12:00AM Aug 7th, 2014 12:00AM https://www.uspto.gov?id=US10754830-20200825 Activity information schema discovery and schema change detection and notification Techniques for activity information schema discovery, schema change detection, and notification. In one embodiment, for example, a computer-implemented method for activity information schema discovery and schema change detection and notification comprises: analyzing a first set of related activity messages obtained during a first sample period; determining first schema counters for uniquely named properties identified in the first set of messages based on the analyzing of the first set of activity messages; after the first sample period, inferring a first schema from the first schema counters; analyzing a second set of related activity messages obtained during a second sample period; determining second schema counters for uniquely named properties identified in the second set of messages based on the analyzing of the second set of activity messages; after the second sample period, inferring a second schema from the second schema counters; comparing the first schema and the second schema for any differences. 10754830 1. A method for activity information schema discovery, comprising: at one or more computing devices with one or more processors and memory, and one or more programs stored in the memory that execute on the one or more processors: receiving, over a network and during a sample period, a first set of network packets that include a set of related activity messages generated by at least one software application coupled to the network; generating a computer data structure for representing a plurality of properties included in the set of related activity messages; iterating through the plurality of properties represented in the computer data structure to identify a uniquely named property in the set of related activity messages; determining a first plurality of schema counters for the uniquely named property identified in the set of related activity messages, wherein each of the first plurality of schema counters is associated with a different data type and indicates a number of instances, within the set of related activity messages, that a corresponding value of the uniquely named property is of the data type associated with the schema counter; after the sample period, inferring, by at least one processor, a schema implemented by the at least one software application from at least the first plurality of schema counters, wherein the schema specifies that the uniquely named property is of a first data type associated with a first schema counter in the plurality of schema counters that has a higher value relative to other schema counters in the plurality of schema counters; receiving, over the network and during a second sample period, a second set of network packets that include a second set of related activity messages generated by the at least one software application coupled to the network; and in response to receiving the second set of network packets, executing a binding rule associated with the schema to: determine that the uniquely named property in the second set of related activity messages is of a second data type different than the first data type specified in the schema, and electronically transmit a notification to a contact specified in the binding rule, wherein the notification indicates that the uniquely named property in the second set of related activity messages is of the
second data type. 2. The method of claim 1, further comprising: relating the activity messages together in the set based on one or more properties of the activity messages. 3. The method of claim 1, wherein the sample period corresponds to a period of time or a predetermined number of obtained activity messages. 4. The method of claim 1, further comprising: parsing at least one of the activity messages into a set of unordered properties; and determining the first plurality of schema counters for the uniquely named property identified in the activity message based on the set of unordered properties. 5. The method of claim 1, further comprising: determining the first plurality of schema counters for the uniquely named property identified in the set of related activity messages based on analyzing values of the uniquely named property identified in the set of related activity messages. 6. The method of claim 1, further comprising: after the sample period, inferring, for the uniquely named property, based on the first plurality of schema counters, at least one of the following: a data type of the uniquely named property, whether the uniquely named property is nullable, whether the uniquely named property is required, or one or more possible values for the uniquely named property. 7. A method for activity information schema discovery and schema change detection, comprising: at one or more computing devices with one or more processors and memory, and one or more programs stored in the memory that execute on the one or more processors: receiving, over a network and during a first sample period, a first set of network packets that include a set of related activity messages generated by at least one software application coupled to the network; generating a computer data structure for representing a plurality of properties included in the set of related activity messages; iterating through the plurality of properties represented in the computer data structure to identify a uniquely named property in the set of related activity messages; determining a first plurality of schema counters for the uniquely named property identified in the set of related activity messages, wherein each of the first plurality of schema counters is associated with a different data type and indicates a number of instances, within the set of related activity messages, that a corresponding value of the uniquely named property is of the data type associated with the schema counter; after the first sample period, inferring, by at least one processor, a schema implemented by the at least one software application from at least the first plurality of schema counters; receiving, over the network and during a second sample period, a second set of network packets that includes a second set of messages generated by at least one software application coupled to the network; identifying a second uniquely named property in the second set of messages; determining a second plurality of schema counters for the second uniquely named property identified in the second set of messages, wherein each of the second plurality of schema counters indicates a number of instances, within the second set of messages, that a corresponding value of the second uniquely named property is associated with a second data type; after the second sample period, inferring a second schema from at least the second plurality of schema counters; comparing the first schema and the second schema to identify a first difference between the first schema and the second schema; and
executing a binding rule associated with the schema to electronically transmit a notification to a contact specified in the binding rule, wherein the notification indicates the first difference. 8. The method of claim 7, further comprising: relating the activity messages in the first set together in the first set based on a first set of one or more designated properties of the first set of related activity messages; relating the activity messages in the second set together in the second set based on a second set of one or more designated properties of the second set of messages; and wherein the first set of one or more designated properties is the same as the second set of one or more designated properties. 9. The method of claim 7, wherein the first set of related activity messages obtained during the first sample period is obtained before the second set of activity messages is obtained. 10. One or more non-transitory computer-readable media storing instructions which, when executed by one or more computing devices, cause the one or more computing devices to perform the steps of: receiving, over a network and during a sample period, one or more network packets that include a set of messages generated by at least one software application executing within the network; generating a computer data structure for representing a plurality of properties included in the set of messages; iterating through the plurality of properties represented in the computer data structure to identify a uniquely named property in the set of messages; determining a first plurality of schema counters for the uniquely named property identified in the set of messages, wherein each of the plurality of schema counters is associated with a different data type and indicates a number of instances, within the set of messages, that a corresponding value of the uniquely named property is of the data type associated with the schema counter; after the sample period, inferring, by at least one processor, a schema implemented by the at least one software application from at least the first plurality of schema counters, wherein the schema specifies that the uniquely named property is of a first data type associated with a first schema counter in the plurality of schema counters that has a higher value relative to other schema counters in the plurality of schema counters; receiving, over the network and during a second sample period, a second set of network packets that include a second set of related activity messages generated by the at least one software application coupled to the network; and in response to receiving the second set of network packets, executing a binding rule associated with the schema to: determine that the uniquely named property in the second set of related activity messages is of a second data type different than the first data type specified in the schema, and electronically transmit a notification to a contact specified in the binding rule, wherein the notification indicates that the uniquely named property in the second set of related activity messages is of the second data type. 11. The media of claim 10, further comprising the steps of: relating the activity messages together in the set based on one or more properties of the activity messages. 12. The media of claim 10, wherein the sample period corresponds to a period of time or a predetermined number of obtained activity messages. 13.
The media of claim 10, further comprising the steps of: parsing at least one of the activity messages into a set of unordered properties; and determining the plurality of schema counters for the uniquely named property identified in the activity message based on the set of unordered properties. 14. The media of claim 10, further comprising the steps of: determining the plurality of schema counters for the uniquely named property identified in the set of messages based on analyzing values of the uniquely named property identified in the set of messages. 15. The media of claim 10, further comprising the steps of: after the sample period, inferring, for the uniquely named property, based on the first plurality of schema counters, at least one of the following: a data type of the uniquely named property, whether the uniquely named property is nullable, whether the uniquely named property is required, or one or more possible values for the uniquely named property. 16. One or more non-transitory computer-readable media storing instructions which, when executed by one or more computing devices, cause performance of a method for activity information schema change detection comprising the steps of: receiving, over a network and during a first sample period, a first set of network packets that includes a first set of messages generated by at least one software application coupled to the network; generating a computer data structure for representing a plurality of properties included in the first set of messages; iterating through the plurality of properties represented in the computer data structure to identify a first uniquely named property in the first set of messages; determining a first plurality of schema counters for the first uniquely named property identified in the first set of messages, wherein each of the plurality of schema counters is associated with a different data type and indicates a number of instances, within the set of messages, that a corresponding value of the uniquely named property is of the data type associated with the schema counter; after the first sample period, inferring, by at least one processor, a schema from at least the plurality of schema counters; receiving, over the network and during a second sample period, a second set of network packets that includes a second set of messages generated by at least one software application coupled to the network; identifying a second uniquely named property in the second set of messages; determining a second plurality of schema counters for the second uniquely named property identified in the second set of messages, wherein each of the second plurality of schema counters indicates a number of instances, within the second set of messages, that a corresponding value of the second uniquely named property is associated with a second data type; after the second sample period, inferring a second schema from at least the second plurality of schema counters; comparing the first schema and the second schema to identify a first difference between the first schema and the second schema; and executing a binding rule associated with the schema to automatically generate an electronic notification for transmission to a contact specified in the binding rule, wherein the electronic notification indicates the first difference. 17. 
The media of claim 16, the method further comprising the steps of: relating the activity messages in the first set together in the first set based on a first set of one or more designated properties of the first set of messages; relating the messages in the second set together in the second set based on a second set of one or more designated properties of the second set of activity messages; and wherein the first set of one or more designated properties is the same as the second set of one or more designated properties. 18. The media of claim 16, wherein the first set of messages obtained during the first sample period is obtained before the second set of messages is obtained. 18 FIELD OF THE INVENTION The present invention relates to services that use activity information to provide and support the services. BACKGROUND Many online services generate large amounts of “activity” information. Typically, activity information includes “user activity” information and “system activity” information. User activity information includes information reflecting users' online interaction with the service such as, for example, logins, page views, clicks, “likes”, sharing, recommendations, comments, search queries, etc. System activity information includes system operational metrics collected for servers and/or virtualized machine instances supporting the online service such as, for example, call stack traces, error messages, faults, exceptions, CPU utilization, memory usage, network throughput metrics, disk utilization, etc. Traditionally, online services have leveraged activity information as a component of service analytics to track user engagement, system utilization, and other usage and performance of the service. Often, the analytics involve batch processing activity information. More recently, online services use activity information in real-time directly in end-user features. For example, an Internet search engine service may use activity information to provide more relevant search results, an online shopping service may use activity information to provide more relevant product recommendations, an online advertising service may use activity information to provide more targeted advertisements and promotions, and a social networking service may use activity information to provide a newsfeed feature. Because of the large numbers of users, many online services generate large volumes of activity information, sometimes within a short period of time. For example, the Netflix service (available from Netflix, Inc. of Los Gatos, Calif.) which, among other services, provides an Internet streaming media service to millions of subscribers, has been known to generate activity information for up to 80 billion online user events per day. These user events include subscriber membership changes, subscription changes, streaming media playback events, user preference changes, among others. In order to reliably collect large amounts of activity information from applications of an online service that generate them (producer applications) and provide them in a timely manner to applications of the online service that use the activity information (consumer applications), many online services implement a data pipeline to reliably and efficiently “move” the activity information generated by the producer applications to the consumer applications. In this description, the term “application” is used to refer to computer-implemented functionality of an online service.
Typically, an application is implemented in software executing as one or more computer processes on one or more computing devices (e.g., one or more servers in a data center environment). Thus, an online service may be viewed as a collection of one or more applications, each of which may provide a different portion of the functionality and support for the online service, but which collectively provide the overall functionality and support for the online service. For example, some applications of an online service may provide end-user functionality and other applications may provide site performance and usage analytics to service operators. As typically implemented, a data pipeline is a collection of computer systems designed to facilitate message passing and brokering of large-scale amounts of activity information from producer applications to consumer applications for batch or real-time processing. In some cases, the data pipeline includes a distributed commit log for durably storing recent activity information obtained from producer applications and also includes a messaging brokering system (e.g., a queuing system or a publish-subscription system) for providing stored activity information to consumer applications in a timely manner. Often, different pieces of activity information that pass through a data pipeline from producer applications to consumer applications have different data formats. For example, one producer application may generate activity information in the form of log lines and another producer application may generate activity information in the form of highly-structured markup-language (e.g., XML) documents. On a more fine-grained level, values in a piece of activity information can have different data formats. For example, one producer application may generate activity information in which calendar date values are formatted using a two-character sequence to designate the calendar year (e.g., “14”) and another producer application may generate activity information in which a four-character sequence is used (e.g., “2014”). More generally, activity information generated by producer applications may not conform to a single or a small number of known data formats and different pieces of activity information can have different formats. Further, the data format of activity information generated by a producer application may change over time. Moreover, it may be a design goal of the data pipeline to allow producer applications to generate activity information in whatever data formats the human software developers of the producer applications deem appropriate, as opposed to imposing or prescribing data formats that activity information generated by the producer applications must adhere to. As different pieces of activity information can have different data formats, which can change over time, a challenge in implementing a data pipeline is to provide activity information to consumer applications in a data format that is expected. Historically, this challenge has been solved by ad-hoc communications between software developers of producer and consumer applications. For example, a software developer may design or configure the producer application to generate activity information in a particular custom log line format in which calendar date values use a four-character sequence to represent the calendar year.
The producer application software developer may communicate this format to software developers of consumer applications as the format they should expect for activity information received from the producer application. The software developers of the consumer applications may then design or configure the consumer applications to expect this data format. If the producer application is subsequently re-designed or re-configured to generate activity information in a different format (e.g., to use a two-character sequence to represent the calendar year), the software developer must remember to communicate the format change to the software developers of the consumer applications. In worst cases, an uncommunicated format change causes a consumer application to fail or otherwise not provide expected functionality because the consumer application is not designed or configured to expect activity information from the producer application in the new data format. Another problem with uncommunicated data format changes to activity information is that such changes can “break” rules used by computer systems in the data pipeline to route activity information obtained from producer applications to the consumer applications. For example, an online service may include an application that is configured to automatically send an e-mail to new subscribers greeting them to the service. To do so, a “binding rule”, or other pre-arranged criteria for routing activity information accepted from producer applications toward consumer applications, may be registered with an activity information routing or messaging brokering system in the data pipeline. The binding rule may express that the greeting application would like to receive certain activity information generated by a subscriber management application when a new subscriber enrolls in the service. When the routing or message brokering system obtains activity information from the subscriber management application satisfying the binding rule, it will provide the activity information to the greeting application. For example, the routing or messaging brokering system may place the activity information in a message queue from which the greeting application reads the activity information. However, if the subscriber management application is re-designed or re-configured to generate activity information such that the generated activity information no longer satisfies the binding rule, then the greeting application may no longer be notified when a new subscriber is enrolled. As a result, the new subscriber may not receive the e-mail welcoming them to the online service. Even where communication between software developers of producer and consumer applications with respect to activity information data format changes is consistent, such communication may be considered inefficient or cumbersome. The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a flowchart illustrating steps performed by a service that uses activity information to provide and support the service. FIG. 2 is a block diagram of a computer system upon which embodiments of the invention may be implemented.
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. General Overview Techniques are described herein for automatically discovering schemas in activity information. For example, a schema may be discovered for activity information generated by a first producer application of an online service. At the same time, another schema may be discovered for activity information generated by a second different producer application of the online service. The techniques further include automatically detecting changes to the schema and automatically sending notifications when the schema change is detected. For example, a schema may be discovered for activity information generated by a particular producer application of the online service. After discovering the schema, a change to the schema may be detected and an e-mail message or text message sent to a person responsible for the producer application, or a person responsible for a consumer application that subscribes to the activity information generated by the particular producer application. In this way, the person responsible is automatically notified when a change to activity information is detected that could affect a binding rule that the consumer application relies on. Techniques for discovering schemas in activity information and detecting schema changes are described in greater detail below. Activity Messages A piece of activity information generated by an application of the online service is referred to herein as an “activity message”. An activity message can be structured or semi-structured data. In many cases, an activity message is not structured (i.e., is semi-structured). Structured activity message data includes data that conforms to a known data model that determines the manner in which the data must be formatted. For example, data that conforms to a relational database data model is sometimes considered to be structured data. Semi-structured activity message data is data that may not conform to a data model but that nonetheless contains tags, whitespace characters, or other markers to separate and designate semantic elements and, optionally, enforce hierarchies of semantic elements within the data. For example, data formatted in eXtensible Markup Language (XML) or JavaScript Object Notation (JSON) is sometimes considered semi-structured data. An activity message is typically generated in response to the occurrence of some event within the online service. Accordingly, an activity message may also be referred to herein as an “activity event”. Typically, the event is a user event or an operational event. For example, when a user views a web page of an online service (an event), a web server may record the network address of the user's device and the web page viewed in a log file (an activity message). As another example, the CPU utilization of a database server over a period of time (an event) may be recorded by a monitoring system as a system metric (an activity message).
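As a concrete illustration of a semi-structured activity message, consider the following invented JSON example; the property names and values are hypothetical, not taken from the patent.

    import json

    raw_message = '''
    {
      "routing-key": "playback.start",
      "application-id": "playback-service",
      "client-type": "ANDROID",
      "timestamp": "2014-08-07T12:34:56Z",
      "details": {"title-id": 12345, "bitrate-kbps": 1500}
    }
    '''

    # Parsing yields a collection of name/value pairs (a dict here), with
    # the nested "details" object forming a hierarchy of properties.
    message = json.loads(raw_message)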
For an online service with a large user base in which a large number of events occur (e.g., up to billions of events per day), the online service may generate millions or more of activity messages per second. The generated activity messages may have many different data formats. The techniques described herein can be implemented within an online service to discover schemas in the various activity messages generated by the online service and to detect changes to the schemas over time. Producer and Consumer Applications An application of the online service that generates activity messages can be a producer application, a consumer application, or both a producer application and a consumer application. An application that generates activity messages is referred to herein as a producer application. An application that uses activity messages to provide or support the online service is referred to herein as a consumer application. For an application that is both a producer application and a consumer application, when referring to the application in the context of the application generating activity messages, the application is referred to herein as a producer application, and when referring to the application in the context of the application using activity messages, the application is referred to herein as a consumer application. Sample Period According to an embodiment, the online service obtains activity messages generated by one or more producer applications. Activity messages generated by just one producer application, or more than one producer application, may be obtained. Although not a requirement of the present invention, the temporal order of obtaining activity messages may roughly correspond to the temporal order in which the activity messages were generated by the producer applications. However, no particular manner of obtaining activity messages is required and the manner in which activity messages are obtained may vary according to the requirements of the particular implementation at hand. For example, activity messages may be obtained over a data network according to a suitable networking protocol from a distributed commit log, a messaging queuing system, a message brokering system, a publish-subscription system, or other system or component of a data pipeline system for moving activity messages from producer applications to consumer applications. In addition or alternatively, activity messages may be obtained directly from the producer applications that generate them. For example, activity messages may be obtained from the producer applications in network messages/packets sent over a data network by the producer applications. In addition or alternatively, activity messages may be read from file system files (e.g., log files) containing the activity messages and generated by the producer applications. In addition or alternatively, activity messages may be read from a database storing the activity messages. According to an embodiment, activity messages are obtained during a “sample period”. The sample period may correspond to a certain period of time or a certain number of obtained activity messages. Activity messages obtained during a sample period are referred to hereinafter as “activity message samples”, or just “samples”. A single activity message obtained during a sample period is referred to hereinafter as an “activity message sample”, or just “sample”. As described hereinafter, schemas are inferred from sets of related samples (“sample sets”).
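A minimal Python sketch of such a sample period, assuming it can be bounded by a wall-clock duration, a message count, or both, might look as follows; all names are illustrative.

    import time

    class SamplePeriod:
        """Expires after a duration, a number of obtained messages, or
        whichever of the two configured bounds is reached first."""

        def __init__(self, max_seconds=None, max_messages=None):
            self.max_seconds = max_seconds
            self.max_messages = max_messages
            self.started = time.monotonic()
            self.obtained = 0

        def record(self):
            self.obtained += 1

        def expired(self):
            by_time = (self.max_seconds is not None
                       and time.monotonic() - self.started >= self.max_seconds)
            by_count = (self.max_messages is not None
                        and self.obtained >= self.max_messages)
            return by_time or by_count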
Each sample set includes samples obtained during a sample period. Multiple sample sets may have the same sample period. For example, the sample period for one sample set may be N number of samples and the sample period for another sample set may also be N number of samples. Different sample sets can also have different sample periods. For example, the sample period for one sample set may be one hour and the sample period for another sample set may be ten minutes. Two or more different sample sets may contain mutually exclusive sets of samples. Alternatively, the same sample can belong to two or more sample sets. Sample Sets In one embodiment, as described in greater detail below, a schema is inferred from a “sample set” of related samples. A sample set is a set of related samples obtained during a sample period. For example, the samples generated by a particular producer application obtained within the past hour may be a sample set. As another example, a most recently obtained number of samples generated by a particular producer application may be a sample set. In one embodiment, samples obtained during a sample period are related together in a sample set based on intrinsic properties of the samples. Properties of samples may be included in the samples before they are obtained. For example, properties may be included in samples when they are generated by a producer application, or as a result of being processed when passing through a data pipeline. In one embodiment, a property of a sample is a name-value pair extracted from the sample based on an analysis of the sample. For example, the analysis may involve parsing text or character string data of the sample. In one embodiment, the name of an extracted property is represented in a computer as character string data, but may be represented by another computer data type such as, for example, a number. In one embodiment, the value of a property is represented in a computer by a character string, a number, an array or other ordered list of values, a Boolean (e.g., TRUE/FALSE), no value (e.g., NULL), or another sample. Thus, samples may be nested in other samples to form a hierarchy of samples. A sample in a parsed form, including any samples nested therein, may be represented (stored) in a computer memory as a collection of name/value pairs (i.e., properties). For example, the collection may be implemented in a computer as an object, a record, a struct, a dictionary, a hash table, a keyed list, an associative array, or other suitable data structure. According to one embodiment, samples are related together in the same sample set based on a “routing-key” property of the samples. For example, samples that have the same “routing-key” property value may be related together in the same sample set. In one embodiment, the value of the “routing-key” property of a sample is used by a message brokering application or other middleware application such as, for example, the RabbitMQ open source messaging software. In one embodiment, the name of the “routing-key” property in a sample is “routing-key”. However, a different property name may be used according to the requirements of the particular implementation at hand. In addition to or instead of the routing key property, samples can be related together in the same sample set based on the “source” of the samples as reflected by one or more properties of the samples.
In one embodiment, the source of a sample is one of, or a combination of, the following, each of which may be specified by one or more properties of the sample: the producer application that generated the sample, or the type of client operating system interacting with the producer application that caused the producer application to generate the sample. For example, a particular property of the sample named “application-id” may have a value that specifies an identifier of the producer application that generated the sample. As another example, another particular property of the sample named “client-type” may have a value that specifies the type of operating system installed on the client that caused the producer application to generate the sample. For example, the value of the “client-type” property may indicate a conventional operating system such as, for example, a version of MICROSOFT WINDOWS, APPLE IOS, or ANDROID. In one embodiment, the name of the “application-id” property in a sample is “application-id”. However, another name may be used according to the requirements of the particular implementation at hand. In one embodiment, the name of the “client-type” property in a sample is “client-type”. However, another name may be used according to the requirements of the particular implementation at hand. While in some embodiments, samples are related together in the same sample set based on one or more of the “routing-key”, “application-id”, and “client-type” properties of the samples, samples are related together in the same sample set based on other properties of the samples in other embodiments. Generally, samples may be related together in the same sample set based on the values of one or more designated properties of the samples. Typically, such designated properties will be ones that occur in most, if not all, of the samples. For example, samples can be related together in sample sets based on one or more properties of the samples that conform to an Advanced Message Queuing Protocol (AMQP) model. For example, samples can be related together in sample sets based on one or more of a “Content type” property, a “Content encoding” property, a “Routing key” property, a “Delivery mode” property, a “Message priority” property, a “Message publishing timestamp” property, an “Expiration period” property, or a “Publisher application id” property of the samples. In one embodiment, relating samples together in the same sample set includes detecting samples that have the same or similar values for the same set of one or more designated properties. Similarity can be syntactic or semantic similarity according to the requirements of the particular implementation at hand. For example, two or more samples obtained during a sample period that have the same values for the “routing-key” property may be related together in the sample set. As another example, two or more samples obtained during a sample period that have the same values for the “application-id” and “client-type” properties may be related together in another sample set.
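A hedged sketch of relating samples into sample sets keyed by designated property values (here “application-id” and “client-type”, following the examples above; everything else is invented) might look like this:

    from collections import defaultdict

    def sample_set_key(sample, designated_properties):
        # Samples with equal values for every designated property are
        # related together in the same sample set.
        return tuple(sample.get(name) for name in designated_properties)

    obtained_samples = []  # parsed messages obtained during the sample period

    sample_sets = defaultdict(list)
    for sample in obtained_samples:
        key = sample_set_key(sample, ("application-id", "client-type"))
        sample_sets[key].append(sample)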
Schema Counters According to an embodiment, information used to infer a schema is collected during a sample period based on the samples related together in a sample set during the sample period. More specifically, a set of one or more “schema counters” is maintained for each uniquely named property identified in the samples in the sample set. The uniquely named properties may be identified, and the associated schema counters for the uniquely named properties updated, as or just after the samples in the sample set are obtained. In this case, once uniquely named properties in an obtained sample have been identified and the associated schema counters for the uniquely named properties identified in the sample updated, the sample may be discarded. The sample need not be persisted to disk or other non-volatile data storage medium. This allows efficient processing of a large volume of samples by avoiding persisting the obtained samples to a non-volatile data storage device (e.g., a hard disk), which typically has higher access latency and lower throughput than volatile data storage devices (e.g., RAM). For example, schemas can be inferred from millions of samples or more obtained per second. In an embodiment, to identify uniquely named properties in an obtained sample and to update associated schema counters, the sample is parsed to identify properties in the sample and to produce a data structure representation of the properties of the sample. The data structure may be any computer data structure suitable for storing and representing the properties in a computer memory. For example, the data structure can be an object, a record, a struct, a dictionary, a hash table, a keyed list, an associative array, or other data structure suitable for storing and representing an unordered set of properties where the value of a property can be a string value, a number value, an unordered set of properties, an array or other ordered list of values, a Boolean value, or no value (e.g., NULL). The parsing technique used to produce a data structure representation of the properties of an obtained sample may vary depending on the data format of the obtained sample. For example, if the data format of the obtained sample is eXtensible Markup Language (XML), then a parsing technique for parsing XML syntax may be used. Alternatively, as another example, if the data format of the obtained sample is JavaScript Object Notation (JSON), then a parsing technique for parsing JSON syntax may be used. Generally, any parsing technique suitable for identifying properties in an obtained sample and representing the identified properties in a computer as a data structure that facilitates iterating over the identified properties may be used. Once the data structure representation of the identified properties of the sample is produced, the uniquely named properties of the sample are identified by iterating over the properties in the data structure, including all nested properties in the data structure. For each uniquely named property identified in the data structure, the following may be determined in addition to the value of the property itself: the primary data type of the property value, and the sub-data type of the property value. The primary type of the property value may be one of a character string data type, a number data type, a data type for an unordered set of properties, a data type for an ordered list of values, a Boolean data type, or a no value data type (e.g., NULL). The primary type of the property value may be determined by performing a type detection operation on the property value. For example, the type detection operation may be a standard library function that accepts a value as input and returns a data type designation as output.
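A minimal sketch of such a type detection operation for parsed JSON-like samples might look as follows in Python; the mapping of Python types to the primary data types above is an assumption of this example.

    def primary_type(value):
        # bool must be tested before number: in Python, bool is a subtype of int.
        if value is None:
            return "null"
        if isinstance(value, bool):
            return "boolean"
        if isinstance(value, (int, float)):
            return "number"
        if isinstance(value, str):
            return "string"
        if isinstance(value, dict):
            return "object"   # an unordered set of properties
        if isinstance(value, list):
            return "array"    # an ordered list of values
        return "UNKNOWN"      # primary data type could not be detected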
Examples of a data type for an unordered set of properties include an object, a record, a struct, a dictionary, a hash table, a keyed list, and an associative array. Examples of a data type for an ordered list of values include an array, a vector, and a list. In some cases where the primary data type of the property value cannot be determined, the primary data type is UNKNOWN or other information indicating that the primary data type of the property value could not be detected. Detection of the sub-data type of the property value may be dependent on its primary type. In an embodiment, if the primary type of the property value is a character string data type, then the sub-data type of the property value is detected by pattern matching. For example, one or more regular expressions representing character string patterns of one or more sub-data types respectively may be applied to the property value. If the character string property value matches (satisfies) one of the regular expressions, then the property value may be considered to be of the corresponding sub-data type. For example, a regular expression may be applied to the property value to determine whether the property value is in the form of a URL as defined in Request for Comments (RFC) standards document 1738. As another example, a regular expression may be applied to the property value to determine whether the property value is in the form of an Internet mail address as described in RFC standards document 5322. Other types of character string patterns may be detected using regular expressions as one skilled in the art will understand. For example, a regular expression may be applied to the property value to determine whether the property value is in the form of a calendar date or timestamp. In some cases where the primary data type of the property value does not have any sub-data types, the sub-data type of the property value is NOT APPLICABLE or other information indicating that the property value does not have a sub-data type. In some cases where the primary data type of the property value has sub-data types but none of the sub-data types were detected for the property value, the sub-data type of the property value is UNKNOWN or other information indicating that the sub-data type of the property value could not be detected. In an embodiment, if the primary type of the property is a number data type, then the sub-data type of the property is detected by applying a type detection operation on the property value that determines the type of number. The type detection operation may be implemented by a standard library, for example. The type detection operation may specify whether the property value is an integer or a floating point number, for example. In one embodiment, for a sample that is nested within another sample, a property of the nested sample is made unique by combining the name of the property in the nested sample with the name of the property in the other sample having the nested sample as its value. This may be done for arbitrary levels of sample nesting to generate a unique property name for nested properties (i.e., a property of a sample that is nested in another sample, and which itself may be nested in another sample, and so on). After the primary data type, optionally the sub-data type, and the value of a uniquely named property in a sample have been identified, a set of schema counters maintained for the uniquely named property and the sample set to which the sample belongs is updated.
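The following illustrative Python helpers sketch both the regex-based sub-data type detection and the nested-property name combination described above. The simplified patterns and the “.” name separator are assumptions; real RFC 1738 (URL) and RFC 5322 (mail address) validation is considerably more involved.

    import re

    STRING_SUB_TYPES = [
        ("url",   re.compile(r"^https?://\S+$")),
        ("email", re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")),
        ("date",  re.compile(r"^\d{4}-\d{2}-\d{2}")),
    ]

    def string_sub_type(value):
        for name, pattern in STRING_SUB_TYPES:
            if pattern.match(value):
                return name
        return "UNKNOWN"

    def number_sub_type(value):
        return "integer" if isinstance(value, int) else "floating point"

    def iter_properties(sample, prefix=""):
        # Yield (unique name, value) pairs, combining each nested property
        # name with its parent property's name, for arbitrary nesting depth.
        for name, value in sample.items():
            unique_name = prefix + "." + name if prefix else name
            yield unique_name, value
            if isinstance(value, dict):
                yield from iter_properties(value, unique_name)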
In one embodiment, the set of schema counters maintained for each uniquely named property identified from obtained samples that belong to a sample set includes one or more of: Primary Data Types List: A list of one or more primary types of the property that have been identified in the obtained samples. And for each primary type, a count of how many times the property has been identified in the obtained samples with that primary type. The Primary Data Types List may be represented (stored) in a computer memory in a data structure suitable for representing the information contained in the Primary Data Types List. For example, the data structure can be one suitable for representing an unordered set of properties in which the names of the properties represent the names of the primary data types in the list and the values of the properties represent the count of how many times the uniquely named property has been identified in the obtained samples with the corresponding primary type. Sub-Data Types List: For each primary data type in the Primary Data Types List, a list of one or more sub-data types of the property that have been identified in the obtained samples. And for each sub-data type in the list, a count of how many times the property has been identified in the obtained samples with that sub-data type. The Sub-Data Types List may be represented (stored) in a computer memory in a data structure suitable for representing the information contained in the Sub-Data Types List. For example, the data structure can be one suitable for representing an unordered set of properties in which the names of the properties represent the names of the sub-data types in the list and the values of the properties represent the count of how many times the uniquely named property has been identified in the obtained samples with the corresponding sub-data type. Unique Values List: A list of one or more unique values of the property that have been identified in the obtained samples. And for each value in the list, a count of how many times the property has been identified in the obtained samples with that value. The Unique Values List may be represented (stored) in a computer memory in a data structure suitable for representing the information contained in the Unique Values List. For example, the data structure can be one suitable for representing an unordered set of properties in which the keys of the properties are the unique values in the list and the values of the properties represent the count of how many times the uniquely named property has been identified in the obtained samples with the corresponding unique value. Once the sample period has expired, schema counters have been collected for uniquely named properties identified in samples obtained during the sample period and belonging to the sample set. Thus, the schema counters are collected from the samples belonging to the sample set. Once the sample period has expired, a schema is inferred from the collected schema counters. In an embodiment, schema counters for a sample set are represented (stored) in a computer memory as a data structure representing an unordered set of properties. For example, the data structure may be an object, a record, a struct, a dictionary, a hash table, a keyed list, or an associative array. The name of each property in the set corresponds to a uniquely named property identified during the sample period in the samples that belong to the sample set. The value of each property includes the Primary Data Types List, the Sub-Data Types List, and the Unique Values List for the corresponding uniquely named property.
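A hedged sketch of the per-property schema counters follows, using counting dictionaries for the Primary Data Types List, Sub-Data Types Lists, and Unique Values List; the structure and helper names are illustrative, not the patent's own.

    from collections import Counter, defaultdict

    def new_schema_counters():
        return {
            "primary_types": Counter(),         # Primary Data Types List
            "sub_types": defaultdict(Counter),  # Sub-Data Types List, keyed by primary type
            "unique_values": Counter(),         # Unique Values List
            "occurrences": 0,                   # samples in which the property appeared
        }

    def update_schema_counters(counters, primary, sub, value):
        counters["primary_types"][primary] += 1
        if sub is not None:
            counters["sub_types"][primary][sub] += 1
        # repr() gives a hashable, countable form; objects and arrays would
        # need a canonical encoding in a fuller implementation.
        counters["unique_values"][repr(value)] += 1
        counters["occurrences"] += 1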
Schema Inference According to an embodiment, a schema is inferred from the schema counters collected from samples belonging to a sample set over a sample period. In an embodiment, the following schema information is inferred for each uniquely named property of the schema counters: a primary data type for the property; a sub-data type for the property; whether the value for the property can contain no value (i.e., whether the property is nullable); whether the property is required; and a list of possible values for the property. In an embodiment, if the Primary Data Types List for a property contains more than one primary data type, then the primary data type for the property is determined as the primary data type in the Primary Data Types List with the highest associated count value. That is, the primary data type most often identified for the property in the samples obtained for the sample set during the sample period. Otherwise, the primary data type is determined as the one primary data type in the Primary Data Types List for the property. In an embodiment, if the Sub-Data Types List for a property contains more than one sub-data type, then the sub-data type for the property is determined as the sub-data type in the Sub-Data Types List with the highest associated count value. That is, the sub-data type most often identified for the property in the samples obtained for the sample set during the sample period. Otherwise, the sub-data type is determined as the one sub-data type in the Sub-Data Types List for the property. In an embodiment, the property is determined to be nullable if a NULL or a no-value exists in the Unique Values List for the property. If one does not exist, then the property is determined not to be nullable. In an embodiment, the property is determined to be required if the property was identified in every sample belonging to the sample set during the sample period. Otherwise, the property is determined not to be required. In an embodiment, if the Unique Values List for a property contains less than a predetermined threshold number of values, the set of possible values for the property is determined as the Unique Values List for the property. The threshold number may be set based on the number of samples. For example, the threshold number may be set larger or smaller depending on the number of samples in the sample set. In an embodiment, the threshold number is set based on a percentage of the number of samples in the sample set. For example, if the number of values in the Unique Values List for a property is less than 50% of the total number of samples in the sample set, this may indicate that the values for the property are not random or unique per sample. On the other hand, if the number of values in the Unique Values List for the property is close to 100% of the total number of samples in the sample set, this indicates that the values for the property are random or unique per sample. In this case, the set of possible values for the property is determined as “random”, “unique”, or some other value indicating that the values for the property are random or unique.
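Following the rules above, inference for a single property from its counters might be sketched as follows; the 50% cutoff is the example threshold from the text, and the helper and field names continue the assumptions of the earlier sketches.

    def infer_property_schema(counters, total_samples):
        primary = counters["primary_types"].most_common(1)[0][0]
        sub_counts = counters["sub_types"].get(primary)
        sub = sub_counts.most_common(1)[0][0] if sub_counts else "NOT APPLICABLE"
        # repr(None) == "None", so a no-value seen during the sample period
        # shows up in the Unique Values List under that key.
        nullable = "None" in counters["unique_values"]
        required = counters["occurrences"] == total_samples
        uniques = list(counters["unique_values"])
        # Example percentage threshold from the text: below 50% of the sample
        # count, keep the value list; near 100%, treat values as random/unique.
        possible = uniques if len(uniques) < 0.5 * total_samples else "random"
        return {"primary_type": primary, "sub_type": sub, "nullable": nullable,
                "required": required, "possible_values": possible}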
A schema inferred from schema counters collected from samples belonging to a sample set over a sample period is stored in a schema repository or other database. The stored schema can include the information inferred for each uniquely named property of the schema counters including, for each uniquely named property of the schema counters, information about the determined primary type of the property, the determined sub-data type of the property, whether the property is nullable, whether the property is required, and the list of possible values for the property. The information stored in the schema repository or database for a schema may be associated in the schema repository or database with an identifier of the schema. In an embodiment, the identifier is composed of the values of the properties used to relate obtained samples together in the sample set for which the schema was inferred. For example, the identifier can be a combination of one or more “routing-key”, “application-id”, or “client-type” property values identified in the samples that belong to the sample set. By storing schema information in a schema repository or database, the schemas can be retrieved for later inspection. For example, persons responsible for consumer applications within the online service can query the repository or database to discover what activity information is being produced by the producer applications and to formulate binding rules for obtaining specific activity information of interest. For example, the schema for activity information generated by a particular producer application can be retrieved for inspection by querying the repository or database with the “application-id” of the producer application. Schema Changes Once an initial schema has been inferred from a sample set, another schema can be inferred from a subsequent sample set. Both schemas may be associated with the same schema identifier in a schema repository or database, and samples may be related together in the two sample sets based on the samples having the same property values. For example, samples in the initial sample set and samples in the subsequent sample set may all have the same value for the “routing-key” property or the “application-id” property. After the two schemas have been inferred, the two schemas can be compared to each other to determine whether the schemas differ. The schema change detection process can be performed in a continuous or on-going manner. For example, sample sets can continually be collected over sample periods, schemas continually inferred from the sample sets, and schemas inferred from later collected sample sets compared to schemas inferred from prior collected sample sets to detect changes to schemas over time. In an embodiment, comparing two schemas together to detect changes involves comparing the set of unique property names in one of the schemas to the set of unique property names in the other of the schemas. In addition, comparing the two schemas involves comparing schema information for each common property between the two schemas. In an embodiment, a change between two compared schemas is detected when any of the following is detected by the comparison: The sets of unique property names of the two schemas differ. The sets differ if one set contains a unique property name that is not in the other set, or For any common property name in both of the sets, any of the following schema information is different between the two common properties: The primary type is different between the two common properties, The sub-type is different between the two common properties, The common property is required by one schema but not required by the other schema.
the common property is nullable according to one schema but not nullable according to the other schema, or the set of possible values between the two common properties is different. In an embodiment, the set of possible values between the two common properties is detected as different if one set of possible values contains a value that is not contained in the other set. When comparing values in this regard, an exact match can be required to detect two values as equal. Alternatively, a fuzzy matching algorithm can be applied to two values to determine whether they are “equal”. When a schema change is detected, persons associated with binding rules can be notified of the change. Binding Rules A binding rule is pre-arranged criteria used by a publish-subscription system, a message queuing system, or other message brokering system for routing activity messages obtained from producer applications toward consumer applications. In these systems, binding rules are used by components within the systems that receive activity messages from producer applications to route the activity messages to message queues within the systems. The message queues hold (store) activity messages and forward them to consumer applications. The system can either “push” the activity messages from the message queues to the consumer applications, or the consumer applications can “pull” the activity messages from the message queues, depending on the requirements of the particular implementation at hand. In an embodiment, the pre-arranged criteria of a binding rule specifies conditions on activity messages that satisfy the binding rule. The conditions may be in terms of properties of the activity messages. For example, a condition may specify names and values of properties of the activity messages. A possible condition of a binding rule is: “routing-key=STOCK.USD.ACME”. In this case, the condition of the binding rule is not satisfied by an activity message unless the activity message contains the property “routing-key=STOCK.USD.ACME”. Binding rule conditions may be expressed in terms of properties other than just a “routing-key” property. In an embodiment, identifiers of inferred schemas are associated with identifiers of binding rules in a schema repository or database. In addition, the identifiers of the binding rules are associated with contact information. The contact information may be e-mail addresses, phone numbers, or other information for contacting a person electronically. The purpose of associating schemas with binding rules, and binding rules with contacts, is to be able to automatically notify the contacts when a change to a schema is detected. In an embodiment, an identifier of a schema is associated in the repository or database with one or more binding rule identifiers, and each of the one or more binding rule identifiers is associated in the repository or database with contact information for one or more contacts. The set of binding rule identifiers associated with the schema represents binding rules that may be affected if the schema of the activity messages from which the schema was inferred is changed. The contacts associated with one of the binding rules represent a person or persons who should be notified if the schema of the activity messages from which the schema was inferred is changed. The purpose of such notification is to inform a person or persons responsible for the binding rules so that the binding rules can be corrected or modified based on the detected schema change.
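A minimal sketch of the schema comparison just described, assuming each schema is a dict keyed by unique property name whose values carry the inferred facts (as returned by the earlier inference sketch); the returned list of detected changes is the kind of information a notification to the binding-rule contacts could carry.

```python
def diff_schemas(old, new):
    """Return a list of detected changes between two inferred schemas."""
    changes = []

    # The sets of unique property names differ if either set contains a
    # property name that is not in the other set.
    for name in sorted(set(old) - set(new)):
        changes.append(f"property removed: {name}")
    for name in sorted(set(new) - set(old)):
        changes.append(f"property added: {name}")

    # For each common property, compare the inferred schema information.
    for name in sorted(set(old) & set(new)):
        for fact in ("primary_type", "sub_type", "required", "nullable"):
            if old[name][fact] != new[name][fact]:
                changes.append(f"{name}: {fact} changed from "
                               f"{old[name][fact]!r} to {new[name][fact]!r}")
        # Exact matching of possible values; a fuzzy comparison could be
        # substituted here, as noted above.
        if old[name]["possible_values"] != new[name]["possible_values"]:
            changes.append(f"{name}: set of possible values changed")

    return changes  # an empty list means no schema change was detected
```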
In an embodiment, the notification includes, or includes a hyperlink to, information about what changed in the current version of the schema relative to a previous version of the schema. For example, the information may specify the identifier of the schema that changed, any uniquely named properties that were removed from the current version of the schema relative to the previous version of the schema, any uniquely named properties that were added in the current version of the schema relative to the previous version of the schema, any properties that are now or are no longer nullable in the current version of the schema relative to the previous version, any properties that are now or are no longer required in the current version of the schema relative to the previous version, and properties for which the possible values have changed between the current and previous versions. In addition or alternatively, the notification can provide, or provide a hyperlink to, both the current and previous versions of the schema for comparison by the recipient of the notification. Activity Information Schema Discovery and Schema Change Detection and Notification Process Referring now to FIG. 1, it is a flowchart illustrating steps performed by an online service that generates activity messages, according to one embodiment. In some embodiments, the steps are performed at one or more computing devices with one or more processors and memory, and one or more programs stored in the memory that execute on the one or more processors. At step 100, a first set of related activity messages obtained during a first sample period is analyzed. For example, the first sample period may correspond to a predetermined period of time or a predetermined number of activity messages. The activity messages may be related together in the first set based on one or more properties of the activity messages. For example, the activity messages may be related together in the first set based on values for one or more of a “routing-key” property, an “application-id” property, and/or a “client-type” property. Analyzing the activity messages can include parsing each of the activity messages into a set of unordered properties. At step 102, first schema counters for uniquely named properties identified in the first set of activity messages are determined based on the analyzing performed at step 100. For example, determining the first schema counters can include parsing the activity messages into sets of unordered properties and determining schema counters for identified uniquely named properties based on the sets of unordered properties. In an embodiment, steps 100 and 102 are performed during the first sample period. In particular, each activity message in the first set obtained during the sample period is analyzed during the sample period in accordance with step 100, and then schema counters for uniquely named properties identified in the activity message based on that analyzing are determined in accordance with step 102. Thus, each activity message in the first set can be analyzed (e.g., parsed into a set of unordered properties) and schema counters determined for the analyzed activity message as each activity message in the first set is obtained during the first sample period. In an embodiment, once schema counters for an analyzed activity message have been determined in accordance with step 102, the results of analyzing the activity message in accordance with step 100 can be discarded.
For example, once schema counters for an activity message have been determined based on a set of unordered properties in accordance with step 102, the data structure storing the set of unordered properties can be released (freed) from memory. At step 104, a first schema is inferred after the first sample period from the first schema counters determined at step 102. For example, inferring the first schema can include determining, for each of the uniquely named properties identified in the first set of activity messages based on the analyzing performed at step 100, and based on the schema counters determined at step 102, one or more of the following: a primary data type of the uniquely named property, a sub-data type of the uniquely named property, whether the uniquely named property is nullable, whether the uniquely named property is required, or the possible values for the uniquely named property. At step 106, a second set of related activity messages obtained during a second sample period is analyzed. The criteria for relating the second set of activity messages together in the second set may be the same as the criteria used for relating the first set of activity messages together in the first set. For example, the activity messages in the first set and the second set may all have the same value for a “routing-key” property. Typically, the first set of activity messages obtained during the first sample period is obtained before the second set of activity messages is obtained. That is, the last activity message obtained in the first set is obtained before the first activity message in the second set is obtained. At step 108, second schema counters for uniquely named properties identified in the second set of activity messages are determined based on the analyzing performed at step 106. Just as steps 100 and 102 can be performed during the first sample period, steps 106 and 108 can be performed during the second sample period. At step 110, a second schema is inferred after the second sample period from the second schema counters determined at step 108. At step 112, the first schema and the second schema are compared for any differences. If differences exist based on the comparison, a notification may be automatically sent. For example, an e-mail or text message notification can be sent responsive to detecting at least one difference between the first schema and the second schema. Implementing Computing Device In some embodiments, the techniques are implemented on one or more computing devices. For example, FIG. 2 is a block diagram that illustrates a computing device 200 in which some embodiments of the present invention may be embodied. Computing device 200 includes a bus 202 or other communication mechanism for communicating information, and a hardware processor 204 coupled with bus 202 for processing information. Hardware processor 204 may be, for example, a general purpose microprocessor or a system on a chip (SoC). Computing device 200 also includes a main memory 206, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 202 for storing information and instructions to be executed by processor 204. Main memory 206 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 204.
Such instructions, when stored in non-transitory storage media accessible to processor 204, render computing device 200 into a special-purpose machine that is customized to perform the operations specified in the instructions. Computing device 200 further includes a read only memory (ROM) 208 or other static storage device coupled to bus 202 for storing static information and instructions for processor 204. A storage device 210, such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to bus 202 for storing information and instructions. Computing device 200 may be coupled via bus 202 to a display 212, such as a liquid crystal display (LCD) or other electronic visual display, for displaying information to a computer user. Display 212 may also be a touch-sensitive display for communicating touch gesture (e.g., finger or stylus) input to processor 204. An input device 214, including alphanumeric and other keys, is coupled to bus 202 for communicating information and command selections to processor 204. Another type of user input device is cursor control 216, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 204 and for controlling cursor movement on display 212. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Computing device 200 may implement the techniques described herein using customized hard-wired logic, one or more application-specific integrated circuits (ASICs), one or more field-programmable gate arrays (FPGAs), firmware, or program logic which, in combination with the computing device, causes or programs computing device 200 to be a special-purpose machine. According to some embodiments, the techniques herein are performed by computing device 200 in response to processor 204 executing one or more sequences of one or more instructions contained in main memory 206. Such instructions may be read into main memory 206 from another storage medium, such as storage device 210. Execution of the sequences of instructions contained in main memory 206 causes processor 204 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as storage device 210. Volatile media includes dynamic memory, such as main memory 206. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 202.
Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 204 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computing device 200 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 202. Bus 202 carries the data to main memory 206, from which processor 204 retrieves and executes the instructions. The instructions received by main memory 206 may optionally be stored on storage device 210 either before or after execution by processor 204. Computing device 200 also includes a communication interface 218 coupled to bus 202. Communication interface 218 provides a two-way data communication coupling to a network link 220 that is connected to a local network 222. For example, communication interface 218 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 218 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 218 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link 220 typically provides data communication through one or more networks to other data devices. For example, network link 220 may provide a connection through local network 222 to a host computer 224 or to data equipment operated by an Internet Service Provider (ISP) 226. ISP 226 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 228. Local network 222 and Internet 228 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 220 and through communication interface 218, which carry the digital data to and from computing device 200, are example forms of transmission media. Computing device 200 can send messages and receive data, including program code, through the network(s), network link 220 and communication interface 218. In the Internet example, a server 230 might transmit a requested code for an application program through Internet 228, ISP 226, local network 222 and communication interface 218. The received code may be executed by processor 204 as it is received, and/or stored in storage device 210, or other non-volatile storage, for later execution. A software system is typically provided for controlling the operation of computing device 200.
The software system, which is usually stored in main memory 206 and on fixed storage (e.g., hard disk) 210, includes a kernel or operating system (OS) which manages low-level aspects of computer operation, including managing execution of processes, memory allocation, file and network input and output (I/O), and device I/O. The OS can be provided by a conventional operating system such as, for example, MICROSOFT WINDOWS, SUN SOLARIS, or LINUX. One or more application(s), such as client software or “programs” or sets of processor-executable instructions, may also be provided for execution by computing device 200. The application(s) may be “loaded” into main memory 206 from storage 210 or may be downloaded from a network location (e.g., an Internet web server). A graphical user interface (GUI) is typically provided for receiving user commands and data in a graphical (e.g., “point-and-click” or “touch gesture”) fashion. These inputs, in turn, may be acted upon by the computing device in accordance with instructions from the OS and/or application(s). The graphical user interface also serves to display the results of operation from the OS and application(s). Extensions and Alternatives The present disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. 14454632 netflix, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 05:26PM Mar 25th, 2022 05:26PM Netflix Consumer Services General Retailers
nasdaq:nflx Netflix May 28th, 2019 12:00AM Aug 8th, 2016 12:00AM https://www.uspto.gov?id=US10303777-20190528 Localization platform that leverages previously translated content One embodiment of the present invention sets forth a technique for translating textual content. The technique includes receiving a request to translate an element of source text from an origin language to a target language and searching a database for an element of matching text in the origin language that at least partially matches the element of source text. The technique further includes, if an element of matching text is found in the database, then reading from the database an element of previously translated text that is mapped to the element of matching text and includes at least one word that is translated into the target language, and transmitting the element of source text, the element of matching text, and the element of previously translated text to a location for translation, or if an element of matching text is not found in the database, then transmitting the element of source text to the location for translation. 10303777 1. A computer-implemented method, comprising: receiving from an external application a request to translate an element of source text from an origin language to a target language that is determined based on a geographical location where the source text is employed; searching a database associated with the geographical location for an element of matching text in the origin language that at least partially matches the element of source text; in response to determining that an element of matching text is found in the database: reading from the database an element of previously translated text that is mapped to the element of matching text and includes at least one word that is translated into the target language; and transmitting the element of source text, the element of matching text, the element of previously translated text, and a mapping between the element of source text and the element of matching text to a location for translation. 2. The method of claim 1, further comprising receiving from the location an element of newly translated text that corresponds to the element of source text. 3. The method of claim 2, further comprising: performing a quality check on the element of newly translated text; and if the element of newly translated text fails the quality check, returning the element of newly translated text to the location for correction, or if the newly translated text passes the quality check, updating the database to include the element of source text, the element of newly translated text, and a mapping of the element of source text to the element of newly translated text. 4. The method of claim 2, further comprising transmitting the element of newly translated text to an application that transmitted the request to translate. 5. The method of claim 2, further comprising updating the database to include the element of source text, the element of newly translated text, and the mapping of the element of source text to the newly translated text. 6. The method of claim 1, wherein searching the database for the element of matching text comprises searching the database for an element of text that is an in-context match of the element of source text, an out-of-context match of the element of source text, or a fuzzy match of the element of source text. 7. 
The method of claim 1, wherein an element of matching text is found in the database, and further comprising calculating a matching score that quantifies how closely the element of matching text matches the element of source text; and generating metadata associated with the element of matching text that includes the matching score. 8. The method of claim 7, wherein the matching score includes one of an indicator designating the element of matching text as an in-context match of the element of source text, an exact out-of-context match of the element of source text, or a fuzzy match of the element of source text. 9. The method of claim 7, wherein the matching score includes an indicator designating the element of matching text as a fuzzy match of the element of source text and metadata indicating what portion of the element of matching text does not match the element of source text. 10. The method of claim 7, further comprising transmitting the metadata to the location with the element of previously translated text. 11. The method of claim 7, wherein the matching score is based on an editing distance between the element of source text and the element of matching text. 12. A non-transitory computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform the steps of: receiving from an external application a request to translate an element of source text from an origin language to a target language that is determined based on a geographical location where the source text is employed; searching a database associated with the geographical location for an element of matching text in the origin language that at least partially matches the element of source text; in response to determining that an element of matching text is found in the database: reading from the database an element of previously translated text that is mapped to the element of matching text and includes at least one word that is translated into the target language; and transmitting the element of source text, the element of matching text, the element of previously translated text, and a mapping between the element of source text and the element of matching text to a location for translation. 13. The non-transitory computer-readable storage medium of claim 12, wherein receiving the request to translate the element of source text comprises receiving a block of source text that includes multiple phrases or sentences, the method further comprising separating the block of source text into multiple elements of source text that each include a single sentence or phrase. 14. The non-transitory computer-readable storage medium of claim 12, wherein receiving the request to translate the element of source text comprises receiving a block of text in the origin language that is embedded in an electronic document, the method further comprising extracting the block of text from the electronic document, wherein the element of source text includes at least a portion of the block of text. 15. The non-transitory computer-readable storage medium of claim 12, wherein the element of source text includes a multi-word phrase or sentence in the origin language. 16. The non-transitory computer-readable storage medium of claim 12, further comprising receiving from the location an element of newly translated text that corresponds to the element of source text. 17. 
The non-transitory computer-readable storage medium of claim 16, further comprising: performing a quality check on the element of newly translated text; and if the element of newly translated text fails the quality check, returning the element of newly translated text to the location for correction, or if the newly translated text passes the quality check, updating the database to include the element of source text, the element of newly translated text, and a mapping of the element of source text to the element of newly translated text. 18. The non-transitory computer-readable storage medium of claim 16, further comprising transmitting the element of newly translated text to an application that transmitted the request to translate. 19. The non-transitory computer-readable storage medium of claim 16, further comprising updating the database to include the element of source text, the element of newly translated text, and the mapping of the element of source text to the newly translated text. 20. A system, comprising: a memory storing a leveraging application; and a processor that is coupled to the memory and, when executing the leveraging application, is configured to: receive from an external application a request to translate an element of source text from an origin language to a target language that is determined based on a geographical location where the source text is employed; search a database associated with the geographical location for an element of matching text in the origin language that at least partially matches the element of source text; if an element of matching text is found in the database, then: read from the database an element of previously translated text that is mapped to the element of matching text and includes at least one word that is translated into the target language; and transmit the element of source text, the element of matching text, the element of previously translated text, and a mapping between the element of source text and the element of matching text to a location for translation. 20 BACKGROUND OF THE INVENTION Field of the Invention The present invention relates generally to computer science and, more specifically, to a localization platform that leverages previously translated content. Description of the Related Art Text localization is the process of translating and otherwise adapting written content to a language or dialect specific to a country or region. Because machine translation algorithms are generally unable to accurately translate idioms or accommodate the differences in linguistic typology that are invariably present between any two languages, there are many applications for which manual translations by linguists are still mandatory. For example, with respect to software applications, software menus, legal documents, and customer service communications, even a small error in translation can have a serious negative impact on the utility and/or quality of the software, document, or service that includes the incorrectly translated text. However, manual translation is typically a time-consuming, error-prone, and long cycle-time process that is difficult to scale efficiently. Consequently, for business processes that rely on large volumes of textual content to be translated on a weekly or daily basis, the time and costs associated with manually translating so much textual content can be significantly burdensome. 
For example, web-based vendors that serve an international customer base may have very large daily or weekly translation needs that must be met quickly and accurately to avoid customer satisfaction issues and to drive international sales. Without the ability to provide high-quality translations quickly and inexpensively, such businesses can suffer dramatically. As the foregoing illustrates, what is needed in the art are more effective approaches to translating textual content. SUMMARY OF THE INVENTION One embodiment of the present invention sets forth a technique for translating textual content. The technique includes receiving a request to translate an element of source text from an origin language to a target language and searching a database for an element of matching text in the origin language that at least partially matches the element of source text. The technique further includes, if an element of matching text is found in the database, then reading from the database an element of previously translated text that is mapped to the element of matching text and includes at least one word that is translated into the target language, and transmitting the element of source text, the element of matching text, and the element of previously translated text to a location for translation, or if an element of matching text is not found in the database, then transmitting the element of source text to the location for translation. At least one advantage of the disclosed techniques is that for new textual content that requires translation, previously translated textual content can be leveraged in an automated process to reduce or eliminate how much manual translation of the new textual content is needed. BRIEF DESCRIPTION OF THE DRAWINGS So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments. FIG. 1 illustrates a network infrastructure configured to implement one or more aspects of the present invention; FIG. 2 is a more detailed illustration of the content server of FIG. 1, according to various embodiments of the present invention; FIG. 3 is a more detailed illustration of the control server of FIG. 1, according to various embodiments of the present invention; FIG. 4 is a more detailed illustration of the endpoint device of FIG. 1, according to various embodiments of the present invention; FIG. 5 is an illustration of a localization platform configured to generate translated content for the network infrastructure of FIG. 1, according to various embodiments of the present invention; FIG. 6 sets forth a flowchart of method steps for translating textual content, according to various embodiments of the present invention; FIG. 7 sets forth a flowchart of method steps for performing a matching score analysis, according to various embodiments of the present invention; and FIG. 8 is an illustration of a computing device configured to implement one or more functions of the localization platform of FIG. 5, according to various embodiments of the present invention. 
DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the embodiments of the present invention. However, it will be apparent to one of skill in the art that the embodiments of the present invention may be practiced without one or more of these specific details. System Overview FIG. 1 illustrates a network infrastructure 100, according to various embodiments of the invention. As shown, the network infrastructure 100 includes content servers 110, control server 120, and endpoint devices 115, each of which is connected via a communications network 105. Network infrastructure 100 is configured to distribute content to content servers 110, and such content is then distributed on demand to endpoint devices 115. Each endpoint device 115 communicates with one or more content servers 110 (also referred to as “caches” or “nodes”) via the network 105 to download content, such as textual data, graphical data, audio data, video data, and other types of data. The downloadable content, also referred to herein as a “file,” is then presented to a user of one or more endpoint devices 115. In various embodiments, the endpoint devices 115 may include computer systems, set top boxes, mobile computers, smartphones, tablets, console and handheld video game systems, digital video recorders (DVRs), DVD players, connected digital TVs, dedicated media streaming devices (e.g., the Roku® set-top box), and/or any other technically feasible computing platform that has network connectivity and is capable of presenting content, such as text, images, video, and/or audio content, to a user. Each content server 110 may include a web server, database, and server application 217 configured to communicate with the control server 120 to determine the location and availability of various files that are tracked and managed by the control server 120. Each content server 110 may further communicate with cloud services 130 and one or more other content servers 110 in order to “fill” each content server 110 with copies of various files. In addition, content servers 110 may respond to requests for files received from endpoint devices 115. The files may then be distributed from the content server 110 or via a broader content distribution network. In some embodiments, the content servers 110 enable users to authenticate (e.g., using a username and password) in order to access files stored on the content servers 110. Although only a single control server 120 is shown in FIG. 1, in various embodiments multiple control servers 120 may be implemented to track and manage files. In various embodiments, the cloud services 130 may include an online storage service (e.g., Amazon® Simple Storage Service, Google® Cloud Storage, etc.) in which a catalog of files, including thousands or millions of files, is stored and accessed in order to fill the content servers 110. Cloud services 130 also may provide compute or other processing services. Although only a single cloud services 130 is shown in FIG. 1, in various embodiments multiple cloud services 130 may be implemented. FIG. 2 is a more detailed illustration of content server 110 of FIG. 1, according to various embodiments of the present invention. As shown, the content server 110 includes, without limitation, a central processing unit (CPU) 204, a system disk 206, an input/output (I/O) devices interface 208, a network interface 210, an interconnect 212, and a system memory 214.
The CPU 204 is configured to retrieve and execute programming instructions, such as server application 217, stored in the system memory 214. Similarly, the CPU 204 is configured to store application data (e.g., software libraries) and retrieve application data from the system memory 214. The interconnect 212 is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU 204, the system disk 206, I/O devices interface 208, the network interface 210, and the system memory 214. The I/O devices interface 208 is configured to receive input data from I/O devices 216 and transmit the input data to the CPU 204 via the interconnect 212. For example, I/O devices 216 may include one or more buttons, a keyboard, a mouse, and/or other input devices. The I/O devices interface 208 is further configured to receive output data from the CPU 204 via the interconnect 212 and transmit the output data to the I/O devices 216. The system disk 206 may include one or more hard disk drives, solid state storage devices, or similar storage devices. The system disk 206 is configured to store non-volatile data such as files 218 (e.g., audio files, video files, subtitles, application files, software libraries, etc.). The files 218 can then be retrieved by one or more endpoint devices 115 via the network 105. In some embodiments, the network interface 210 is configured to operate in compliance with the Ethernet standard. The system memory 214 includes a server application 217 configured to service requests for files 218 received from endpoint device 115 and other content servers 110. When the server application 217 receives a request for a file 218, the server application 217 retrieves the corresponding file 218 from the system disk 206 and transmits the file 218 to an endpoint device 115 or a content server 110 via the network 105. Files 218 include a plurality of digital visual content items, such as videos and still images. In addition, files 218 may include textual content associated with such digital visual content items, such as movie metadata. For a particular digital visual content item, files 218 may include multiple translations of such textual content, so that users in different countries can interact with or request the particular digital visual content item regardless of the preferred language of the user. FIG. 3 is a more detailed illustration of control server 120 of FIG. 1, according to various embodiments of the present invention. As shown, the control server 120 includes, without limitation, a central processing unit (CPU) 304, a system disk 306, an input/output (I/O) devices interface 308, a network interface 310, an interconnect 312, and a system memory 314. The CPU 304 is configured to retrieve and execute programming instructions, such as control application 317, stored in the system memory 314. Similarly, the CPU 304 is configured to store application data (e.g., software libraries) and retrieve application data from the system memory 314 and a database 318 stored in the system disk 306. The interconnect 312 is configured to facilitate transmission of data between the CPU 304, the system disk 306, I/O devices interface 308, the network interface 310, and the system memory 314. The I/O devices interface 308 is configured to transmit input data and output data between the I/O devices 316 and the CPU 304 via the interconnect 312. The system disk 306 may include one or more hard disk drives, solid state storage devices, and the like. 
The system disk 306 is configured to store a database 318 of information associated with the content servers 110, the cloud services 130, and the files 218. The system memory 314 includes a control application 317 configured to access information stored in the database 318 and process the information to determine the manner in which specific files 218 will be replicated across content servers 110 included in the network infrastructure 100. The control application 317 may further be configured to receive and analyze performance characteristics associated with one or more of the content servers 110 and/or endpoint devices 115. FIG. 4 is a more detailed illustration of the endpoint device 115 of FIG. 1, according to various embodiments of the present invention. As shown, the endpoint device 115 may include, without limitation, a CPU 410, a graphics subsystem 412, an I/O device interface 414, a mass storage unit 416, a network interface 418, an interconnect 422, and a memory subsystem 430. In some embodiments, the CPU 410 is configured to retrieve and execute programming instructions stored in the memory subsystem 430. Similarly, the CPU 410 is configured to store and retrieve application data (e.g., software libraries) residing in the memory subsystem 430. The interconnect 422 is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU 410, graphics subsystem 412, I/O devices interface 414, mass storage 416, network interface 418, and memory subsystem 430. In some embodiments, the graphics subsystem 412 is configured to generate frames of video data and transmit the frames of video data to display device 450. In some embodiments, the graphics subsystem 412 may be integrated into an integrated circuit, along with the CPU 410. The display device 450 may comprise any technically feasible means for generating an image for display. For example, the display device 450 may be fabricated using liquid crystal display (LCD) technology, cathode-ray technology, and light-emitting diode (LED) display technology. An input/output (I/O) device interface 414 is configured to receive input data from user I/O devices 452 and transmit the input data to the CPU 410 via the interconnect 422. For example, user I/O devices 452 may comprise one or more buttons, a keyboard, and a mouse or other pointing device. The I/O device interface 414 also includes an audio output unit configured to generate an electrical audio output signal. User I/O devices 452 include a speaker configured to generate an acoustic output in response to the electrical audio output signal. In alternative embodiments, the display device 450 may include the speaker. Examples of suitable devices known in the art that can display video frames and generate an acoustic output include televisions, smartphones, smartwatches, electronic tablets, and the like. A mass storage unit 416, such as a hard disk drive or flash memory storage drive, is configured to store non-volatile data. A network interface 418 is configured to transmit and receive packets of data via the network 105. In some embodiments, the network interface 418 is configured to communicate using the well-known Ethernet standard. The network interface 418 is coupled to the CPU 410 via the interconnect 422. In some embodiments, the memory subsystem 430 includes programming instructions and application data that comprise an operating system 432, a user interface 434, and a playback application 436.
The operating system 432 performs system management functions such as managing hardware devices including the network interface 418, mass storage unit 416, I/O device interface 414, and graphics subsystem 412. The operating system 432 also provides process and memory management models for the user interface 434 and the playback application 436. The user interface 434, such as a window and object metaphor, provides a mechanism for user interaction with endpoint device 115. Persons skilled in the art will recognize the various operating systems and user interfaces that are well-known in the art and suitable for incorporation into the endpoint device 115. In some embodiments, the playback application 436 is configured to request and receive content from the content server 110 via the network interface 418. Further, the playback application 436 is configured to interpret the content and present the content via display device 450 and/or user I/O devices 452. Localization Platform According to various embodiments of the present invention, new textual content that needs to be localized is translated via a partially or fully automated approach, in which previously translated textual content is leveraged to minimize or eliminate the need for manual translation of the new textual content. In some embodiments, a localization platform receives textual content for translation, leverages previously translated textual content to reduce manual translation workload, acts as an interface with linguists who perform any needed manual translation, and returns translated textual content to the application requesting the translated textual content. One such embodiment is illustrated in FIG. 5. FIG. 5 is an illustration of a localization platform 500 configured to generate translated content for the network infrastructure of FIG. 1, according to various embodiments of the present invention. Localization platform 500 facilitates the localization of textual content to provide a high quality user experience, for example for an end user associated with an endpoint device 115 of network infrastructure 100 in FIG. 1. Ideally, such an end user has a similar high quality experience, regardless of the country of residence of the user or the preferred language of the user. Consequently, user interactions with network infrastructure 100 that include a textual element, such as e-mails, graphical user interface (GUI) text strings, etc., should be presented in the preferred language of the user. For example, in embodiments in which network infrastructure 100 is involved in the distribution of digital entertainment content, such as video streaming, the localized textual content generated by localization platform 500 may include movie metadata (e.g., movie synopsis, cast information, sub-titles, etc.), marketing material, and strings used in customer-facing applications. In general, localization platform 500 is configured to leverage previously translated textual content to partially or fully automate the translation of new textual content. As shown, localization platform 500 is communicatively coupled to one or more external applications 590 and linguists 595 via a network 505, and includes a connector module 520, a hub module 530, a leveraging module 540, and a translation database 550.
Network 505 may be any technically feasible communications or information network, wired or wireless, that allows data exchange, such as a wide area network (WAN), a local area network (LAN), a wireless (WiFi) network, and/or the Internet, among others. External applications 590 are source systems for localization platform 500 that employ localized content, such as textual content that is specific to a particular location or country. Therefore, external applications 590 send requests for translations of textual content to localization platform 500 and/or have textual content for translation that is periodically pulled by localization platform 500. External applications 590 may include, without limitation, a user interface (UI) text string repository 591, a metadata application 592, a customer service application 593, and one or more additional external applications, such as marketing applications, legal applications, and the like. Each of external applications 590 is an application that employs textual content that varies depending on the location in which the textual content is employed. UI text string repository 591 may include the most current text strings for a user interface by which a user associated with a particular endpoint device 115 can interact with network infrastructure 100. One embodiment of such a user interface is user interface 434 in FIG. 4. Whenever a text string in the primary language is added to UI text string repository 591, or an existing text string in the primary language is modified, the new or modified text string needs to be translated into each language supported by network infrastructure 100, then provided to UI text string repository 591. In this way, a consistent UI with the same menus, explanations, and the like, is presented to all users, regardless of user location and preferred language. Thus, UI text string repository 591 may request specific text strings to be translated, either periodically or whenever such a change in the current text strings is detected. Metadata application 592, customer service application 593, and other external applications may similarly request specific text strings to be translated. Metadata application 592 is configured to provide movie-specific data to end users, such as a movie synopsis, cast information, and the like. Customer service application 593 is configured to generate electronic responses, such as e-mails, to customer queries. Thus, metadata application 592 and customer service application 593 each rely on up-to-date text strings that are available in all languages supported by network infrastructure 100. Connector module 520 is configured to connect with external applications 590 to receive requests for textual content to be translated and/or to pull textual content for translation from external applications 590. In the latter case, connector module 520 queries each of external applications 590 for textual content to be translated in an automated process, such as periodic polling of each of external applications 590. In some embodiments, the frequency of polling for each external application may be unique. For example, connector module 520 may query metadata application 592 every few minutes or hours when a daily program is under production and new metadata associated with the daily program are generated on an on-going basis. By contrast, connector module 520 may query customer service application 593 only once per week to determine whether new text for customer bulk delivery e-mails has been generated.
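The per-application polling cadence described above might be driven by a loop like the following sketch; the interval values, application names, and the pull_untranslated/submit_job callables are illustrative assumptions rather than details from this description.

```python
import time

# Hypothetical poll intervals per external application, in seconds.
POLL_INTERVALS = {
    "metadata_application": 60 * 60,             # hourly during daily-program production
    "customer_service_application": 7 * 86400,   # weekly bulk-e-mail text check
}

def run_connector(pull_untranslated, submit_job):
    """Poll each external application on its own schedule.

    pull_untranslated(app) -> documents needing translation (assumed callable)
    submit_job(app, docs)  -> hands documents to the hub module (assumed callable)
    """
    next_poll = {app: 0.0 for app in POLL_INTERVALS}
    while True:
        now = time.monotonic()
        for app, interval in POLL_INTERVALS.items():
            if now >= next_poll[app]:
                docs = pull_untranslated(app)
                if docs:
                    submit_job(app, docs)
                next_poll[app] = now + interval
        time.sleep(1)  # coarse scheduler tick
```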
Connector module 520 passes textual content received for translation, such as electronic documents 531, to hub module 530, and returns translated textual content received from hub module 530 on to the appropriate requesting external application 590. In some embodiments, connector module 520 receives a translated job 533 from hub module 530 and generates a translated electronic document 521 that is returned to the appropriate requesting external application 590. Hub module 530 is configured to drive workflows within localization platform 500. In particular, hub module 530 is configured to facilitate completion of translation jobs by acting as a hub between connector module 520, linguists 595, and leveraging module 540. For example, in some embodiments, hub module 530 associates a particular project with each of external applications 590, and a given electronic file 531 (or set of files) received from one of external applications 590 is considered a particular translation job 532 within the project associated with that external application 590. In such embodiments, each file 531 may be a word-processing file, spreadsheet, properties file (or property resource bundle), or any other electronic document that is employed by one or more of external applications 590 and includes textual content strings. Hub module 530 transmits each translation job 532 to leveraging module 540 for leveraging (described below), receives a leveraged translation job 541 from leveraging module 540, makes the leveraged version of the translation job 541 available for downloading by one or more linguists 595, and receives translated jobs 533 uploaded by linguists 595. Thus, hub module 530 is configured as the human interface for localization platform 500 for outputting files that include textual content for translation, such as files in an XML Localization Interchange File Format (XLIFF), and receiving files that include textual content that has been translated, such as XLIFF files. Hub module 530 then transmits received translated jobs 533 to connector module 520 for transmission to the appropriate external application 590. In some embodiments, hub module 530 first transmits the translated jobs 533 received from linguists 595 to leveraging module 540 for a quality check prior to transmitting received translated jobs 533 to connector module 520. Leveraging module 540 is configured to perform various functions that enable the leveraging of previously translated textual content to eliminate, minimize, or otherwise reduce the need for manual translation of source text that is in an origin language and is requested to be translated into one or more target languages. Specifically, leveraging module 540 is configured to extract textual content from electronic documents included in a translation job 532; segment such textual content into smaller and more easily matched text elements; search translation database 550 for matching text elements; perform a matching score analysis for matching text elements; perform quality checks on translated content received from linguists 595; and update translation database 550 with newly translated content. The above functions are described in greater detail below in conjunction with FIGS. 6 and 7. Translation database 550 includes a continuously updated repository of text elements that are in an origin language, for example English.
For each such origin language text element 551, translation database 550 includes one or more target language text elements 552, each in a different target language, that correspond to the origin language text element 551. The one or more corresponding target language text elements 552 may each be previously translated textual content that a linguist has translated from the origin language text element 551. In addition, for each origin language text element 551, translation database 550 includes contextual metadata 553, which may include any contextual information associated with a particular origin language text element 551. As described below in greater detail, contextual metadata 553 that is associated with a particular origin language text element 551 may be employed for determining whether a particular text element from an electronic document 531 is an in-context match of the particular origin language text element 551. By way of illustration, localization platform 500 is illustrated conceptually as a single entity in FIG. 5; however, in some embodiments, localization platform 500 may be implemented as a distributed computing system across multiple computing devices. In a similar vein, connector module 520, hub module 530, and/or leveraging module 540 may each be configured as a service that is distributed over multiple machines, so that the functionality of any one of connector module 520, hub module 530, and/or leveraging module 540 is not vulnerable to a single point of failure. Furthermore, the various functions of localization platform 500 that are divided between connector module 520, hub module 530, and leveraging module 540 may be combined into a single service, or rearranged between multiple additional services, and are not limited to the configuration illustrated in FIG. 5. Leveraging Previously Translated Content in a Localization Platform FIG. 6 sets forth a flowchart of method steps for translating textual content, according to various embodiments of the present invention. Although the method steps are described with respect to the systems of FIGS. 1-5, persons skilled in the art will understand that any system configured to perform the method steps, in any order, falls within the scope of the present disclosure. As shown, a method 600 begins at step 601, in which connector module 520 receives an electronic file 531 that includes textual content for translation, as shown in FIG. 5. Upon receipt of electronic file 531, connector module 520 transmits electronic file 531 to hub module 530. Hub module 530 assigns a job identifier to electronic file 531 and transmits electronic file 531 as translation job 532 to leveraging module 540 for further processing prior to translation. In step 602, leveraging module 540 extracts textual content from the document. Because each of external applications 590 may send a different type of electronic file to localization platform 500 for translation, leveraging module 540 is configured to recognize the particular file type of electronic file 531 and accordingly extract textual content from electronic file 531 that needs to be translated. In some embodiments, electronic file 531 includes metadata, header information, or some other indicator for what portions of electronic file 531 are textual content that need to be translated. In other embodiments, leveraging module 540 is configured to determine what portions of electronic file 531 are textual content that need to be translated.
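The translation database structure described above (an origin language text element 551 mapped to per-language target text elements 552, plus contextual metadata 553) might be modeled as in the following minimal sketch; the class and field names are illustrative assumptions, not details from this description.

```python
from dataclasses import dataclass, field

@dataclass
class TranslationEntry:
    """One origin language text element and its previously translated forms."""
    origin_text: str                                          # text element 551
    translations: dict = field(default_factory=dict)          # language code -> element 552
    contextual_metadata: dict = field(default_factory=dict)   # context 553

# Example lookup: a previously translated French form of a UI string.
entry = TranslationEntry(
    origin_text="Continue watching",
    translations={"fr": "Continuer à regarder"},
    contextual_metadata={"source": "ui_text_string_repository"},
)
french = entry.translations.get("fr")  # None would mean no leverage is available
```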
In step 603, leveraging module 540 segments the textual content extracted in step 602 into individual text elements, or elements of source text. In some embodiments, an element of source text is a single sentence or phrase from electronic file 531. Thus, in step 603, a paragraph of textual content extracted from electronic file 531 is generally divided into individual sentences, each sentence being a text element that can be checked for matches in translation database 550. Because larger blocks of text are broken into such elements of source text, there is more likely to be matching text in translation database 550 that exactly or substantially matches each element of source text. Each heading, phrase, title block, or other sentence fragment can also be segmented as a single element of source text. In step 604, leveraging module 540 searches for matching text elements in translation database 550 for each element of source text extracted from electronic file 531. In some embodiments, a particular text element in translation database 550 is considered a matching text element when the element of source text extracted from electronic file 531 and the particular text element in translation database 550 share a minimum number of words or letters. Thus, a text element in translation database 550 may be considered a matching text element even when not an exact word-for-word match of the element of source text extracted from electronic file 531. Any technically feasible text searching algorithm may be employed in step 604. In some embodiments, when no matches are found for a particular element of source text, the text searching algorithm may be configured to transform one or more words in the element of source text to make further matches possible. For example, verb case and/or tense may be transformed, plural nouns may be transformed to singular nouns or vice versa, etc. In step 605, leveraging module 540 determines whether a matching text element has been found for one or more of the elements of source text associated with electronic file 531. If yes, method 600 proceeds to step 611; if no, method 600 proceeds to step 621. In step 611, leveraging module 540 performs a matching score analysis for each element of source text from an electronic file 531 for which one or more matching text elements were found in step 604. The matching score analysis generates at least one matching score for each element of source text for which one or more matching text elements were found in step 604. Thus, there may be multiple matching scores associated with a single electronic file 531, one corresponding to each element of source text for which a matching text element has been found. The matching score for a particular matching text element found in translation database 550 quantifies how closely the element of source text extracted from electronic file 531 matches that matching text element. In some embodiments, for a given element of source text, a matching score analysis may be performed for multiple matching text elements, and the matching text element with the best matching score will be considered in subsequent steps. In some embodiments, in step 611, leveraging module 540 determines whether each matching text element found in step 604 for electronic file 531 is an exact match or an “in-context match” of the corresponding element of source text extracted from electronic file 531. An exact match is typically a word-for-word or character-for-character match of the element of source text of interest.
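Steps 603 and 604 can be pictured with a small sketch: a naive sentence segmenter plus a shared-word candidate search. Both functions are hypothetical simplifications; a production system would use far more robust segmentation rules and an indexed search rather than a linear scan.

```python
import re

def segment(text: str) -> list[str]:
    """Split extracted textual content into sentence-level elements of
    source text (step 603). A naive boundary rule is used here; real
    segmenters also handle abbreviations, headings, and fragments."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def search_matches(element: str, database: list[str],
                   min_shared_words: int = 3) -> list[str]:
    """Return candidate matching text elements that share at least a
    minimum number of words with the element of source text (step 604)."""
    words = set(element.lower().split())
    candidates = []
    for entry in database:
        shared = words & set(entry.lower().split())
        if len(shared) >= min_shared_words:
            candidates.append(entry)
    return candidates

paragraph = "Select a profile. Manage your plan anytime. Cancel anytime."
elements = segment(paragraph)   # three elements of source text
```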
By contrast, an “in-context match” is generally defined as a word-for-word or character-for-character match of a matching text element with the element of source text of interest, in which the matching text element also shares identical or very similar context with the element of source text in question. In some embodiments, for in-context matches, the element of source text is automatically replaced with the matching text element, thereby avoiding the manual translation of the element of source text. In some embodiments, the above-described matching score for each matching text element includes metadata or any other indicator that designates the matching text element as an in-context match of the element of source text, an exact out-of-context match of the element of source text, or a fuzzy match of the element of source text. In such embodiments, the element of matching text may be designated as one of multiple levels of fuzzy match with the element of source text, e.g., a high-fuzzy match, a medium-fuzzy match, a low-fuzzy match, etc. Any technically feasible scoring algorithm may be employed in step 611 to quantify how closely an element of source text extracted from electronic file 531 matches a matching text element found in translation database 550. In some embodiments, a fuzzy match algorithm is employed in step 611. One such embodiment is described below in conjunction with FIG. 7. After the matching score analysis of step 611 is completed for each element of source text for which a matching text element was found, method 600 proceeds to step 612. In step 612, any elements of source text for which an in-context match is found in translation database 550 are automatically replaced with a matching text element that is determined to be an in-context match. That is, in step 612, an automated translation of in-context matches determined in step 611 is performed. Method 600 then proceeds to step 621. In step 621, hub module 530 transmits to a particular linguist 595 one or more elements of source text of electronic file 531, an associated matching text element (i.e., an origin language text element 551), if one was found in step 604, and the element of previously translated text that corresponds to the matching text element (i.e., a target language text element 552). In some embodiments, the one or more elements of source text of electronic file 531, the corresponding origin language text element 551, and the target language text element 552 are included in an XLIFF file. The XLIFF file format is an XML-based format that standardizes how localizable data are passed between tools, such as computer-aided translation (CAT) tools, during a localization or translation process. In some embodiments, a matching score or other matching metadata are also transmitted to the linguist 595 in step 621, and may be included in the same XLIFF file in which the elements of source text are transmitted to the linguist 595. In such embodiments, leveraging module 540 generates a leveraged translation job 541 or a portion of a leveraged translation job 541. A leveraged translation job 541 includes, for each element of source text to be translated that has a corresponding element of matching text, a matching score and/or other matching metadata.
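Since the text names XLIFF as the interchange format, the following sketch serializes one element of source text, its leveraged match, and a match score into a minimal XLIFF 1.2 document. Carrying the leveraged content in an alt-trans element with a match-quality attribute is a conventional XLIFF 1.2 idiom; the surrounding attribute values are placeholder assumptions, not the platform's actual file layout.

```python
import xml.etree.ElementTree as ET

def build_xliff(source: str, match_source: str, match_target: str,
                match_quality: str) -> str:
    """Build a minimal XLIFF 1.2 payload for one element of source text,
    attaching the leveraged match and its score as an <alt-trans>."""
    xliff = ET.Element("xliff", {"version": "1.2"})
    file_ = ET.SubElement(xliff, "file", {
        "original": "job.xlf", "datatype": "plaintext",
        "source-language": "en", "target-language": "fr"})
    body = ET.SubElement(file_, "body")
    unit = ET.SubElement(body, "trans-unit", {"id": "1"})
    ET.SubElement(unit, "source").text = source
    ET.SubElement(unit, "target").text = ""   # filled in by the linguist
    alt = ET.SubElement(unit, "alt-trans", {"match-quality": match_quality})
    ET.SubElement(alt, "source").text = match_source
    ET.SubElement(alt, "target").text = match_target
    return ET.tostring(xliff, encoding="unicode")

print(build_xliff("Cancel anytime.", "Cancel any time.",
                  "Annulez à tout moment.", "95%"))
```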
Thus, when a linguist 595 receives a leveraged version of a translation job 541, the linguist 595 receives not only a particular element of source text to be translated, a corresponding origin language text element 551, and a target language text element 552, but also a matching score and/or other matching metadata. The matching score and/or other matching metadata further describe the relationship between the element of source text and the matching text element (i.e., the origin language text element 551). As a result, a linguist 595 can see whether the matching text element is an exact word-for-word match with the element of source text, or if the matching text element is merely a so-called “fuzzy match,” in which most but not all words match those in the element of source text. Furthermore, in some embodiments, such matching metadata can indicate what portions of the matching text element are not an exact match to the element of source text and why those portions are not considered an exact match. Thus, based on such matching metadata, a linguist 595 can quickly determine what portion of a text element is most likely in need of being modified. In some embodiments, all elements of source text extracted from electronic file 531 are sent to the same linguist 595 for translation, for example in a single XLIFF file as a single leveraged version of a translation job 541. In other embodiments, for example when electronic file 531 includes a large quantity of textual content for translation, different portions of the elements of source text associated with electronic file 531 are sent to different respective linguists 595 as separate leveraged translation jobs 541. For example, in such embodiments, all elements of source text associated with a particular paragraph may be sent to one linguist 595, and all elements of source text associated with a different paragraph may be sent to a different linguist 595. In embodiments in which the elements of source text extracted from a particular electronic file 531 are separated into multiple leveraged versions of translation jobs 541, the elements of source text extracted from the particular electronic file 531 may be separated into different translation jobs based on translation statistics that are compiled as part of the matching score analysis of step 611. Such translation statistics may include, for the particular electronic file 531 and for each element of source text associated with electronic file 531: the total number of words to be translated; how many words are an exact match; how many words are a high-fuzzy match, a medium-fuzzy match, and a low-fuzzy match; and the like (a sketch of compiling such statistics follows this passage). Thus, as part of step 621, hub module 530 may generate multiple leveraged translation jobs 541 for a particular electronic file 531, each being transmitted to a different linguist 595. It is noted that for an electronic file 531 for which no matching text elements are found in step 605, complete manual translation of the text elements extracted from electronic file 531 will ultimately be performed by one or more linguists 595. In such cases, the leveraged translation job 541 does not include a matching score, matching metadata, a matching text element, or a previously translated text that corresponds to the matching text element. In step 622, hub module 530 receives a translated job 533 uploaded by a linguist 595, where translated job 533 includes translated content for a particular leveraged translation job 541.
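Before moving to the quality check, here is a minimal sketch of compiling the translation statistics described above. The category labels and data shapes are assumptions chosen to mirror the match levels named in the text.

```python
from collections import Counter

def translation_statistics(elements: list[tuple[str, str]]) -> Counter:
    """Tally words to translate by match category for a file's elements of
    source text. Each element is paired with its match category."""
    stats = Counter()
    for text, category in elements:
        n_words = len(text.split())
        stats["total_words"] += n_words
        stats[category] += n_words
    return stats

elements = [
    ("Select a profile to continue.", "exact"),
    ("Manage your plan at any time.", "high_fuzzy"),
    ("This title is leaving soon.", "no_match"),
]
stats = translation_statistics(elements)
# A hub could then split the elements into separate leveraged jobs, e.g.,
# routing the no-match portion to one linguist and the fuzzy matches to
# another.
```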
In step 623, leveraging module 540 performs a quality check of the translated content included in the leveraged translation job 541 received in step 622. For example, in some embodiments, leveraging module 540 performs one or more error checks to confirm that common human errors have not been made by the linguist 595. When such an error is detected, the leveraged translation job 541 is returned to the appropriate linguist 595 for correction. One error check that can be made in step 623 is to confirm that for each element of source text, a corresponding newly translated text element is not identical or substantially the same as the element of source text. Such high similarity between the origin language text element and the target language text element strongly implies that the linguist failed to complete or even begin the translation. Another error check that can be made in step 623 is to confirm that the language of each newly translated text element is in the target language. Another error check that can be made in step 623 is that the word count of each newly translated text element is within an acceptable range of the word count of the corresponding element of source text. Another error check that can be made in step 623 is a spell check of each newly translated text element. Yet another error check that can be made in step 623 is a sentiment analysis of each newly translated text element, where the overall sentiment of each newly translated text element is compared to the overall sentiment of the corresponding element of source text (a sketch of a few such checks follows this passage). In step 624, leveraging module 540 determines whether each newly translated text element in the translated job 533 has passed all error and/or quality checks. If no, method 600 proceeds to step 625; if yes, method 600 proceeds to step 626. In step 625, hub module 530 returns the translated content to the linguist 595, since one or more newly translated text elements in translated job 533 have failed to pass the quality check of step 623. In step 626, leveraging module 540 updates translation database 550 with newly translated textual content. Specifically, leveraging module 540 updates translation database 550 to include each element of source text in leveraged translation job 541 that was not an exact, in-context match of an existing entry in translation database 550. The element of source text is included in translation database 550 as an origin language text element 551. In addition, for each such element of source text, leveraging module 540 includes the element of newly translated text in translated job 533, where the newly translated text from translated job 533 is included in translation database 550 as a target language text element 552. Furthermore, leveraging module 540 updates translation database 550 to include a mapping of the element of source text to the element of newly translated text, i.e., each origin language text element 551 is mapped to one or more target language text elements 552 in translation database 550. As a result, as translation requests are completed by localization platform 500, translation database 550 is continuously updated with new origin language text elements 551 and corresponding target language text elements 552. In some embodiments, when a new origin language text element 551 and corresponding target language text elements 552 are added to translation database 550 in step 626, leveraging module 540 also updates translation database 550 with contextual metadata 553.
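Returning to the error checks of step 623, a few of them might be sketched as follows. The language check here is a crude placeholder (a real deployment would use a language-identification library), and the acceptable word-count range is an assumed ratio.

```python
def quality_check(source: str, target: str, expected_lang: str) -> list[str]:
    """Run a few of the error checks described above; each returned string
    names a failed check."""
    failures = []
    # 1. Untranslated content: target identical to the source.
    if source.strip().lower() == target.strip().lower():
        failures.append("target identical to source")
    # 2. Word count outside an acceptable range of the source's count
    #    (the 0.5x-2.0x bounds are assumptions).
    src_n, tgt_n = len(source.split()), len(target.split())
    if not (0.5 * src_n <= tgt_n <= 2.0 * src_n):
        failures.append("word count out of range")
    # 3. Target language check (placeholder heuristic, not a real detector).
    if expected_lang == "fr" and not any(ch in target for ch in "éèàçù"):
        failures.append("target may not be in French")
    return failures

print(quality_check("Cancel anytime.", "Cancel anytime.", "fr"))
# ['target identical to source', 'target may not be in French']
```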
The contextual metadata 553 is associated with the origin language text element 551 being added to translation database 550, and may include any contextual information associated with the origin language text element 551. For example, and without limitation, contextual metadata 553 may include textual content immediately preceding and/or immediately following the origin language text element 551 in the electronic file 531 from which the element of source text (i.e., origin language text element 551) is extracted. Additionally or alternatively, contextual metadata 553 may include other contextual information associated with the element of source text and/or the electronic file 531 from which the element of source text is extracted. For example, contextual metadata 553 may include a document type of electronic file 531, a particular movie or TV show referenced by electronic file 531, and the like. Contextual metadata 553 for a particular origin language text element 551 can then be employed for determining whether a text element from a different electronic document 531 is an in-context match of the particular origin language text element 551 or just an exact match. In step 627, hub module 530 generates a document with translated content, such as a translated electronic document 521, and passes the translated electronic document 521 to connector module 520. In step 628, connector module 520 transmits the document generated in step 627 to the external application 590 that originally requested the translation. Implementation of method 600 enables the translation of textual content in a partially automated or fully automated fashion. A translation job is segmented into multiple text elements that are individually more likely to be exactly or partially matched by a previously translated text element. As a result, most or all text elements of the translation job may partially or exactly match a previously translated text element in translation database 550, and the previously translated text is then provided to a linguist 595 to assist in the manual translation process. FIG. 7 sets forth a flowchart of method steps for performing a matching score analysis, according to various embodiments of the present invention. Although the method steps are described with respect to the systems of FIGS. 1-5, persons skilled in the art will understand that any system configured to perform the method steps, in any order, falls within the scope of the present disclosure. In some embodiments, the method steps of FIG. 7 are performed as part of step 611 of method 600, described above. As shown, a method 700 begins at step 701, in which leveraging module 540 begins a matching score analysis for one of the matching text elements found for a particular element of source text associated with a particular electronic file 531. In step 702, leveraging module 540 determines whether the matching text element is an exact match of the element of source text of interest. If yes, method 700 proceeds to step 711; if no, method 700 proceeds to step 721. An exact match may be a word-for-word or a character-for-character match of the element of source text of interest. In step 711, leveraging module 540 determines whether the matching text element is an in-context match of the element of source text of interest. If yes, method 700 proceeds to step 712; if no, method 700 proceeds to step 713.
An in-context match may be a word-for-word or character-for-character match of a matching text element with an element of source text, where the matching text element also shares identical or very similar context with the element of source text. The determination of step 711 may be based on suitable contextual metadata 553 that are stored in translation database 550 and are associated with the matching text element. In step 712, leveraging module 540 indicates that the matching text element is an in-context match of the element of source text. For example, metadata associated with the matching text element may be updated accordingly, so that the element of source text can be directly replaced by the matching text element via an automated process. In step 713, leveraging module 540 indicates that the matching text element is an exact match of the element of source text. For example, metadata associated with the matching text element may be updated, so that a linguist 595 who receives the element of source text for translation will also receive the matching text element and the metadata indicating that the matching text element is an exact match, but not an in-context match. Thus, the linguist 595 may only have to confirm that the contextual differences between the element of source text and the matching text element do not bear on the translation of the element of source text, in which case the linguist 595 can simply replace the element of source text with the matching text element. In step 721, leveraging module 540 computes a fuzzy match score for the matching text element, for example using a fuzzy match score application. The fuzzy match score may be based on an edit distance between the element of source text and the matching text element, i.e., on a minimum number of operations required to transform the element of source text into the matching text element, or vice-versa. In some embodiments, the fuzzy matching score may be a percentile-based score, in which an exact matching score of 100% is reduced by one percent for each operation required to transform the element of source text into the matching text element, or vice-versa. In some embodiments, the fuzzy match score application may ignore certain minor inconsistencies when calculating an editing distance, such as consecutive spaces. In some embodiments, the fuzzy match score application may associate a specific minimum editing distance penalty with transforming a first word to a second word when the first and second words share a common word stem. For example, such a transformation may only entail a fuzzy match penalty of one percent. In step 722, leveraging module 540 determines whether the fuzzy match score computed in step 721 for a particular matching text element is a high-fuzzy match score, for example between about 95% and 99%. If yes, method 700 proceeds to step 723; if no, method 700 proceeds to step 724. In step 723, leveraging module 540 indicates that the matching text element is a high-fuzzy match of the element of source text. For example, metadata associated with the matching text element may be updated, so that a linguist 595 who receives the element of source text for translation will also receive the matching text element and the metadata indicating that the matching text element is a high-fuzzy match, but not an exact match.
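The percentile-style scoring of step 721, together with the threshold classification of steps 722 through 726 (the medium- and low-fuzzy determinations are described in the text that follows), might be sketched as below. This sketch counts character-level edit operations; the patent leaves the operation granularity open, and the stem-aware penalty is omitted here.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def fuzzy_match_score(source: str, match: str) -> int:
    """Percentile-based score: start at 100% and subtract one percent per
    edit operation, after collapsing consecutive spaces (one of the minor
    inconsistencies the text says may be ignored)."""
    src = " ".join(source.split())
    ref = " ".join(match.split())
    return max(0, 100 - edit_distance(src, ref))

def classify(score: int) -> str:
    """Thresholds taken from the example ranges given in steps 722-726."""
    if 95 <= score <= 99:
        return "high-fuzzy"
    if 85 <= score <= 94:
        return "medium-fuzzy"
    return "low-fuzzy"

score = fuzzy_match_score("Cancel anytime.", "Cancel any time.")  # 99
print(classify(score))  # high-fuzzy
```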
In step 724, leveraging module 540 determines whether the fuzzy match score computed in step 721 for a particular matching text element is a medium-fuzzy match score, for example between about 85% and 94%. If yes, method 700 proceeds to step 725; if no, method 700 proceeds to step 726. In step 725, leveraging module 540 indicates that the matching text element is a medium-fuzzy match of the element of source text. Thus, a linguist 595 who receives the element of source text for translation will also receive the matching text element and the metadata indicating that the matching text element is a medium-fuzzy match. In step 726, leveraging module 540 indicates that the matching text element is a low-fuzzy match of the element of source text. Thus, a linguist 595 who receives the element of source text for translation will also receive the matching text element and the metadata indicating that the matching text element is a low-fuzzy match. Implementation of method 700 enables the generation of a fuzzy match score for a particular matching text element and/or a designation of the particular matching text element as an exact match or in-context match of an element of source text. As a result, a matching score and/or other matching metadata can be included with each matching text element associated with an electronic document, thereby facilitating the manual translation of elements of source text for which the matching text elements are not in-context matches. As described herein, various functions are performed by localization platform 500. Such functions may be implemented as one or more applications executed by one or more computing devices associated with localization platform 500. For example, a document filtering application may be employed for extracting textual content from a variety of different electronic documents 531, a content segmentation application may be employed for separating extracted textual content into elements of source text, a search application may be employed for searching translation database 550 for matching text elements, a fuzzy match score application may be employed for performing a matching score analysis on matching text elements, and a quality check application may be employed to perform one or more quality checks on translated jobs 533 uploaded by a linguist 595. Such applications may be executed on content server 110 in FIG. 1, control server 120 in FIG. 2, and/or on a stand-alone computing device. One such computing device is described below in conjunction with FIG. 8. FIG. 8 is an illustration of a computing device 800 configured to implement one or more functions of the localization platform of FIG. 5, according to various embodiments. Computing device 800 is configured to translate textual content and facilitate translation of textual content by executing one or more of a document filtering application 831, a content segmentation application 832, a search application 833, a fuzzy match score application 834, a quality check application 835, and/or a leveraging application 836, according to one or more embodiments of the present invention. Leveraging application 836 may include the functionality of any combination of document filtering application 831, content segmentation application 832, search application 833, fuzzy match score application 834, and/or quality check application 835.
Computing device 800 may be any type of device capable of executing application programs including, without limitation, instructions associated with document filtering application 831, content segmentation application 832, search application 833, fuzzy match score application 834, quality check application 835, and/or leveraging application 836. For example, and without limitation, computing device 800 may be a laptop, a tablet, a smartphone, etc. In the embodiment illustrated in FIG. 8, computing device 800 includes, without limitation, a processor 810, input/output (I/O) devices 820, and a memory 830. Processor 810 may be implemented as a central processing unit (CPU), a graphics processing unit (GPU), an ASIC, an FPGA, any other type of processing unit, or a combination of different processing units. In general, processor 810 may be any technically feasible hardware unit capable of processing data and/or executing software applications to facilitate execution of document filtering application 831, content segmentation application 832, search application 833, fuzzy match score application 834, quality check application 835, and/or leveraging application 836, as described herein. Among other things, and without limitation, processor 810 may be configured to execute instructions associated with document filtering application 831, content segmentation application 832, search application 833, fuzzy match score application 834, quality check application 835, and/or leveraging application 836. I/O devices 820 may include input devices, output devices, and devices capable of both receiving input and providing output. Memory 830 may include a memory module or a collection of memory modules. As shown, in some embodiments, some or all of document filtering application 831, content segmentation application 832, search application 833, fuzzy match score application 834, quality check application 835, and/or leveraging application 836 may reside in memory 830 during operation. Computing device 800 may be implemented as a stand-alone chip, such as a microprocessor, or as part of a more comprehensive solution that is implemented as an application-specific integrated circuit (ASIC), a system-on-a-chip (SoC), and so forth. Generally, computing device 800 may be configured to coordinate the overall operation of a computer-based system. In other embodiments, computing device 800 may be coupled to, but separate from, such a computer-based system. In such embodiments, the computer-based system may include a separate processor that transmits input to computing device 800, such as digital images and/or digital videos, and receives output from computing device 800. However, the embodiments disclosed herein contemplate any technically feasible system configured to implement document filtering application 831, content segmentation application 832, search application 833, fuzzy match score application 834, quality check application 835, and/or leveraging application 836, in any combination. In alternative embodiments, rather than being configured as a single machine, computing device 800 may be configured as a distributed computing system, such as a cloud-computing system. Alternatively or additionally, in some embodiments, rather than being configured as one or more stand-alone machines, computing device 800 may be associated with or included in one or more of content servers 110 and/or control servers 120 in FIG. 1.
For example, and without limitation, the functionality of computing device 800 may be incorporated into CPU 204 of content server 110, shown in FIG. 2. In such embodiments, document filtering application 831, content segmentation application 832, search application 833, fuzzy match score application 834, quality check application 835, and/or leveraging application 836 may reside in one or more of content servers 110 and/or control servers 120 during operation. In sum, a localization platform leverages previously translated textual content to reduce or eliminate the amount of manual translation needed for new textual content that requires translation. For an element of source text that requires translation, the localization platform searches a database of previously translated text elements for a text element that is an exact match or a fuzzy match of the element of source text. The localization platform then provides a linguist with both the element of source text and the matching text element, so that the linguist can use the matching text element to assist in the manual translation process. In some embodiments, metadata that quantifies how closely the element of matching text matches the element of source text is also provided to the linguist. At least one advantage of the disclosed techniques is that for new textual content that requires translation, previously translated textual content can be leveraged in an automated process to reduce or eliminate the amount of manual translation of the new textual content that is needed. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Aspects of the present embodiments may be embodied as a system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable processors. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The invention has been described above with reference to specific embodiments. Persons of ordinary skill in the art, however, will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. For example, and without limitation, although many of the descriptions herein refer to specific types of application data, content servers, and client devices, persons skilled in the art will appreciate that the systems and techniques described herein are applicable to other types of application data, content servers, and client devices. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. 15231725 netflix, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 05:26PM Mar 25th, 2022 05:26PM Netflix Consumer Services General Retailers
nasdaq:nflx Netflix Mar 1st, 2022 12:00AM Apr 25th, 2019 12:00AM https://www.uspto.gov?id=US11263305-20220301 Multilayered approach to protecting cloud credentials The disclosed computer-implemented method may include mapping an internal network to identify various nodes of the internal network. The method may further include determining where at least some of the internal network nodes identified in the mapping are located. The method may also include receiving a request for metadata service information from an application hosted on a cloud server instance. The method may then include providing a response to the received request for metadata service information if the determined location of the requesting node is approved or preventing a response to the received request for metadata service information if the determined location of the requesting node is not approved. Various other methods, systems, and computer-readable media are also disclosed. 11263305 1. A computer-implemented method for protecting credentials in a cloud environment comprising: mapping at least a portion of an internal network to identify one or more nodes of the internal network, each node comprising a specific type of electronic device including at least one of a mobile electronic device, a personal computer, or a server node; determining where one or more of the internal network nodes identified in the mapping are located, wherein a first subset of the internal network nodes at the determined location are permitted to receive metadata service information and wherein a second subset of the internal network nodes at the determined location are prevented from receiving metadata service information, wherein the manner in which the internal network nodes' location is determined is variable based on where a request for metadata service information originated; receiving the request for metadata service information from an application hosted on a cloud server instance that is provisioned on at least one of the internal network nodes, wherein the cloud server instance is associated with a specified user role, the user role having associated therewith a plurality of computing systems at various locations within the internal network; and preventing a response to the received request for metadata service information if the requesting internal network node is part of the second subset of the internal network nodes located at the determined location, if the requesting internal network node is identified as being associated with the specified user role, and if the requesting internal network node is identified as being a specific type of electronic device. 2. The computer-implemented method of claim 1, further comprising generating a list that identifies those internal network nodes that were discovered during the mapping, the generated list including network addresses for the nodes identified in the mapping. 3. The computer-implemented method of claim 1, wherein the metadata service information includes at least one of static information or dynamically changeable information. 4. The computer-implemented method of claim 1, wherein the metadata service information comprises credential information for the application. 5. The computer-implemented method of claim 4, wherein the credential information for the application allows the application to access one or more application programming interfaces (APIs) for services provided by the cloud server instance. 6. 
The computer-implemented method of claim 5, further comprising: determining the network location from which at least one of the API calls is received; and allowing or denying the API call based on the determined network location from which the at least one API call was received. 7. The computer-implemented method of claim 1, wherein the location of the network nodes comprises a physical location. 8. The computer-implemented method of claim 1, wherein the location of the network nodes comprises a logical location within the internal network. 9. The computer-implemented method of claim 1, further comprising creating a managed policy that describes one or more regions within the internal network. 10. The computer-implemented method of claim 9, wherein the regions are designated within the managed policy as being approved or being not approved for receiving metadata service information. 11. The computer-implemented method of claim 1, wherein the variable manner in which the internal network nodes' location is determined is based on at least one of a public IP address of the internal network node, a network address translation (NAT) IP address, a virtual private cloud (VPC) endpoint IP address, or a private link address. 12. A system comprising: at least one physical processor; physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: map at least a portion of an internal network to identify one or more nodes of the internal network, each node comprising a specific type of electronic device including at least one of a mobile electronic device, a personal computer, or a server node; determine where one or more of the internal network nodes identified in the mapping are located, wherein a first subset of the internal network nodes at the determined location are permitted to receive metadata service information and wherein a second subset of the internal network nodes at the determined location are prevented from receiving metadata service information, wherein the manner in which the internal network nodes' location is determined is variable based on where a request for metadata service information originated; receive the request for metadata service information from an application hosted on a cloud server instance that is provisioned on at least one of the internal network nodes, wherein the cloud server instance is associated with a specified user role, the user role having associated therewith a plurality of computing systems at various locations within the internal network; and prevent a response to the received request for metadata service information if the requesting internal network node is part of the second subset of the internal network nodes located at the determined location, if the requesting internal network node is identified as being associated with the specified user role, and if the requesting internal network node is identified as being a specific type of electronic device. 13. The system of claim 12, wherein mapping at least a portion of an internal network to identify one or more nodes of the internal network comprises defining one or more regions for the internal network. 14. The system of claim 13, wherein each region of the internal network includes at least one of a network address translation (NAT) gateway address, a virtual private cloud (VPC) identifier, or a VPC endpoint identifier. 15. 
The system of claim 12, wherein preventing the response to the request for metadata service information is further conditioned on receiving valid, up-to-date credentials. 16. The system of claim 12, wherein receiving a request for metadata service information from an application hosted on a cloud server instance further comprises observing a public internet protocol (IP) address for the cloud server instance. 17. The system of claim 12, wherein receiving a request for metadata service information from an application hosted on a cloud server instance further comprises observing a NAT gateway public IP address for the cloud server instance. 18. The system of claim 12, wherein receiving a request for metadata service information from an application hosted on a cloud server instance further comprises observing a private IP address for the cloud server instance. 19. The system of claim 18, further comprising observing which virtual private cloud the request for metadata service information came from for cloud instances deployed on an external subnet with a public IP address. 20. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: map at least a portion of an internal network to identify one or more nodes of the internal network, each node comprising a specific type of electronic device including at least one of a mobile electronic device, a personal computer, or a server node; determine where one or more of the internal network nodes identified in the mapping are located, wherein a first subset of the internal network nodes at the determined location are permitted to receive metadata service information and wherein a second subset of the internal network nodes at the determined location are prevented from receiving metadata service information, wherein the manner in which the internal network nodes' location is determined is variable based on where a request for metadata service information originated; receive the request for metadata service information from an application hosted on a cloud server instance that is provisioned on at least one of the internal network nodes, wherein the cloud server instance is associated with a specified user role, the user role having associated therewith a plurality of computing systems at various locations within the internal network; and prevent a response to the received request for metadata service information if the requesting internal network node is part of the second subset of the internal network nodes located at the determined location, if the requesting internal network node is identified as being associated with the specified user role, and if the requesting internal network node is identified as being a specific type of electronic device. 20 CROSS REFERENCE TO RELATED APPLICATION This application claims priority to and the benefit of U.S. Provisional Patent Application No. 62/756,460, filed Nov. 6, 2018, and also claims priority to U.S. Provisional Patent Application No. 62/669,313, filed May 9, 2018, the disclosures of each of which are incorporated, in their entirety, by this reference. BACKGROUND In many cases, software applications are installed locally on electronic devices. In other cases, software applications may be hosted on the cloud and may not be installed locally on the electronic devices, or those devices may only have small, client-side applications that allow access to the cloud. 
Each of these cloud-hosted applications may be hosted on different cloud instances. These cloud instances are often referred to as “virtual private clouds” or VPCs. Organizations may set up VPCs to host applications for their users. Those users typically log in to the VPCs, providing credentials such as usernames and passwords or biometric information. Once logged in, the users may be able to access data and other resources provided by the cloud-hosted application. In some cases, the user's credentials may be static and may be valid indefinitely. In other cases, the user's credentials may be temporary and may lose their validity after a predefined period (e.g., 1-6 hours). Once the user's credentials have lost their validity, any access to applications hosted on the VPCs will be denied. SUMMARY As will be described in greater detail below, the present disclosure describes methods and systems for protecting credentials in a cloud environment by limiting locations from which specified requests may originate. In one example, a computer-implemented method for protecting credentials in a cloud environment may include mapping an internal network to identify various nodes of the internal network. The method may further include determining where at least some of the internal network nodes identified in the mapping are located. The method may also include receiving a request for metadata service information from an application hosted on a cloud server instance. The method may then include providing a response to the received request for metadata service information if the determined location of the requesting node is approved. Or, the method may include preventing a response to the received request for metadata service information if the determined location of the requesting node is not approved. In some examples, the method may further include generating a list that identifies those internal network nodes that were discovered during the mapping. The generated list may include network addresses for the nodes identified in the mapping. In some examples, the metadata service information may include static information or dynamically changeable information. In some examples, the metadata service information may include credential information for the application. In some examples, the credential information for the application may allow the application to access various application programming interfaces (APIs) for services provided by the cloud server instance. In some examples, the method may further include determining the network location from which API calls are received and may allow or deny the API call based on the determined network location from which the API call was received. In some examples, the location of the network nodes may be a physical location. In some examples, the location of the network nodes may be a logical location within the internal network. In some examples, the method may further include creating a managed policy that describes various regions within the internal network. In some examples, the regions may be designated within the managed policy as being approved or being not approved for receiving metadata service information. 
In addition, a corresponding system for protecting credentials in a cloud environment may include at least one physical processor and physical memory comprising computer-executable instructions that, when executed by the physical processor, may cause the physical processor to: map at least a portion of an internal network to identify various nodes of the internal network, determine where the internal network nodes identified in the mapping are located, receive a request for metadata service information from an application hosted on a cloud server instance, and provide a response to the received request for metadata service information if the determined location of the requesting node is approved, or prevent a response to the received request for metadata service information if the determined location of the requesting node is not approved. In some examples, mapping at least a portion of an internal network to identify nodes of the internal network may include defining various regions of the internal network. Each region of the internal network may include a network address translation (NAT) gateway address, a virtual private cloud (VPC) identifier, and/or a VPC endpoint identifier. In some examples, providing a response to the request for metadata service information may be further conditioned on receiving valid, up-to-date credentials. In some examples, receiving a request for metadata service information from an application hosted on a cloud server instance may further include observing a public internet protocol (IP) address for the cloud server instance. In some examples, receiving a request for metadata service information from an application hosted on a cloud server instance may further include observing a NAT gateway public IP address for the cloud server instance. In some examples, receiving a request for metadata service information from an application hosted on a cloud server instance may further include observing a private IP address for the cloud server instance. In some examples, the computer system may be further configured to observe which virtual private cloud the request for metadata service information came from for cloud instances deployed on an external subnet with a public IP address. In some examples, the computer system may be further configured to observe which virtual private cloud the request for metadata service information came from for cloud instances deployed on an internal subnet with a private IP address. In some examples, the above-described method may be encoded as computer-readable instructions on a computer-readable medium. For example, a computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to map at least a portion of an internal network to identify various nodes of the internal network, determine where the internal network nodes identified in the mapping are located, receive a request for metadata service information from an application hosted on a cloud server instance, and provide a response to the received request for metadata service information if the determined location of the requesting node is approved, or prevent a response to the received request for metadata service information if the determined location of the requesting node is not approved. Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. 
These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure. FIG. 1 illustrates a computing environment in which embodiments described herein may be implemented. FIG. 2 illustrates a computing environment in which web services providers interact with a metadata service provider. FIG. 3 illustrates a computing environment in which internal and external subnets within a network interact with a virtual private cloud (VPC). FIG. 4 illustrates a flow diagram of an exemplary method for protecting credentials in a cloud environment. FIG. 5 illustrates an embodiment of a network architecture which involves network nodes located in various locations. FIG. 6 illustrates an embodiment of an alternative network architecture which involves network nodes located in different buildings. FIG. 7 illustrates an embodiment of an alternative network architecture which involves network nodes located in different logical locations. Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims. DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS The present disclosure is generally directed to protecting credentials in a cloud environment. As will be explained in greater detail below, embodiments of the present disclosure may identify the location of nodes within an internal network. Once the location of these nodes has been determined, the systems described herein may analyze incoming requests for resources and may allow or deny the requests for services based on the location of the nodes within the network. In some cases, these requests for resources may be requests for information from a metadata service. In other cases, these requests for resources may be application programming interface (API) calls from applications. Regardless of which type of request comes in, the systems described herein may analyze and filter these requests and may allow only those that are from specified locations within the network. In at least some traditional cloud-hosting systems, VPC providers may allow requests for resources to come from substantially anywhere on earth. This may allow a whole host of malicious users to send requests directly or through API calls to services such as a metadata service. In some cases, these metadata services may provide credential information to users or applications. These credentials may be used to access cloud resources including application data, database tables, or other private information. As such, it may be beneficial to reduce the number of locations from which requests for metadata service information may be received.
By reducing the number of locations from which such requests may be received, the systems described herein may curtail the number of potentially malicious users that have access to cloud systems and cloud-stored information. This may, in turn, keep legitimate users' data more secure and out of the hands of unwanted users. The following will provide, with reference to FIGS. 1-7, detailed descriptions of systems and methods for protecting credentials in a cloud environment. FIG. 1, for example, illustrates a computing environment 100 that includes a computer system 101. The computer system 101 may be substantially any type of computer system including a local computer system or a distributed (e.g., cloud) computer system. The computer system 101 may include at least one processor 102 and at least some system memory 103. The computer system 101 may include program modules for performing a variety of different functions. The program modules may be hardware-based, software-based, or may include a combination of hardware and software. Each program module may use computing hardware and/or software to perform specified functions, including those described herein below. For example, the communications module 104 may be configured to communicate with other computer systems. The communications module 104 may include any wired or wireless communication means that can receive and/or transmit data to or from other computer systems. These communication means may include hardware radios including, for example, a hardware-based receiver 105, a hardware-based transmitter 106, or a combined hardware-based transceiver capable of both receiving and transmitting data. The radios may be WIFI radios, cellular radios, Bluetooth radios, global positioning system (GPS) radios, or other types of radios. The communications module 104 may be configured to interact with databases, mobile computing devices (such as mobile phones or tablets), embedded or other types of computing systems. The computer system 101 may further include a network mapping module 107. The network mapping module 107 may be configured to map a computer network and identify various nodes 108 within that network. The nodes may include mobile or stationary computing systems, server computers, gateways, routers, internet-of-things (IOT) devices, or other types of electronic devices. The location determining module 109 of computer system 101 may be configured to determine the location of these identified nodes 108. The “location,” as the term is used herein, may refer to a physical location including a building, room, city, state, country, etc., and may include physical global positioning system (GPS) coordinates or other physical location indicators. Additionally or alternatively, the location may refer to a logical location such as behind a firewall or outside of a firewall, behind or outside of a router, part of a specified set of internet protocol (IP) addresses or media access control (MAC) addresses, part of some other logical grouping of nodes, or some other logical location identifier. Each of the identified nodes 108 may thus be classified with a node location 114, whether that location is physical, logical, or both (see the sketch following this passage). The request receiving module 110 of computer system 101 may be configured to receive metadata requests. For example, request receiving module 110 may receive metadata request 123 from a cloud instance 120.
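The mapping and location classification performed by network mapping module 107 and location determining module 109 might look roughly like the following sketch. The subnet ranges, field names, and device examples are illustrative assumptions.

```python
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetworkNode:
    address: str              # IP address discovered during mapping
    device_type: str          # e.g., "server", "personal computer", "mobile"
    building: Optional[str]   # physical location, when known

# Assumed internal address range standing in for the mapped network.
APPROVED_SUBNETS = [ipaddress.ip_network("10.20.0.0/16")]

def logical_location(node: NetworkNode) -> str:
    """Classify a node's logical location by subnet membership."""
    ip = ipaddress.ip_address(node.address)
    if any(ip in net for net in APPROVED_SUBNETS):
        return "internal-approved"
    return "external"

nodes = [NetworkNode("10.20.3.7", "server", "HQ-building-A"),
         NetworkNode("198.51.100.9", "mobile", None)]
locations = {n.address: logical_location(n) for n in nodes}
```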
The cloud instance 120 may be an instance of a private cloud or an instance of a public cloud and may include one or more server computer systems, one or more networking devices, one or more data storage systems (e.g., 122), or other cloud-related hardware. The cloud instance 120 may host one or more applications 121. These applications may be used, for example, by user 125, who may access and interact with these applications 121 (via input 127) using their mobile devices (e.g., 126). The applications 121 may be configured to send metadata requests 123 to computer system 101. The metadata requests 123 may request various kinds of information that is to be used in conjunction with the applications 121. This information may include network information 112, credential information 113, user profile information, application data, or other types of data. In some cases, requests from the cloud instance 120 may come in the form of API requests 124. These API requests may include specific requests for application credentials 113. The metadata service 111 may then provide these credentials 113 to the cloud instance 120. In some embodiments, however, the origin enforcing module 115 of computer system 101 may determine where the metadata request 123 was received from. If the metadata request 123 did not originate from an approved network location, the request may be denied and the application 121 may not receive the requested credentials or other metadata. If the metadata request 123 did, on the other hand, originate from an approved network location, the request may be allowed and the application may receive the requested credentials or other metadata information. In some embodiments, the receiving module 110 and/or the origin enforcing module 115 may be part of or operated on the cloud instance 120. In such cases, the receiving module 110 and/or the origin enforcing module 115 may access API requests 124 sent by the applications 121 and determine whether the requests were sent from approved locations. For instance, when an application is making a request to a web server, a managed policy such as an Identity and Access Management (IAM) managed policy may maintain a whitelist that may be checked by the origin enforcing module 115. The origin enforcing module 115 may check that the API request came from within an environment defined by the IAM policy. If the API request 124 came from an approved location, then the request receiving module would allow the request, provided the requested permissions were granted. This process will be described in greater detail below initially with regard to FIGS. 2-3 and then with regard to method 400 of FIG. 4 and FIGS. 5-7. As shown in FIG. 2, various systems may be put into place to manage and distribute application credentials. As noted above, credential management systems may be designed to prevent credentials from being made available to unauthorized parties. The impact of exposed credentials may depend on the time of exposure, the skill of the individual with the credentials, and the privileges associated with the credentials. The combination of these can lead to anything from website defacement to a massive data breach where the businesses subjected to the breach may sustain heavy financial losses and may even be forced to discontinue business. In the embodiments described herein, a “credential” may be any type of authentication data, token, or other indicator that is used to describe and/or make changes within an account (e.g., a web services account).
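A minimal sketch of such an origin-enforcement check, with a hypothetical whitelist structure standing in for the IAM-style managed policy (none of these names or identifiers come from the disclosure), might look like:

```python
# Hypothetical sketch of the origin-enforcement check described above.
POLICY_WHITELIST = {
    "vpc_endpoints": {"vpce-aaa111"},        # assumed identifiers
    "nat_gateway_ips": {"198.51.100.10"},    # documentation-range IP
}

def origin_approved(api_request):
    """Return True if the request's observed origin matches the policy."""
    if api_request.get("vpc_endpoint") in POLICY_WHITELIST["vpc_endpoints"]:
        return True
    if api_request.get("source_ip") in POLICY_WHITELIST["nat_gateway_ips"]:
        return True
    return False

request = {"source_ip": "198.51.100.10", "action": "GetCredentials"}
print("allowed" if origin_approved(request) else "denied")
```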
In at least some of the embodiments herein, an entity (such as a user or business) may host one or more applications on the cloud. In FIG. 2, for example, these applications may be hosted on elastic cloud 201. These applications may need access to various cloud resources. Access to cloud resources may be controlled via metadata service 203 which may be designed to control access to network information and/or credentials. Some web service providers may provide the ability to assign permissions to a cloud instance through an identity and access management (IAM) role using a role manager 202. This role may be attached to a cloud instance (e.g., 201) through an instance profile, thus providing credentials to the underlying applications running on the cloud instance through the metadata service 203. The metadata service 203 may be a service provided by an entity that itself is configured to provide information for web services (e.g., 206) or applications deployed on cloud servers. As noted above, this metadata service information may include network information, cloud instance identifiers, credentials, or other information. In some cases, the metadata service information may be read-only and static. Each process with network access may be able to communicate with the metadata service by default. The metadata service 203 may include information indicating which availability-zone the user is deployed in, the user's private IP address, user data with which the user launched the cloud instance, and the web service credentials that the application uses for making API calls to the web service provider. These credentials may be temporary session credentials that range in validity from one to six hours (or more). When the expiration for the credentials nears, new session credentials may be generated and made available on the metadata service 203 for the application. This system may provide a substantially seamless experience with continuous access to web service APIs with limited-duration credentials. Software development kits (SDKs) 204 associated with the web service may be programmed to check the metadata service prior to credential expiration to retrieve the new set of dynamic credentials. The metadata service 203 may be accessible inside of the cloud instance 201 using a specified IP address or other identifier. In some cases, the web service provider may provide a logging service that logs API calls made by each application using credentials of a certain user or entity. This logging service may enable governance compliance and auditing. The logging service may identify which entity made the API call and from which location the API call was made. Static or dynamic credentials may be associated with a user in the web services identity and access management (IAM) service 202. The IAM service 202 may allow a user to generate up to two sets of credentials per IAM user. At least in some cases, these credentials may be static and may never expire. As such, the credentials may need to be manually rotated. Because these credentials may never expire, some entities may avoid the use of these credentials to mitigate risk if a credential were to be exposed. Temporary or session-based credentials may be used when operating in the cloud. If a session-based credential is exposed, the potential impact of exposure may be reduced as the credential will eventually expire. Web service providers may associate session-based credentials with IAM roles.
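The refresh-before-expiry behavior attributed to the SDKs above can be sketched as follows; the validity window, refresh margin, and function names are assumptions for illustration, not part of the disclosure:

```python
# Hypothetical sketch of limited-duration session credentials and
# refresh-before-expiry, as described for web service SDKs above.
import time

def issue_session_credentials(valid_seconds=3600):  # 1-6 hours per the text
    return {"token": "example-token", "expires_at": time.time() + valid_seconds}

def get_valid_credentials(current, refresh_margin=300):
    """Return current credentials, or issue new ones if expiration is near."""
    if current is None or current["expires_at"] - time.time() < refresh_margin:
        return issue_session_credentials()
    return current

creds = get_valid_credentials(None)   # initial issue
creds = get_valid_credentials(creds)  # reused until near expiry
print(creds["token"])
```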
The lifecycle of credentials on cloud instances (e.g., 201) may be illustrated, at least partially, in FIG. 2. When a user launches a server 205 with an IAM role, the web service provider may create session credentials that are valid for a specified time period (e.g., 1-6 hours). The elastic cloud instance 201 may retrieve credentials from the metadata service 203 through an API call to a security token service (STS) that retrieves the temporary session credentials. These credentials may be passed on to the metadata service 203 that is relied upon by the cloud instance 201. The web service SDK 204 may retrieve these credentials and use them when making API calls to web services 206. In the embodiments described herein, each API call may be evaluated by the IAM service (e.g., role manager 202) to determine if the role attached to the cloud instance 201 has permission to make that call and if the temporary credential is still valid. If the role has permission and the token has not expired, the call may succeed. On the other hand, if the role does not have the permission or the token has expired, the call may fail. The cloud instance 201 may handle renewal of the credentials and may replace them in the metadata service 203. In at least some embodiments, each temporary credential that is issued by the STS service may be given an expiration timestamp. When an API call is issued, the role manager 202 may validate that the credentials are still valid (not expired) and check the signature. If both validate, the API call may then be evaluated to see if the role has the given permissions assigned. As indicated further in FIG. 3, API calls may come from a variety of locations. In the embodiments described herein, the location from which the API call originated may be evaluated and used as a basis for allowing or denying the request. FIG. 3 illustrates a networking environment in which API calls may originate from a variety of locations. At arrow 1, web services 301 may observe the public IP address of a user's cloud instance (e.g., 305) as the source IP address if the web services instance 305 is deployed in an external subnet (e.g., in a public network with a public IP address). This is because, at least in this embodiment, web services API calls may go directly to the internet 302. At arrow 2, web services 301 may observe the network address translation (NAT) gateway 303 public IP address as the source IP address. In such cases, a user's web services instance 307 may be deployed in an internal subnet 306 (e.g., a private network with no public IP address). This is because, at least in this embodiment, web services API calls may travel through the NAT Gateway 303 in order to reach the internet 302. At arrow 3, web services 301 may observe the private IP address of a user's cloud instance as the source IP address and may also observe information about the VPC and/or VPC endpoint 308 the call went through if the user's web service instance 305 deployed in an external subnet 304 (e.g., a public network with a public IP address) makes a web services API call that goes through a VPC endpoint 308 or Private Link.
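A minimal sketch of this per-call evaluation (expiration check, signature check, then permission check) follows; the signing scheme and all names are illustrative assumptions, and a real system would use proper key management rather than a shared secret:

```python
# Hypothetical sketch of the per-call evaluation described above: a call
# succeeds only if the credential is unexpired, its signature verifies,
# and the attached role holds the needed permission.
import hashlib
import time

SECRET = "demo-secret"  # stand-in only; not a real signing key

def sign(token):
    return hashlib.sha256((token + SECRET).encode()).hexdigest()

def evaluate_api_call(credential, role_permissions, action):
    if credential["expires_at"] < time.time():
        return "fail: credential expired"
    if credential["signature"] != sign(credential["token"]):
        return "fail: bad signature"
    if action not in role_permissions:
        return "fail: permission denied"
    return "success"

cred = {"token": "t1", "expires_at": time.time() + 3600, "signature": sign("t1")}
print(evaluate_api_call(cred, {"storage:GetObject"}, "storage:GetObject"))
```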
At arrow 4, web services 301 may observe the private IP address of a user's cloud instance 307 as the source IP address as well as information about the VPC and/or VPC endpoint 308 the call went through if the user's web services cloud instance 307 deployed in an internal subnet 306 (e.g., a private network with no public IP address) makes a web services API call that goes through a VPC endpoint 308 or private link. Accordingly, in each of these four scenarios, the “location” of where an API call or metadata service request originates may be determined in a different manner. As noted above, in at least some of the embodiments described herein, credentials may be enforced by only allowing API calls or other metadata service information requests to succeed if they originate from a known environment. In a web services environment, this may be achieved by creating an IAM policy that checks the origin of the API call. The systems described herein may be designed to create a managed policy or IAM policy that encompasses a user's entire account across all regions. To do this, the user may describe each region and collect NAT gateway IPs, VPC identifiers, and VPC endpoint IDs to create the policy language for the managed policy or IAM policy. This policy may then be attached to IAM Roles that are to be protected. In some cases, endpoints may be whitelisted using a managed policy attached to a role or the endpoint whitelisting may be applied in the IAM role policy itself. In some embodiments, the user's web service may be exposed publicly through a load balancer. This may allow the user to deploy their cloud instance into the internal subnet and allow the user to attach this policy to their IAM role. Turning now to FIG. 4, a flow diagram is provided of an exemplary computer-implemented method 400 for protecting credentials in a cloud environment. The steps shown in FIG. 4 may be performed by any suitable computer-executable code and/or computing system, including the system illustrated in FIG. 1. In one example, each of the steps shown in FIG. 4 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below. As illustrated in FIG. 4, at step 410, one or more of the systems described herein may map at least a portion of an internal network. For example, a network mapping module may map a network such as network 500 of FIG. 5 to identify various nodes (e.g., 502) of the internal network. At step 420, the location determining module 109 of FIG. 1 may determine where at least some of the internal network nodes identified in the mapping are located. The network nodes 502 may include substantially any type of networking device or electronic computing device. The nodes at any given location (e.g., location 501A, location 501B, or location 501C) may be nodes of the same type or may be nodes of different types. The various locations 501A-501C may be different from each other and may be different physical locations, different logical locations, or a combination of physical and logical locations. The network mapping module 107 of FIG. 1 may map some or all of the nodes at each location 501A-501C within the internal network 500. Throughout the mapping process, the network mapping module 107 may identify various nodes 502 at location 501A, nodes 503 at location 501B, and nodes 504 at location 501C.
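Assembling an account-wide policy from per-region identifiers, as described above, might be sketched as follows; the region names, identifiers, and policy structure are all hypothetical and only loosely mirror IAM-style policy language:

```python
# Hypothetical sketch of building an account-wide policy document from
# per-region NAT gateway IPs, VPC identifiers, and VPC endpoint IDs.
regions = {
    "region-1": {"nat_ips": ["198.51.100.10"], "vpcs": ["vpc-111"],
                 "vpc_endpoints": ["vpce-aaa"]},
    "region-2": {"nat_ips": ["198.51.100.20"], "vpcs": ["vpc-222"],
                 "vpc_endpoints": ["vpce-bbb"]},
}

def build_managed_policy(regions):
    """Collect every approved origin identifier across all regions."""
    policy = {"approved_ips": [], "approved_vpcs": [], "approved_endpoints": []}
    for info in regions.values():
        policy["approved_ips"] += info["nat_ips"]
        policy["approved_vpcs"] += info["vpcs"]
        policy["approved_endpoints"] += info["vpc_endpoints"]
    return policy

print(build_managed_policy(regions))
```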
The network mapping module 107 may also be configured to identify a network's architecture including internal nodes and external nodes, routers, gateways, firewalls, wired networks, wireless networks, individual user devices, IOT devices, and other network devices. After the network mapping module 107 has mapped the various locations of the network 500, and after the location determining module 109 has identified the locations of the nodes (e.g., locations 501A-501C), the method 400 may include receiving a request for metadata service information from an application hosted on a cloud server instance (step 430). Method 400 may then include providing a response to the received request for metadata service information if the determined location of the requesting node is approved (step 440A) or may include preventing a response to the received request for metadata service information if the determined location of the requesting node is not approved (step 440B). For example, location 501A may be an approved location. As such, metadata service information requests received from nodes 502 may be approved and the metadata service (e.g., 111 of FIG. 1) may provide metadata 117 to the cloud instance 120. This metadata 117 may include network information 112 and/or credential information 113. In some embodiments, all of the nodes from a given location (e.g., 501A) may be approved, while in other embodiments, only a portion of the nodes from a given location may be approved, with other nodes at that location prevented from receiving metadata service information. In such cases, the approved nodes may be nodes of a certain type (e.g., mobile devices), while the unapproved nodes may be of another type. In a similar manner, some or all of the nodes 503 of location 501B of FIG. 5 may be approved while some or all of the nodes 504 of location 501C may be unapproved. In at least some embodiments, a managed policy (e.g., 116 of FIG. 1) or IAM policy may be put into place to specify which locations and/or which nodes at each location are approved and which locations and nodes are unapproved. The origin enforcing module 115 may then enforce the rule that only metadata requests originating from approved locations (or approved nodes within those locations) are to be serviced while requests originating from unapproved locations are to be denied. In some embodiments, the method 400 may include an optional step of generating a list that identifies those internal network nodes that were discovered during the mapping. When the network mapping module 107 is mapping the network (e.g., 500), the computer system may generate a list of nodes 502, 503, and/or 504 that were discovered during the mapping. The generated list may include network addresses including IP addresses or MAC addresses for the nodes identified in the mapping. If the metadata request 123 is received from a node located in an approved location, the metadata service 111 may provide the requested metadata 117, potentially including credentials 113 used by an application 121 to make API calls or to access data in some other manner. The metadata service information may include static information or dynamically changeable information. The network information 112 and/or the credentials 113 may be static or may be changed or updated from time to time.
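The decision at steps 440A/440B, including the per-node-type granularity just described, might be sketched like this; the location labels, node types, and policy structure are illustrative assumptions:

```python
# Hypothetical sketch of method 400's decision step with node-type
# granularity. A request is serviced only when its location is approved
# and the requesting node's type is on that location's allowed list.
POLICY = {
    "location-501a": {"approved": True, "node_types": {"mobile", "server"}},
    "location-501c": {"approved": False, "node_types": set()},
}

def service_request(location, node_type):
    rule = POLICY.get(location)
    if rule and rule["approved"] and node_type in rule["node_types"]:
        return {"metadata": "network info + credentials"}  # step 440A
    return None  # step 440B: response prevented

print(service_request("location-501a", "mobile"))  # served
print(service_request("location-501c", "server"))  # denied -> None
```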
In some embodiments, the credential information 113 may correspond to an application and the credential information may allow the application to access various APIs for services provided by the cloud server instance 120. In some cases, once the application has generated and sent the API calls using the credential information 113, the method 400 may determine the network location from which the API calls were received and may allow or deny the API call based on the determined network location from which the API call was received. In some embodiments, when the mapping module 107 of FIG. 1 maps the internal network, the mapping module may further define various regions of the internal network. As noted above, the various regions of the internal network may include physically separate portions and/or logically separate portions. In some cases, each region of the internal network may include its own network address translation (NAT) gateway address, virtual private cloud (VPC) identifier, and/or a VPC endpoint identifier. In some embodiments, such as in internal network 600 of FIG. 6, the mapping module may identify different physical buildings that are part of the internal network 600. For instance, the mapping module may determine that buildings 601A, 601B and 601C are part of internal network 600. These buildings may belong to a university, a corporation, a local or state government, or some other entity. The mapping module may identify the physical location of these buildings (based on wireless signals, for example, such as GPS or WiFi) and may then provide these locations to the origin enforcing module 115 of FIG. 1 which may approve or deny requests based on the physical or logical location from which the requests are received. In FIG. 6, for example, buildings 601A-601C may be on the approved list, while other surrounding or perhaps more remote buildings are not on the approved list. In other cases, some of the buildings in the internal network 600 may be approved, while others may not. For example, buildings 601B and 601C may be approved, while building 601A is not approved. Thus, metadata service requests from nodes within buildings 601B and 601C may be approved and answered by the metadata service 111, while metadata service requests from nodes in building 601A may be denied. In still further cases, some nodes within a building may be allowed while others are denied. The level of granularity may depend on implementation and may be specified in the managed policy 116 or other IAM policy. For instance, nodes from a certain department (e.g., human resources or engineering) may be approved while others are denied. Nodes from a certain floor of a building may be approved while nodes from other floors are denied. Nodes having certain assigned roles or nodes having certain assigned users or certain assigned tasks may be approved or denied as specified in the policy 116. Nodes behind certain firewalls or NAT gateways may be approved, while nodes behind other gateways are denied. For example, as shown in FIG. 7, gateways 705A and 705B have been identified within internal network 700. Nodes 702 and 703 of locations 701A and 701B, respectively, lie (logically) behind gateway 705A. Nodes 704 in location 701C lie behind gateway 705B. In some embodiments, nodes 702 and 703 behind gateway 705A may be approved to receive metadata service information, while nodes 704 behind gateway 705B may be denied.
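The gateway-based grouping of the FIG. 7 example might be sketched as follows, again with purely illustrative identifiers:

```python
# Hypothetical sketch of logical-location grouping by gateway: a node
# inherits approval from the gateway it sits behind.
NODE_GATEWAYS = {"node-702": "gateway-705a", "node-703": "gateway-705a",
                 "node-704": "gateway-705b"}
APPROVED_GATEWAYS = {"gateway-705a"}

def node_approved(node_id):
    return NODE_GATEWAYS.get(node_id) in APPROVED_GATEWAYS

for node in sorted(NODE_GATEWAYS):
    print(node, "approved" if node_approved(node) else "denied")
```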
Or the reverse may be true where nodes 702 and 703 behind gateway 705A are not approved to receive metadata service information and nodes 704 behind gateway 705B are allowed. Similarly, the managed policy 116 or other IAM policy may indicate that certain specific nodes at given locations are approved, while other specified nodes are not approved. It should be noted that network managers or other users may have full control to customize which nodes or types of nodes or locations of nodes are approved to request and receive metadata service information. Moreover, it should be noted that while certain numbers of buildings or locations or nodes are shown and described in these examples, substantially any number of internal networks, locations, nodes, gateways, policies or other components may be used. Thus, in this manner, the metadata service 111 may condition the response to metadata requests on where the request originated. In some embodiments, the metadata service 111 may also condition a response to the request for metadata service information on receiving valid, up-to-date credentials. In some cases, when a metadata service information request is received at the computer system 101, the request may include credentials 113. For example, an application 121 hosted on cloud server instance 120 may send credentials 113 with an API request 124 for metadata service information from metadata service 111. These credentials may be analyzed to determine whether they are valid and up-to-date. If they are valid and up-to-date, and if the API call 124 was received from an approved location, the metadata service 111 may provide metadata service information 117 to the application. When determining the location from which the API call originated, the location determining module 109 may identify the public IP address for the cloud server instance and use that public IP address to determine whether the request is from an approved location. Many other network addresses or identifiers may be used when determining a location from which a metadata service information request originated. Indeed, as mentioned above with regard to FIG. 3, depending on where a node is located within a network, the location determining module of computer system 101 of FIG. 1 may use different network addresses to identify the requester's location. In some embodiments, for example, receiving a request for metadata service information from an application 121 hosted on cloud server instance 120 may include observing a NAT gateway public IP address for the cloud server instance 120. In some cases, receiving a request for metadata service information from an application 121 hosted on cloud server instance 120 may include observing a private IP address for the cloud server instance 120. In some embodiments, the location determining module 109 may observe which virtual private cloud instance (e.g., 120) the request for metadata service information came from among a plurality of cloud instances deployed on an external subnet with a public IP address. In some cases, the location determining module 109 may observe which virtual private cloud the request for metadata service information came from among a plurality of virtual cloud instances deployed on an internal subnet with a private IP address. Other network identifiers may additionally or alternatively be used depending on network architecture. In some embodiments, the above-described method may be encoded as computer-readable instructions on a computer-readable medium.
For example, a computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to map at least a portion of an internal network to identify various nodes of the internal network, determine where the internal network nodes identified in the mapping are located, receive a request for metadata service information from an application hosted on a cloud server instance, provide a response to the received request for metadata service information if the determined location of the requesting node is approved, or prevent a response to the received request for metadata service information if the determined location of the requesting node is not approved. In addition, a corresponding system for protecting credentials in a cloud environment may include at least one physical processor and physical memory comprising computer-executable instructions that, when executed by the physical processor, may cause the physical processor to: map at least a portion of an internal network to identify various nodes of the internal network, determine where the internal network nodes identified in the mapping are located, receive a request for metadata service information from an application hosted on a cloud server instance, provide a response to the received request for metadata service information if the determined location of the requesting node is approved, or prevent a response to the received request for metadata service information if the determined location of the requesting node is not approved. Accordingly, in this manner, systems and methods may be provided for protecting credentials in a cloud environment. The systems and methods described herein may map out internal networks, determine where the various nodes are located (either physically or logically), and may generate a list of which node locations are approved and which locations are not approved to receive metadata service information. Then, when metadata service requests or API calls come in from various locations, the methods and systems described herein may determine where those requests or calls came from and may allow or prevent responses to the requests or calls based on where those requests or calls originated. Managed policies may specify details about which nodes or node types are approved from each location. 1. A computer-implemented method for protecting credentials in a cloud environment comprising: mapping at least a portion of an internal network to identify one or more nodes of the internal network; determining where one or more of the internal network nodes identified in the mapping are located; receiving a request for metadata service information from an application hosted on a cloud server instance; and preventing a response to the received request for metadata service information if the determined location of the requesting node is not approved. 2. The computer-implemented method of claim 1, further comprising generating a list that identifies those internal network nodes that were discovered during the mapping, the generated list including network addresses for the nodes identified in the mapping. 3. The computer-implemented method of claim 1, wherein the metadata service information includes at least one of static information or dynamically changeable information. 4. The computer-implemented method of claim 1, wherein the metadata service information comprises credential information for the application. 5.
The computer-implemented method of claim 4, wherein the credential information for the application allows the application to access one or more application programming interfaces (APIs) for services provided by the cloud server instance. 6. The computer-implemented method of claim 5, further comprising: determining the network location from which at least one of the API calls is received; and allowing or denying the API call based on the determined network location from which the at least one API call was received. 7. The computer-implemented method of claim 1, wherein the location of the network nodes comprises a physical location. 8. The computer-implemented method of claim 1, wherein the location of the network nodes comprises a logical location within the internal network. 9. The computer-implemented method of claim 1, further comprising creating a managed policy that describes one or more regions within the internal network. 10. The computer-implemented method of claim 9, wherein the regions are designated within the managed policy as being approved or being not approved for receiving metadata service information. 11. A system comprising: at least one physical processor; physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: map at least a portion of an internal network to identify one or more nodes of the internal network; determine where one or more of the internal network nodes identified in the mapping are located; receive a request for metadata service information from an application hosted on a cloud server instance; and prevent a response to the received request for metadata service information if the determined location of the requesting node is not approved. 12. The system of claim 11, wherein mapping at least a portion of an internal network to identify one or more nodes of the internal network comprises defining one or more regions for the internal network. 13. The system of claim 12, wherein each region of the internal network includes at least one of a network address translation (NAT) gateway address, a virtual private cloud (VPC) identifier, or a VPC endpoint identifier. 14. The system of claim 11, wherein providing a response to the request for metadata service information is further conditioned on receiving valid, up-to-date credentials. 15. The system of claim 11, wherein receiving a request for metadata service information from an application hosted on a cloud server instance further comprises observing a public Internet protocol (IP) address for the cloud server instance. 16. The system of claim 11, wherein receiving a request for metadata service information from an application hosted on a cloud server instance further comprises observing a NAT gateway public IP address for the cloud server instance. 17. The system of claim 11, wherein receiving a request for metadata service information from an application hosted on a cloud server instance further comprises observing a private IP address for the cloud server instance. 18. The system of claim 17, further comprising observing which virtual private cloud the request for metadata service information came from for cloud instances deployed on an external subnet with a public IP address. 19. The system of claim 17, further comprising observing which virtual private cloud the request for metadata service information came from for cloud instances deployed on an internal subnet with a private IP address. 20.
A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: map at least a portion of an internal network to identify one or more nodes of the internal network; determine where one or more of the internal network nodes identified in the mapping are located; receive a request for metadata service information from an application hosted on a cloud server instance; and prevent a response to the received request for metadata service information if the determined location of the requesting node is not approved. As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor. In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory. In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor. Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks. In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. 
For example, one or more of the modules recited herein may receive data requests to be transformed, transform the data requests, output a result of the transformation to determine an origin for the requests, use the result of the transformation to allow or prevent access to resources, and store the result of the transformation to make further determinations. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device. In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems. The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed. The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure. Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.” 16393958 netflix, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 05:26PM Mar 25th, 2022 05:26PM Netflix Consumer Services General Retailers
nasdaq:nflx Netflix Dec 8th, 2009 12:00AM Dec 7th, 2005 12:00AM https://www.uspto.gov?id=US07631323-20091208 Method of sharing an item rental account An account in an item rental service is shared with others using computer-implemented profiles, subject to computer-enforced constraints. In one embodiment, a method provides for establishing a user account, wherein the user account is associated with an account owner, wherein the user account comprises a first ordered queue indicating two or more items that the account owner desires to rent; receiving a request to add a second ordered queue to the user account, profile member identifying information, and a constraint value; creating and storing a profile record based on the profile member identifying information and the constraint value, wherein the profile record is further associated with a second queue; receiving a request to add a specified rental item to the second queue, wherein the specified rental item does not conform to the constraint value; and adding the specified rental item to the second queue only in response to receiving confirmation by the account owner. 7631323 1. A computer system for renting items, comprising: a computer that is coupled to a digital telecommunications network by a digital telecommunications link; an electronic digital memory in the computer; one or more sequences of computer program instructions stored in the electronic digital memory causing the computer to perform: establishing a user account, wherein the user account is associated with an account owner, wherein the user account comprises a first ordered queue indicating two or more items that the account owner desires to rent; receiving a request to add a second ordered queue to the user account, profile member identifying information, and a constraint value; creating and storing a profile record based on the profile member identifying information and the constraint value, wherein the profile record is further associated with a second queue; receiving a request to add a specified rental item to the second queue, wherein the specified rental item does not conform to the constraint value; and adding the specified rental item to the second queue only in response to receiving confirmation by the account owner. 2. A computer system as recited in claim 1, wherein the user account is associated with a total maximum number of allowed rental items, wherein the first queue is associated with a first maximum number of allowed rental items for the first queue, wherein the second queue is associated with a second maximum number of allowed rental items for the second queue. 3. A computer system as recited in claim 2, wherein user input is received for the first maximum number and the second maximum number, and wherein the first maximum number and the second maximum number are associated with the first queue and the second queue only when a sum of the first maximum number and the second maximum number is less than the total maximum number of allowed rental items. 4. A computer system as recited in claim 1, wherein in response to receiving the request to add the specified rental item to the second queue, the profile member is prompted to provide a password of the account owner, and wherein the specified rental item is added to the second queue only in response to successful validation of the password of the account owner. 5.
A computer system as recited in claim 1, wherein the rental items are movies, and wherein the constraint value specifies a maturity level of a movie. 6. A computer system as recited in claim 1, wherein the rental items are audiovisual programs, and wherein the constraint value specifies any of a Motion Picture Association of America (MPAA) rating of an audiovisual program and a television content rating of the audiovisual program. 7. A computer system as recited in claim 1, wherein the rental items are games, and wherein the constraint value specifies a game rating. 8. A computer system as recited in claim 1, further comprising one or more sequences of computer program instructions stored in the electronic digital memory causing the computer to perform: providing electronic digital information that causes one or more attributes of movies to be displayed; establishing, in electronic digital form, from electronic digital information received over the Internet, the first queue of two or more movies that the account owner desires to rent; causing to be delivered to the account owner up to a specified number of movies based upon the order of the queue; in response to one or more delivery criteria being satisfied, selecting another movie based upon the order of the queue and causing the selected movie to be delivered to the account owner; and in response to other electronic digital information received from the account owner over the Internet, electronically updating the first queue. 9. A computer system as recited in claim 8, wherein updating the first queue comprises changing the order of the two or more movies. 10. A computer system as recited in claim 8, wherein updating the first queue comprises indicating an additional movie. 11. A computer system as recited in claim 8, wherein updating the first queue comprises removing an indication of one or more of the movies. 12. A computer system as recited in claim 8, further comprising one or more sequences of computer program instructions stored in the electronic digital memory causing the computer to perform determining the order of the two or more movies based upon one or more preferences of the account owner. 13. A computer system as recited in claim 8, wherein the delivery of the selected movie comprises delivery by mail. 14. A computer system as recited in claim 8, wherein the delivery of the selected movie comprises delivery by mail on one or more optical media. 15. A computer system as recited in claim 8, wherein the delivery criteria comprises receipt of the movie by mail. 16. A computer system as recited in claim 8, wherein the customer is not required to return the movies within a specified time associated with delivery. 17. A computer system as recited in claim 8, wherein the customer is not charged a fee for retaining one or more movies beyond a specified time associated with delivery. 18. A computer system as recited in claim 8, further comprising one or more sequences of computer program instructions stored in the electronic digital memory causing the computer to perform: establishing over the Internet a rental agreement with the account owner that provides for charging the account owner a periodic fee; selecting another movie based upon the order of the list and causing the selected movie to be delivered to the customer only in response to one or more delivery criteria being satisfied, if the customer is current on the periodic fee. 19. 
A computer system as recited in claim 8, wherein the other electronic digital information indicates one or more delivery criteria being satisfied. 20. A computer system as recited in claim 8, wherein the other electronic digital information comprises one or more selection criteria. 21. A computer system as recited in claim 8, wherein the movies comprise any of motion pictures, television series, documentaries, cartoons, music videos, video recordings of concert performances, instructional programs, and educational programs. 21 FIELD OF THE INVENTION The present invention relates to inventory rental, and more specifically, to approaches for sharing item rental accounts. BACKGROUND OF THE INVENTION Conventional inventory rental models are typically based upon renting items for fixed rental periods and charging late fees for keeping rented items beyond a specified return date. These types of inventory models suffer from several significant limitations. First, conventional rental models require customers to make the decision of what items to rent at substantially the same time as the decision of when to rent the items. An example that illustrates this limitation is a video rental business. Customers go to a video rental store and select particular movies to rent at that time. The customers take the movies home and must return them by a particular due date or be charged a late fee. In this situation, the customers cannot decide what movies to rent before actually renting them. The customers may have a particular movie in mind, but there is no guarantee that the video rental store has the particular movie in stock. Moreover, due dates are inconvenient for customers, particularly for “new release” movies that are generally due back the next day. Given the current demand for inventory rental and the limitations in the prior approaches, an approach for renting items to customers that does not suffer from limitations associated with conventional inventory rental models is highly desirable. In particular, an approach for renting inventory items to customers that allows separation of customers' decisions of what items to rent from when to rent the items is highly desirable. There is a further need for an approach for renting items to customers on a continuous basis that avoids the use of fixed due dates or rental “windows” appurtenant to conventional rental models. There is yet a further need for an approach for renting movies, games and music to customers that is more convenient and flexible to customers than conventional approaches. In certain online rental approaches, customers who desire to rent items from an online rental service establish an account with the rental service, pay a fee, and establish a queue of rental items. A limitation of this approach is that in a multi-person-household, such as a family household, each family member is required to establish a separate account with the service. This approach limits the ability for one member of the household, such as a parent, to view or control the contents of a rental queue established by another member of the household, such as a child. For example, in online movie rental, a parent may wish to prevent a child from adding movies that have MPAA (Motion Picture Association of America) ratings of “PG-13”, “R”, or “NC-17” to the child's queue. Another drawback of this approach is that it limits the ability for any member of the household to enter their own ratings for any movie and receive personalized recommendations based on those ratings. 
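As a rough sketch of the constraint mechanism motivating this disclosure (the rating scale, names, and confirmation callback are assumptions; the patent does not specify an implementation), a profile queue might accept an out-of-constraint item only after owner confirmation:

```python
# Hypothetical sketch: a profile queue enforces a maturity-rating
# constraint, deferring to account-owner confirmation for violations.
RATING_ORDER = ["G", "PG", "PG-13", "R", "NC-17"]

def add_to_profile_queue(queue, item_rating, max_rating, owner_confirms):
    """Append the item unless it exceeds the profile's rating constraint
    and the owner has not confirmed the addition."""
    if RATING_ORDER.index(item_rating) > RATING_ORDER.index(max_rating):
        if not owner_confirms():
            return False  # rejected: constraint violated, no confirmation
    queue.append(item_rating)
    return True

child_queue = []
print(add_to_profile_queue(child_queue, "PG", "PG-13", lambda: False))  # True
print(add_to_profile_queue(child_queue, "R", "PG-13", lambda: False))   # False
print(add_to_profile_queue(child_queue, "R", "PG-13", lambda: True))    # True
```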
Further, the use of individual accounts for multiple persons in a household reduces barriers to changing service providers. When each person in a household has his or her own account with the service provider, any of the persons may elect to change to a competitive service provider without significant effect on the other persons. Service providers would like to create a disincentive for such change. BRIEF DESCRIPTION OF THE DRAWINGS Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which: FIG. 1 is a diagram depicting an approach for renting items to customers according to an embodiment. FIG. 2 is a flow diagram depicting an approach for renting items to customers according to an embodiment. FIG. 3 is a flow diagram depicting a “Max Out” approach for renting items to customers according to an embodiment. FIG. 4 is a flow diagram depicting a “Max Turns” approach for renting items to customers according to an embodiment. FIG. 5 is a diagram depicting an approach for renting audio/video items to customers over the Internet according to an embodiment. FIG. 6 is a flow diagram illustrating an approach for renting audio/video items to customers over the Internet using both “Max Out” and “Max Turns” according to an embodiment. FIG. 7 is a block diagram of a computer system upon which embodiments of the invention may be implemented. FIG. 8 is a block diagram of a server computer system that may be used to implement an example embodiment. FIG. 9A is a flow diagram depicting an overview of a method of sharing an item rental account. FIG. 9B is a flow diagram of a process of assigning a maximum allowed number of rental items to a profile. FIG. 9C is a flow diagram of processing a request to add a rental item to a profile member queue. FIG. 10A is a screen display diagram showing an example user interface display relating to browsing rental items. FIG. 10B is a screen display diagram showing an example user interface display relating to adding a queue or profile to an account. FIG. 10C is a screen display diagram showing an example profile introduction. FIG. 10D is a screen display diagram showing an example user interface display relating to entering attributes of a profile. FIG. 10E is a screen display diagram showing an example user interface display relating to browsing rental items. FIG. 10F is a screen display diagram showing a queue page. FIG. 10G is a screen display diagram showing a confirmation page. DETAILED DESCRIPTION OF THE INVENTION In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of the invention. However, it will be apparent that the invention may be practiced without these specific details. In other instances, well-known structures and devices are depicted in block diagram form in order to avoid unnecessarily obscuring the invention. Various aspects and features of example embodiments of the invention are described in more detail hereinafter in the following sections: (1) functional overview; (2) item selection criteria; (3) item delivery; (4) “Max Out”; (5) “Max Turns”; (6) inventory management; (7) implementation mechanisms; (8) sharing an item rental account. 1. Functional Overview FIG. 1 is a block diagram 100 that illustrates an approach for renting items to customers according to various embodiments described herein.
As used herein, the term “items” refers to any commercial goods that can be rented to customers. Examples of items include movies, music and games stored on a non-volatile memory such as a tape, other magnetic medium, optical medium, read-only memory or the like, and the invention is not limited to any particular type of item. In general, the decision of what items to rent is separated from the decision of when to rent the items. Customers may specify what items to rent using one or more item selection criteria separate from deciding when to receive the specified items. Furthermore, customers are not constrained by conventional rental “windows” and instead can have continuous, serialized rental of items. According to one embodiment, a customer 102 provides one or more item selection criteria to a provider 104 over a link 106. Link 106 may be any medium for transferring data between customer 102 and provider 104 and the invention is not limited to any particular medium. Examples of link 106 include, without limitation, a network such as a LAN, WAN or the Internet, a telecommunications link, a wire or optical link or a wireless connection. The item selection criteria indicate items that customer 102 desires to rent from provider 104. In response to receiving the item selection criteria from customer 102, provider 104 provides the items indicated by the item selection criteria to customer 102 over a delivery channel 108. Delivery channel 108 may be implemented by any mechanism or medium that provides for the transfer of items from provider 104 to customer 102 and the invention is not limited to any particular type of delivery channel. Examples of delivery channel 108 include, without limitation, mail delivery, courier delivery or delivery using a delivery agent. Provider 104 may be centralized or distributed depending upon the requirements of a particular application. According to an embodiment, a “Max Out” approach allows up to a specified number of items to be rented simultaneously to customer 102 by provider 104. According to another embodiment, a “Max Turns” approach allows up to a specified number of item exchanges to occur during a specified period of time. The “Max Out” and “Max Turns” approaches may be used together or separately with a variety of subscription methodologies. The approach just described for renting items to customers is now described with reference to a flow diagram 200 of FIG. 2. After starting in step 202, in step 204, customer 102 creates item selection criteria. In step 206, customer 102 provides the item selection criteria to provider 104. In step 208, in response to provider 104 receiving the item selection criteria from customer 102, provider 104 provides one or more items indicated by the item selection criteria to customer 102. The process is complete in step 210. 2. Item Selection Criteria The one or more item selection criteria provided by customer 102 to provider 104 indicate the particular items that customer 102 desires to rent from provider 104. Thus, the item selection criteria define a customer-specific order queue that is fulfilled by provider 104. According to one embodiment, the item selection criteria specify attributes of items to be provided by provider 104 to customer 102. Item selection criteria may specify any type of item attributes and the invention is not limited to particular item attributes. Examples of item attributes include, without limitation, identifier attributes, type attributes and cost attributes. 
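Item selection criteria defining a customer-specific queue might be sketched as a simple attribute match; the catalog entries and criteria below are illustrative assumptions, not anything prescribed by the patent:

```python
# Hypothetical sketch: selection criteria pick catalog items by attribute,
# defining the customer-specific order queue described above.
catalog = [
    {"id": "m1", "type": "movie", "title": "Title A"},
    {"id": "g1", "type": "game", "title": "Title B"},
]

def build_queue(catalog, criteria):
    """Select catalog items whose attributes satisfy every criterion."""
    return [item for item in catalog
            if all(item.get(k) == v for k, v in criteria.items())]

queue = build_queue(catalog, {"type": "movie"})
print([item["title"] for item in queue])
```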
Item selection criteria may be changed at any time to reflect changes in items that customers desire to rent from a provider. 3. Item Delivery According to one embodiment, items are delivered by provider 104 to customer 102 over delivery channel 108 based upon item delivery criteria. More specifically, the delivery of items from provider 104 to customer 102 is triggered by item delivery criteria being satisfied. The item delivery criteria may include a wide range of criteria and the invention is not limited to any particular item delivery criteria. Examples of item delivery criteria include, without limitation, customer request/notification, customer notification that an item is being returned, customer return of an item, the occurrence of a specified date, the elapsing of a specified period of time or a customer payment. The item delivery criteria may be specified by customer 102 to provider 104 or negotiated by customer 102 and provider 104 as part of a subscription service. For example, a particular subscription service may include item delivery criteria that specifies that a particular number of items are to be delivered monthly. As another example, item delivery criteria may specify that an initial set of items is to be delivered by provider 104 to customer 102 upon initiation of a subscription service and that additional items are to be delivered to customer 102 upon return of items to provider 104. Item delivery criteria may be applied uniformly to all items to be delivered to a customer, or may be item specific. For example, item delivery criteria may specify a particular date, i.e., the third Wednesday of every month, for all item deliveries. Alternatively, separate item delivery dates may be assigned to each item. 4. “Max Out” According to one embodiment, a “Max Out” approach is used to manage the number of items that may be simultaneously rented to customers. According to the “Max Out” approach, up to a specified number of items may be rented simultaneously to a customer. Thus, the “Max Out” approach establishes the size of an inventory of items that may be maintained by customers. The specified number of items may be specific to each customer or may be common to one or more customers. In the present example, if the specified number of items is three, then up to three items may be rented simultaneously by provider 104 to customer 102. If the specified number of items are currently rented to customer 102 and the specified item delivery criteria triggers the delivery of one or more additional items, then those items are not delivered until one or more items are returned by customer 102 to provider 104. According to one embodiment, in situations where the specified number of items are currently rented to customer 102 and the specified item delivery criteria triggers the delivery of one or more additional items, then the one or more additional items are delivered to customer 102 and a surcharge is applied to customer 102. The specified number of items may then be increased thereafter to reflect the additional items delivered to customer 102 and increase the size of the inventory maintained by customer 102. Alternatively, the specified number of items may remain the same and the number of items maintained by customer 102 is returned to the prior level after items are returned to provider 104 by customer 102. When used in conjunction with the “Max Turns” approach described hereinafter, the specified number of items may be unlimited.
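The “Max Out” rule itself can be sketched as a simple capacity check; the function and parameter names are assumptions for illustration only:

```python
# Hypothetical sketch of the "Max Out" rule: deliveries pause once the
# customer holds the specified number of items, resuming on returns.
def next_delivery(items_out, max_out):
    """Return how many additional items may ship right now."""
    return max(0, max_out - items_out)

print(next_delivery(items_out=3, max_out=3))  # 0: wait for a return
print(next_delivery(items_out=2, max_out=3))  # 1: one more item may ship
```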
The “Max Out” approach for managing the number of items that may be simultaneously rented to customers is now described with reference to a flow diagram 300 of FIG. 3. After starting in step 302, in step 304, one or more initial items are delivered to customer 102 to establish the inventory maintained by customer 102. Note that an initial delivery of items is not required and, according to one embodiment, the inventory of customer 102 is incrementally established over time. In step 306, a determination is made whether the item delivery criteria have been satisfied. If not, then the determination continues to be made until the item delivery criteria are satisfied. As described previously herein, the delivery criteria may include customer notification generally, customer notification that an item is being returned, the actual return of an item, the occurrence of a specific date, or that a specified amount of time has elapsed. Once the item delivery criteria are satisfied, then in step 308, a determination is made whether the specified number of items have been delivered. If not, then control returns to step 304 and one or more additional items are delivered by provider 104 to customer 102. If, however, in step 308, the specified number of items have been delivered, then in step 310, a determination is made whether the specified number of items, i.e., the “Max Out” limit, is to be overridden. As previously described, the specified number of items may be overridden by increasing the specified number of items, i.e., the “Max Out” limit, to allow additional items to be delivered to customer 102 and charging a fee to customer 102. Alternatively, the specified number of items is not changed and a surcharge is applied to customer 102. This process continues for the duration of the subscription and is then complete in step 312. 5. “Max Turns” According to one embodiment, a “Max Turns” approach is used to rent items to customers. According to the “Max Turns” approach, up to a specified number of item exchanges may be performed during a specified period of time. For example, referring to FIG. 1, suppose that provider 104 agrees to rent items to customer 102 with a “Max Turns” limit of three turns per month. This means that customer 102 may make up to three item exchanges per month. This approach may be implemented independently of the number of items that a customer may have rented at any given time under the “Max Out” approach. The approach is also independent of the particular item delivery criteria used. According to one embodiment, the “Max Turns” approach is implemented in combination with the “Max Out” approach to rent items to customers. In this situation, up to a specified number of total items are simultaneously rented to customer 102 and up to a specified number of item exchanges may be made during a specified period of time. Thus, using the “Max Out” and the “Max Turns” approaches together essentially establishes a personal item inventory for customer 102 based upon the “Max Out” limit that may be periodically refreshed based upon the “Max Turns” limit selected. In some situations, customer 102 may wish to exchange more than the specified number of items during a specified period. According to one embodiment, in this situation, provider 104 agrees to rent additional items above the specified number to customer 102 and to charge customer 102 for the additional items. For example, suppose that provider 104 agrees to rent items to customer 102 with up to three item turns (exchanges) per month.
If, in a particular month, customer 102 requires two additional turns, then the two additional items are provided to customer 102 and a surcharge is applied to customer 102 for the additional two items. In other situations, customer 102 may not use all of its allotted turns during a specified period. According to one embodiment, customers lose unused turns during a subscription period. For example, if customer 102 has a “Max Turns” limit of four item exchanges per month and only makes two item exchanges in a particular month, then the two unused exchanges are lost and cannot be used. At the start of the next month, customer 102 would be entitled to four new item exchanges. According to another embodiment, customers are allowed to carry over unused turns to subsequent subscription periods. For example, if customer 102 has a “Max Turns” limit of four item exchanges per month and only makes two item exchanges in a particular month, then the two unused exchanges are carried over to the next month. At the start of the next month, customer 102 would be entitled to six new item exchanges, two from the prior month and four for the current month. The “Max Turns” approach for renting items to customers is now described with reference to a flow diagram 400 of FIG. 4. After starting in step 402, in step 404, customer 102 and provider 104 agree upon the terms of the “Max Turns” agreement. Specifically, customer 102 and provider 104 agree at least upon the maximum number of turns that are allowed in a specified period of time. In step 406, in response to one or more item delivery criteria being satisfied, provider 104 provides one or more items to customer 102 over delivery channel 108. Any type of item delivery criteria may be used with the “Max Turns” approach and the invention is not limited to any particular delivery criteria. For example, the initial one or more items may be delivered to customer 102 in response to a subscription payment made by customer 102 to provider 104, the initiation of a specified subscription period, or by request of customer 102 for the initial rental items. The number of initial items must not exceed the terms of the “Max Turns” agreement. In step 408, in response to one or more delivery criteria being satisfied, a determination is made whether additional items can be provided to customer 102 within the terms of the “Max Turns” agreement. For example, if the number of items rented to customer 102 in the current subscription period is less than the agreed-upon “Max Turns,” then additional items can be rented to customer 102 within the terms of the “Max Turns” agreement. In this situation, this determination may be made in response to customer 102 returning one or more items to provider 104, or by customer 102 requesting additional items. If, in step 408, a determination is made that additional items can be rented to customer 102 within the terms of the “Max Turns” agreement, then control returns to step 406 where one or more additional items are rented to customer 102. If, however, in step 408, a determination is made that additional items cannot be rented to customer 102 within the terms of the “Max Turns” agreement, then in step 410, a determination is made whether to override the current agreement terms. If so, then in step 412, the agreement terms are changed to allow for a larger number of turns and customer 102 is charged accordingly, or the terms are left unchanged and a surcharge is applied for the additional items to be delivered.
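The per-period turn accounting just described, including the forfeiture and carry-over embodiments, might be sketched in Python as follows (illustrative only; all names are assumptions, not part of this specification):

# Hypothetical sketch of per-period "Max Turns" accounting.
def turns_available(max_turns, turns_used, carried_over=0, carry_over_allowed=False):
    """Turns remaining in the current subscription period."""
    allowance = max_turns + (carried_over if carry_over_allowed else 0)
    return max(allowance - turns_used, 0)

def close_period(max_turns, turns_used, carry_over_allowed):
    """At the end of a period, unused turns are forfeited or carried over."""
    unused = max(max_turns - turns_used, 0)
    return unused if carry_over_allowed else 0

# Four turns per month, two used: under forfeiture the next month allows 4;
# under carry-over the next month allows 6 (two carried plus four new).
carry = close_period(max_turns=4, turns_used=2, carry_over_allowed=True)
print(turns_available(4, 0, carried_over=carry, carry_over_allowed=True))  # 6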
Control then returns to step 406, where one or more additional items are delivered to customer 102. If, in step 410, a determination is made that the current agreement is not to be overridden, then in step 414, no items are delivered to customer 102 until the next subscription period. For example, the request for additional items may be received at the end of a subscription period and instead of renting the additional items immediately, they are instead delivered during the subsequent subscription period. Control then returns to step 406 where one or more additional items are rented to customer 102, or the process is complete in step 416. The approach for renting items described herein is now described in the context of renting to customers audio/video (A/V) items, such as movies, games and music, stored on various media. FIG. 5 is a diagram 500 that depicts a set of customers 502 that desire to rent A/V items from a provider 504. Customers 502 communicate with provider 504 over links 506, the global packet-switched network referred to as the “Internet” 508, and a link 510. Links 506 and 510 may be any medium for transferring data between customers 502 and the Internet 508 and between the Internet 508 and provider 504, respectively, and the invention is not limited to any particular medium. In the present example, links 506 and 510 may be connections provided by one or more Internet Service Providers (ISPs) and customers 502 are configured with generic Internet web browsers. Links 506 and 510 may be secure or unsecured depending upon the requirements of a particular application. In accordance with an embodiment, customers 502 enter into a rental agreement with provider 504 to rent audio/video (A/V) items 512 from provider 504 according to the “Max Out” and/or “Max Turns” approaches described herein. The invention is not limited to any particular approach for entering into the rental agreement. For example, customers 502 and provider 504 may enter into a rental agreement by mail, telephone or over the Internet, by customers 502 logging into a web site associated with provider 504. Customers 502 create and provide item selection criteria to provider 504 over links 506 and 510 and the Internet 508. The invention is not limited to any particular approach for specifying and providing item selection criteria to provider 504. For example, according to one embodiment, customers 502 provide item selection criteria to provider 504 in one or more data files. According to another embodiment, customers 502 log onto a web site of provider 504 and use a graphical user interface (GUI) to specify attributes of the movies and music that customers desire to rent from provider 504. The item selection attributes may include any attributes that describe, at least in part, movies, games or music that customers 502 desire to rent. For movies, example attributes include, without limitation, title, category, director name, actor name and year of release. For games, example attributes include, without limitation, title and category. For music, example attributes include, without limitation, title, category, artist/group name and year of release. Customers 502 may identify specific movies or music by the item selection criteria, or may provide various attributes and allow provider 504 to automatically select particular movies and music that satisfy the attributes specified.
For example, customers 502 may specify item selection criteria that include horror movies released in 1999 and let provider 504 automatically select horror movies that were released in 1999. As another example, customers 502 may specify item selection criteria that include adventure movies starring Harrison Ford. Customers 502 may also specify an order or priority for the specified item selection criteria. For example, customers 502 may specify specific movie titles and the order in which they want to receive them. As another example, customers 502 may specify that they want to receive a particular number of movies of different types. Once customers 502 and provider 504 have entered into a rental agreement and customers 502 have provided item selection criteria to provider 504, then A/V items 512 are rented to customers 502 over delivery channels 514 in accordance with the terms of the rental agreement. Specifically, according to the “Max Out” approach described herein, an initial set of A/V items 512, such as movies, games and music, are delivered to customers 502 over delivery channels 514 according to the terms of the rental agreement. Subsequent A/V items 512 are delivered whenever the specified item delivery criteria are satisfied. For example, additional A/V items 512 may be delivered upon the return of one or more A/V items 512 to provider 504, a request from customers 502, the arrival of a particular date, e.g., a specific day of the month, or the expiration of a specified period of time, e.g., fifteen days. In accordance with the “Max Out” approach described herein, once the maximum number of A/V items 512 have been rented to a particular customer 502, then no additional A/V items 512 are rented until one or more rented A/V items 512 are returned to provider 504, or unless a surcharge is applied to the particular customer 502. Alternatively, the rental agreement between the particular customer 502 and provider 504 may be modified to increase the maximum number of A/V items 512 that may be rented simultaneously to the particular customer 502. The rental agreement between customers 502 and provider 504 may also specify a maximum number of turns in combination with the “Max Turns” approach. In this situation, a maximum number of turns restricts how quickly customers 502 may refresh their A/V item 512 inventories. For example, suppose that a particular customer 502 agrees with provider 504 to rent up to four movies with a maximum of four turns per month. Under this agreement, the particular customer 502 may maintain a personal inventory of up to four movies and rent four new movies per month. Thus, the particular customer 502 can completely “replace” his personal inventory once per month. If the particular customer 502 agreed to a maximum of up to eight turns per month, then the particular customer 502 would be able to completely replace his personal inventory twice per month. Provider 504 is illustrated as a single entity for purposes of explanation only. Provider 504 may be centralized or distributed depending upon the requirements of a particular application. For example, provider 504 may be a central warehouse from which all A/V items 512 are provided. Alternatively, provider 504 may be implemented by a network of distributed warehouses. FIG. 6 is a flow diagram that illustrates an approach for renting A/V items 512, e.g., movies, to customers over a communications network such as the Internet using both “Max Out” and “Max Turns” according to an embodiment. Referring also to FIG.
5, after starting in step 602, in step 604, a customer 502 enters into a rental agreement with provider 504. In the present example, customer 502 uses a generic web browser to access an Internet web site associated with provider 504 and enter into a rental agreement that specifies that customer 502 may maintain a personal inventory of four movies (“Max Out” of four) and receive up to four new movies per month (“Max Turns” of four). Furthermore, the rental agreement specifies that new movies will be delivered upon return of a rented movie from customer 502, i.e., the delivery criteria is a return of a movie by the customer. In step 606, customer 502 creates and provides movie selection criteria to provider 504 that indicates movies that customer 502 desires to rent. For example, the movie selection criteria may specify particular movie titles that customer 502 desires to rent. The movie selection criteria may also specify an order or priority in which customer 502 wishes to rent the movies. Instead of identifying particular movie titles, the movie selection criteria may specify movie preferences for customer 502, e.g., types of movies, directors, actors, or any other movie preferences or attributes. In this situation, provider 504 automatically selects particular titles that satisfy the movie selection criteria. For example, the movie selection criteria may specify a preference for action movies starring a particular actor, with a preference for “new release” movies. Provider 504 attempts to provide movies to customer 502 that best satisfy the preferences indicated by the movie selection criteria. In step 608, one or more initial movies 512 are delivered to customer 502 over delivery channel 514. The one or more initial movies 512 may be delivered to customer 502 via mail, courier, delivery agent or any other suitable means negotiated between customer 502 and provider 504, and the invention is not limited to any particular type of delivery mechanism. For purposes of explanation only, it is presumed in the present example that movies are mailed between customer 502 and provider 504. The one or more initial movies 512 establish the personal movie inventory of customer 502. Customer 502 may choose to receive any number of movies up to the “Max Out” limit of four movies. Typically, customer 502 will choose to receive four movies in the initial delivery. Once the one or more initial movies 512 have been mailed to customer 502, then in step 610, a determination is made whether any movies 512 have been returned by customer 502 to trigger another movie delivery. In the present example, the delivery of additional movies is triggered by the receipt, e.g., via mail, of one or more movies from customer 502. In the situation where customer 502 elects to not receive the maximum number of movies 512 in the initial delivery, then the delivery of additional movies 512 may also be triggered by a request from customer 502 for additional movies 512. For example, customer 502 may notify provider 504 via telephone, email or by accessing the web site associated with provider 504. If, in step 610, a determination is made that one or more movies 512 were received from customer 502, then in step 612, a determination is made whether the maximum number of turns (“Max Turns”) limit has been reached for the current cycle. In the present example, a determination is made whether four or more movies have been mailed in the current month.
If not, then control returns to step 608, where one or more additional movies 512 are mailed to customer 502 via delivery channel 514 up to the “Max Out” limit of four. If, in step 612, a determination is made that the “Max Turns” limit has been met for the current cycle, i.e., in the present example, four movies 512 have been mailed to customer 502 in the current month, then in step 614 a determination is made whether to override the current “Max Turns” limit. If so, then in step 616, a surcharge is applied to customer 502 and control returns to step 608 where the additional movies 512 are mailed to customer 502. If not, then in step 618, a determination is made whether to continue the subscription service. If so, then no additional movies are mailed to customer 502 during the current cycle, e.g., the current month, and control returns to step 610. If, in step 618, a determination is made that service is not to be continued, then the process is complete in step 620. In some situations, customer 502 may desire to increase or decrease the size of the personal movie inventory of customer 502 by changing the current “Max Out” limit. According to one embodiment, customer 502 notifies provider 504, e.g., by telephone, mail, email or by accessing the web site associated with provider 504, that customer 502 wishes to change the “Max Out” limit. The movie rental agreement between customer 502 and provider 504 is then modified to reflect the change of the “Max Out” limit. In the situation where the “Max Out” limit is increased, then additional movies 512 may be immediately mailed to customer 502. 6. Inventory Management The approach described herein for renting items to customers provides superior inventory management to prior approaches. Specifically, the use of item selection criteria provides for efficient inventory management by allowing the greatest number of items to be rented at any given time. Moreover, the greatest number of customers are provided with their most preferred items. For example, customers may specify priorities for the items indicated by the item selection criteria. Thus, if a particular customer's first choice is not available, or already rented, then the item having the next highest priority can be rented to the particular customer. According to one embodiment, customers may indicate items that are not yet available for rent. Then, the items are delivered to customers when they become available. For example, referring again to FIG. 5, suppose that a particular customer 502 desires to rent an as-yet-unreleased movie entitled “ABC.” The particular customer 502 indicates this movie to provider 504 by the item selection criteria. Since the movie ABC is not yet available, it cannot be delivered to the particular customer 502. However, when the movie ABC does become available, it can be shipped immediately to the particular customer 502, as well as other customers 502 who may have also requested the movie. This allows provider 504 to maximize the number of items rented while ensuring that customers 502 are able to rent the highest priority items that they requested. According to another embodiment, as yet unknown items may also be rented by specifying attributes of the unknown items. For example, the particular customer 502 may request to rent the next new movie of a particular director, for which the exact name is unknown to the particular customer.
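The priority-based fulfillment described in this section, including requested items that are not yet available, might be sketched in Python as follows (illustrative only; the queue and availability structures are assumptions, not part of this specification):

# Hypothetical sketch: ship the highest-priority item in a customer's
# queue that is both released and in stock.
def next_item_to_ship(queue, available):
    """queue: item titles in customer priority order.
    available: set of titles currently released and in stock."""
    for title in queue:
        if title in available:
            return title
    return None  # nothing shippable yet, e.g., only unreleased titles queued

queue = ["ABC", "Movie X", "Movie Y"]        # "ABC" is not yet released
available = {"Movie X", "Movie Y"}
print(next_item_to_ship(queue, available))   # -> 'Movie X'
available.add("ABC")                         # when "ABC" becomes available...
print(next_item_to_ship(queue, available))   # -> 'ABC' ships immediately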
As another example, the particular customer 502 may request to rent the next album of a particular group that is currently in process and does not yet have a title. 7. Implementation Mechanisms The approach described herein for renting items to customers is applicable to any type of rental application and (without limitation) is particularly well suited for Internet-based rental applications for renting movies and music to customers. The invention may be implemented in hardware circuitry, in computer software, or a combination of hardware circuitry and computer software and is not limited to a particular hardware or software implementation. FIG. 7 is a block diagram that illustrates a computer system 700 upon which an embodiment of the invention may be implemented. Computer system 700 includes a bus 702 or other communication mechanism for communicating information, and a processor 704 coupled with bus 702 for processing information. Computer system 700 also includes a main memory 706, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 702 for storing information and instructions to be executed by processor 704. Main memory 706 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 704. Computer system 700 further includes a read only memory (ROM) 708 or other static storage device coupled to bus 702 for storing static information and instructions for processor 704. A storage device 710, such as a magnetic disk or optical disk, is provided and coupled to bus 702 for storing information and instructions. Computer system 700 may be coupled via bus 702 to a display 712, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704. Another type of user input device is cursor control 716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 704 and for controlling cursor movement on display 712. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. The invention is related to the use of computer system 700 for renting items to customers. According to one embodiment of the invention, the renting of items to customers is provided by computer system 700 in response to processor 704 executing one or more sequences of one or more instructions contained in main memory 706. Such instructions may be read into main memory 706 from another computer-readable medium, such as storage device 710. Execution of the sequences of instructions contained in main memory 706 causes processor 704 to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in main memory 706. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software. The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor 704 for execution. 
Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 710. Volatile media includes dynamic memory, such as main memory 706. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 702. Transmission media can also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 704 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 700 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector coupled to bus 702 can receive the data carried in the infrared signal and place the data on bus 702. Bus 702 carries the data to main memory 706, from which processor 704 retrieves and executes the instructions. The instructions received by main memory 706 may optionally be stored on storage device 710 either before or after execution by processor 704. Computer system 700 also includes a communication interface 718 coupled to bus 702. Communication interface 718 provides a two-way data communication coupling to a network link 720 that is connected to a local network 722. For example, communication interface 718 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 718 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 718 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link 720 typically provides data communication through one or more networks to other data devices. For example, network link 720 may provide a connection through local network 722 to a host computer 724 or to data equipment operated by an Internet Service Provider (ISP) 726. ISP 726 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 728. Local network 722 and Internet 728 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 720 and through communication interface 718, which carry the digital data to and from computer system 700, are exemplary forms of carrier waves transporting the information.
Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722 and communication interface 718. In accordance with the invention, one such downloaded application provides for the renting of items to customers as described herein. The received code may be executed by processor 704 as it is received, and/or stored in storage device 710, or other non-volatile storage for later execution. In this manner, computer system 700 may obtain application code in the form of a carrier wave. The novel approach described herein for renting items to customers provides several advantages over prior approaches for renting items to customers. First, the decision of what items to rent may be separated from the decision of when to rent the items. Customers may specify what items to rent using the item selection criteria and receive the items at a future point in time, without having to go to the provider to pick up the items. The selection criteria may be user specific and may indicate a desired fulfillment sequence. Furthermore, customers are not constrained by conventional due dates and instead may establish continuous, serialized rental streams of items. The approach also allows more efficient inventory management. The “Max Out” approach for inventory management allows users to maintain their own inventory of items that are periodically replaced by other items according to specified event criteria. The event criteria that trigger sending another item to a customer are very flexible and may be tailored to the requirements of a particular application. For example, as described herein, the event criteria may include a return of any of the items currently in use by the customer or merely customer notification. This is very convenient in the context of movie rentals since the return of a movie to the provider automatically triggers the sending of another movie to the customer. The “Max Turns” approach for inventory management, when used alone or in combination with “Max Out,” provides even greater flexibility for customers and providers. The maximum number of turns can be selected individually for particular customers depending upon their particular needs. The “Max Out” and “Max Turns” approaches provide great flexibility in establishing subscription models to satisfy the needs of a particular application. Specifically, the size and replacement frequency of customer inventories can be tailored to each customer with individualized subscription plans. In the foregoing specification, the invention has been described as applicable to an implementation anticipating Internet based ordering and mail or other long-distance delivery of the items, where the special advantages of the method are very attractive. However, the same invention may be applied in a more conventional video, games, or music rental-store setting, where subscription customers may be allowed rentals of a specified number of movies, games, or music selections at any time, and/or in one subscription period, without rental return due dates, in exchange for a periodic rental subscription fee. 8. Sharing an Item Rental Account 8.1 Overview of Profiles Approach In one embodiment, a method of sharing an item rental account is provided.
Sharing an item rental account may comprise establishing a unique user identity for each of two or more persons in association with an account that has been previously established with the service provider. Each unique user identity is described in an account profile. Each account profile may be stored as a record in a database. Use of profiles provides individualized or personalized features in an item rental system beyond queues of rental items. For instance, an account owner, such as a parent, can set up a different identity, distinguished by a unique username and password combination, for each member of the parent's household. Each profile is associated with a subordinate queue in a database of the item rental system. The parent can assign, to each profile, a maximum number of items that the person associated with an identity can obtain from the service provider at any one time. An item returned to an item rental service provider, and associated with a particular subordinate queue, is replaced by the next item in that queue. The account owner administers permissions and other characteristics of each profile and identity. The item rental service provider sends all rental items to the same postal address, and bills all transactions in the account to the same credit card. As another example of individualized or personalized features, personalized ratings and recommendations may be stored in association with a profile. Thus, each profile member rates rental items, the ratings are stored in a database in association with profile member identifying information, and the ratings are used to generate personalized rental item recommendations for each profile member without regard to the rental history of other profile members in the same account. The preceding example has described the use of account profiles for multi-person households such as families with children. In an alternative embodiment, a single-person household can establish a plurality of profiles for an account, so that a single person can have plural separate queues of rental items. In another alternative, a two-person household that wants a shared queue with personalized recommendations or reviews can establish two profiles within a single account. In one embodiment, each profile is associated in a database of an item rental service provider with a Queue of rental items, Ratings of rented items, a Rental History of previously rented items, and one or more personalized Recommendations of items that could be rented in the future. Thus, a person or identity associated with a profile can rate movies individually, receive item recommendations based upon past ratings, be identified as a unique individual within an online user community, and schedule or order the delivery of movies in that profile's Queue. In an embodiment, an account owner can delegate certain Account administration and Queue administration functions to a particular profile in the account. In an embodiment applied in the context of renting audiovisual items such as movies and games, an account owner can set limits on the maturity level of movies that other identities can place in the queues of those identities. In another embodiment, each identity in an account receives age-appropriate online page presentations from the rental item service provider.
For example, a server and appropriate software of the rental item service provider can determine that a current session involves a young child profile within an account and, in response, can present only web pages for G-rated movies that include larger buttons, cartoon-like artwork, simplified page layout, etc. With the approach herein, persons associated with different profiles of an account may be resistant to any effort by the account owner to move away from the service provider to another service provider. Thus, the service provider in effect establishes a relationship with each identity of a profile. When persons associated with profiles interact in an online community product, each person can establish online relationships to particular friends, rather than establishing a relationship with a friend's household. 8.2 Features of an Example Profiles Approach In one embodiment, a method of sharing an item rental account is implemented in an online item rental service that provides, through one or more appropriate networked servers and application programs, one or more of the following operational features for each profile identity: 1. Personalized rental item list, or queue, including functions to add and delete items from the Queue, view an item ship order, and view a rental history. 2. Personalized email notifications. 3. Storage of personalized favorite data values associated with rental items. For example, when rental items are movies, personalized favorite data values may include favorite actors, actresses, directors, genres, etc. 4. Personalized ratings and recommendations. 5. Personalized community relationships. 6. A maturity value or other constraint, managed by the account owner for all queues. 7. Personalized mailing labels for rental items that are delivered by mail. 8. Other account owner controls. 9. Allow profile members to receive shipments and receive system notices, newsletters, and select other communications. 10. Allow owners to remove a username/password combination, designating a Profile, from their Accounts. 11. Allow profile members to change a Profile into a stand-alone Account. 12. Enable an Account Owner to identify a Co-Owner who can participate in Account administration. As described herein, a “User” or “Profile Member” is a subscriber to an item rental service who has unique authentication credentials into the service. Each User has a Profile. An “Account” is a billing relationship established by a customer with the item rental service; in an embodiment, an Account has at least one and no more than five Profiles. Other embodiments may have any number of Profiles in an Account. An “Owner” is the User that establishes and manages the Account and is responsible for the billing relationship. A “Queue” is a personalized list of rental items that a User manages at the item rental service. A “Subordinate” refers to any User identity established by the Owner within an Account. Embodiments may be used with any kind of rental item. Embodiments in which rental items are movies may use any format for such movies, including DVD, electronic download, etc. 8.3 Structural Overview FIG. 8 is a block diagram of a server computer system that may be used to implement an example embodiment. One or more users 802A, 802B are coupled through a network 803 to a service provider server computer 804.
In this context, users 802A, 802B broadly represent any end station device suitable for connecting through a network to the server computer 804 and performing the functions described herein, such as a personal computer, workstation, wireless device, etc. For purposes of illustrating a clear example, FIG. 8 shows two users; however, embodiments may serve any number of users. Network 803 and links from users 802A, 802B to service provider computer 804 include, without limitation, a network such as a LAN, WAN or the Internet, a telecommunications link, a wire link, optical link or a wireless connection. Server computer 804 includes a user account database 806 and profile management logic 820. User account database 806 comprises one or more user accounts 808A, 808B, 808N; any number of user accounts may exist in an embodiment. Each user account 808A, 808B, 808N is associated with one or more users, such as users 802A, 802B or others. For each user account 808A, 808B, 808N, one individual is designated as an account owner, and one or more other individuals are designated as profile members. A profile 810A, 810B is associated with each of the account owner and the other individuals, and one or more profiles are associated with an account. For example, a first profile 810A is associated with user 802A, who is an account owner, and a second profile 810B is associated with user 802B, who is a profile member of the same user account 808A. Profiles 810A, 810B are both associated with user account 808A, as indicated by arrows. Generally, an account owner is responsible for payment to a service provider for rental item services, and controls attributes of all profiles associated with an account. In contrast, a profile member has a separate user identifier, item rental queue, and other attributes as described in sections 8.1 and 8.2 above, but is subject to limitations that the account owner sets, and is not responsible for payment. User accounts 808B, 808N each have an account owner, and may have zero or any number of profile members. Each user account has a “max out” value 809 associated therewith, which indicates the maximum number of rental items that the account may receive at a time. Each profile comprises a queue, a maturity level value, a value indicating the maximum number of rental items that an associated profile member is allowed to rent at a time (“max rental items out”), and one or more other attributes. For example, first profile 810A comprises a queue 812A, a maturity level value 814A, a max rental items out value 816A, and one or more other attributes 818A. Queue 812A comprises an ordered list of rental items of the kind described above in section 1 through section 5. The maturity level value 814A indicates the highest item maturity level that a profile member is allowed to rent. In an embodiment in which rental items are movies, the maturity level value 814A may indicate the highest movie rating that a movie may have for a profile member to rent that movie. For example, maturity level value 814A may store an MPAA rating value such as G, PG, PG-13, R, NC-17, etc. If the maturity level value 814A is PG, then a profile member of first profile 810A may rent only G-rated or PG-rated movies; however, a profile member of second profile 810B may have a different maturity level value 814B that allows renting PG-13 or R movies. Alternatively, a profile can store ratings under the TV rating system (TV-MA, TV-14, etc.).
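A maturity level comparison of the kind just described might be sketched in Python as follows (illustrative only; the rating-order list and function name are assumptions, not part of this specification):

# Hypothetical sketch of the maturity-level check, using the MPAA rating order.
MPAA_ORDER = ["G", "PG", "PG-13", "R", "NC-17"]

def within_maturity_limit(item_rating, profile_limit):
    """True when the item's rating does not exceed the profile's limit."""
    return MPAA_ORDER.index(item_rating) <= MPAA_ORDER.index(profile_limit)

print(within_maturity_limit("PG", "PG"))      # True: PG allowed under a PG limit
print(within_maturity_limit("PG-13", "PG"))   # False: requires owner override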
Maturity level values 814A, 814B are examples of constraints that a profile may store. In other embodiments, a profile stores a constraint other than a maturity level value. Thus, embodiments are not limited to the use of maturity level values as constraints on rental activity for profile members; any other appropriate, desired or useful constraint may be used. Examples of other constraints that may be used include rental item genre, rental item media format, rental item length, parental advisory warning values, video game rating values, etc. The max rental items out value 816A specifies the largest number of rental items that the profile member of the first profile 810A may receive at a time. The max rental items out value 816A is some number equal to or less than the max out value 809 for the user account 808A of profile 810A. Further, the sum of the max rental items out value 816A of first profile 810A and the max rental items out value 816B of second profile 810B is equal to max out value 809. Thus, if user account 808A is allowed four (4) rental items out at a time, max out value 809 is “4” and max rental items out values 816A, 816B may be any combination of values that equals 4. The sum of values 816A, 816B could be less than max out value 809, but such a configuration would represent less than optimal usage of the user account 808A. The other attributes 818A, 818B may store any other profile values that are found useful or convenient, such as the age of a person associated with a profile, a shipping address for a person associated with a profile, a date on which the profile was created, a flag indicating whether a person associated with a profile is participating in online community features, etc. Other attributes 818A, 818B may indicate that a profile is associated with one of a plurality of alternative means of delivery. For example, if a rental service contemplated fulfillment of audiovisual items via either physical delivery or electronic delivery to one or more TV set-top boxes or other customer premises equipment, information in profiles can designate target set-top boxes for specific movies. As a specific example, assume a household owns two TV set-top boxes: one in the living room for the parents' use and one in the kids' room for kids' use. The parents might associate the parents' set-top box with the parents' profile and queue, and the kids' set-top box with the kids' profile and queue. Any movies in the parents' queue would be fulfilled either by DVD in a physical mailer addressed to the parents, or by electronic delivery to the parents' set-top box, while any movies in the kids' queue would be delivered either by DVD in a physical mailer addressed to the kids, or by electronic delivery to the kids' set-top box. Values of other attributes 818A, 818B may designate the particular mechanism for delivery. Various other ways to combine binding of set-top boxes and profile queues are contemplated. For example, in various embodiments specific queues are designated for delivery by DVD only, or delivery to the set-top box only. In another embodiment, rules in a profile can designate particular queue entries for electronic delivery only, bound to profiles associated with specific users rather than to specific hardware devices.
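The account and profile structures of FIG. 8, together with the constraint that the per-profile “max rental items out” values sum to no more than the account's “max out” value, might be sketched in Python as follows (illustrative only; class and field names are assumptions, not the patent's reference numerals):

# Hypothetical sketch of the account/profile data model of FIG. 8.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Profile:
    name: str
    queue: List[str] = field(default_factory=list)
    maturity_level: str = "G"
    max_rental_items_out: int = 0

@dataclass
class Account:
    owner: str
    max_out: int                    # account-wide "max out" value
    profiles: List[Profile] = field(default_factory=list)

    def allocation_valid(self):
        """Per-profile allocations may not exceed the account's max out."""
        return sum(p.max_rental_items_out for p in self.profiles) <= self.max_out

acct = Account(owner="Parent", max_out=4)
acct.profiles = [Profile("Parent", max_rental_items_out=3),
                 Profile("Kid", maturity_level="PG", max_rental_items_out=1)]
print(acct.allocation_valid())      # True: 3 + 1 == 4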
In one embodiment, user accounts, profiles, and the data structures and values within profiles are implemented using tables and relationships in a relational database system, such as Oracle, Microsoft SQL Server, etc. Profile management logic 820 comprises one or more computer programs, other software elements, or processes that implement the functions that are described further herein. 8.4 Functional Overview FIG. 9A is a flow diagram depicting an overview of a method of sharing an item rental account. In step 902, a request is received to add a queue or profile to an item rental account. In one embodiment, a user interface of an online item rental system provides an “Add Queue” option which, when selected by an owner or user of an item rental account, communicates a request to add a further queue to the account. Alternatively, a functionally equivalent user interface option may be termed an “Add Profile” option. In step 904, profile identifying information is received. Step 904 may involve receiving data specifying a name for a profile, a sign-in name, a password, a shipping address, or any other suitable combination of values that uniquely identifies a profile. In step 906, a maturity level indicator is received. Step 906 may involve receiving user input for a value for maturity level value 814A as described above in connection with FIG. 8. In embodiments for rental items for which a maturity level indicator is not needed, step 906 may involve receiving user input for a constraint other than maturity level, or step 906 may be omitted. In step 908, a community participation indicator is received. Step 908 may involve receiving user input that indicates whether to allow the associated profile member to participate in online community features of the item rental system. Examples of online community features include sharing queue contents, notifying other account owners or profile members of item rental activity, instant messaging, writing reviews, communicating with friends, etc. In an embodiment, the community participation indicator is “disabled” for subordinates by default, but “enabled” for the Account Owner. In step 910, values for one or more other profile attributes are received. Other attributes may include the age of a person associated with a profile, a date on which the profile was created, etc. Such other attributes may be received through user input or may be generated by the item rental system for an account profile record. The values received in steps 904-910 all may be received in a single user interface display screen, or the method may involve displaying a user interface dialog or a succession of screens in which the data is collected. In step 912, a profile record and associated item rental queue are created and stored in a database of the item rental system. The profile record may have the values shown in FIG. 8. FIG. 9B is a flow diagram of a process of assigning a maximum allowed number of rental items to a profile. Using the process of FIG. 9B, an account owner can specify how many rental items the item rental service should send at a time to the profile member for a newly created profile. In one embodiment, after performing step 912 of FIG. 9A, the item rental system automatically generates and displays a user interface screen that indicates the maximum number of items that the account may have out at a time, and that prompts the user to enter the maximum number of items that each profile may have out at a time.
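Steps 902-912 of FIG. 9A might be sketched in Python as follows (illustrative only; the in-memory database, field names, and the five-profile limit check are assumptions based on the embodiments described above):

# Hypothetical sketch of the FIG. 9A flow: collect profile-identifying
# information and store a profile record with its own empty queue.
database = {"accounts": {}}

def add_profile(account_id, name, sign_in, password, maturity="G",
                community=False, **other_attrs):
    account = database["accounts"].setdefault(account_id, {"profiles": {}})
    if len(account["profiles"]) >= 5:          # one embodiment's limit
        raise ValueError("an account may hold at most five profiles")
    account["profiles"][sign_in] = {
        "name": name, "password": password,    # step 904
        "maturity_level": maturity,            # step 906
        "community": community,                # step 908
        **other_attrs,                         # step 910
        "queue": [],                           # step 912
    }

add_profile("acct-1", "Jane Profile", "jane", "secret", maturity="PG-13")
print(list(database["accounts"]["acct-1"]["profiles"]))  # ['jane']
# Per FIG. 9B, the owner is then prompted for each profile's maximum items out.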
Alternatively, at any time a user may provide user input requesting to enter such values. In such an embodiment, in step 914 a request to assign rental items to a profile is received. In step 916, values are received for the maximum allowed rental items for each profile of an account. For example, if an account has two profiles, then step 916 involves receiving a number of maximum allowed rental items for each of the two profiles, verifying that the sum of the two numbers does not exceed the maximum number of items that the associated account may receive at a time, and storing the two values in the database. In step 918, queues for each profile of the account are updated. Step 918 may involve computing, updating or displaying queue information such as the number of rental items that are available to use before a next subscription period ends, information indicating what rental items are scheduled for shipment, etc. Step 918 generally represents updating any information relating to an item queue or item queue functions that may require changes as a result of a change in the values received at step 916. FIG. 9C is a flow diagram of processing a request to add a rental item to a profile member queue. In step 920, login information for a profile member is received. The login information may comprise, for example, the sign-in name and password that the account owner supplied at step 904 of FIG. 9A. Step 920 represents receiving and validating or authenticating the login information to verify that the login information identifies a valid, active profile. In step 922, a request is received to add a rental item to a queue for the profile member. For example, the profile member browses an online catalog of available rental items, selects a desired rental item, and selects a user interface widget that requests adding the selected rental item to the queue of the profile member. In response, at step 924, the item rental system determines whether a maturity rating for the selected rental item is greater than the allowed maturity level for the profile. In other embodiments for which maturity level values are not associated with rental items, step 924 may involve performing other tests or checks to determine whether the item rental system can rent the selected rental item to the profile member based on a stored constraint other than maturity level. Thus, a test specifically based on maturity level is not required at step 924, and step 924 broadly represents testing for any configured constraint that applies to the rental item that the profile member has selected. If the maturity rating for the selected rental item is not greater than the allowed maturity level for the profile, then in step 928 the rental item is added to the queue for the profile member. In step 934, the queue is displayed so that the profile member can verify the addition and see the complete contents of the queue. Thereafter the rental item is provided to the profile member in the manner described above in sections 1-5. If the maturity rating for the selected rental item is greater than the allowed maturity level for the profile, then in step 926 the profile member is prompted to provide a password of the account owner. Thus, to rent an item having a disallowed maturity rating, the profile member must override the allowed maturity level by providing the account owner's password as proof that the account owner approves of the item rental transaction at the requested maturity level. 
If the account owner's password is correct, as tested at step 930, then control passes to step 928 as described above. If the account owner's password is incorrect, then in step 932 an error message is presented. In that case, the profile member is required to either provide a valid account owner password at step 926, or the profile member can abandon the transaction and not rent the item. The broad approach of FIG. 8 and FIGS. 9A-9C is now illustrated in the context of one example user interface that may be used to implement an embodiment. Other embodiments of the approach herein may use any other form of user interface that is desired or appropriate. FIG. 10A is a screen display diagram showing an example user interface display relating to browsing rental items. FIG. 10B is a screen display diagram showing an example user interface display relating to adding a queue or profile to an account. FIG. 10C is a screen display diagram showing an example profile introduction. FIG. 10D is a screen display diagram showing an example user interface display relating to entering attributes of a profile. FIG. 10E is a screen display diagram showing an example user interface display relating to assigning maximum allowed rental items to the profiles of an account. Referring first to FIG. 10A, in one embodiment a graphical user interface (GUI) 1000 of a conventional browser program, such as Microsoft Internet Explorer, Mozilla Firefox, Netscape Navigator, etc., displays a first page 1002 for the Netflix® item rental service that is commercially offered by Netflix, Inc., Los Gatos, Calif. Page 1002 comprises a plurality of page selection tabs 1004 which, when selected by user input such as a mouse click, causes a server of the item rental service to generate and transmit to the browser page content 1008 associated with the selected tab. In one embodiment, tabs respectively entitled Browse, Recommendations, Friends, and Queue enable a user to browse rental items such as movies, display recommendations for rental items that are automatically generated by the item rental service, review rental activity of friends who also use the service, and display the user's item rental queue. Each tab 1004 may have one or more associated sub-functions that are represented by hyperlinks 1006. A profile combo box 1010 specifies a name of the currently active profile member. If a user account has a plurality of profiles associated with the account, selecting profile combo box 1010 causes the browser to display a list of the profiles, enabling user selection of different profiles. In an embodiment, profile combo box 1010 functions using Javascript code that is delivered to the browser with page 1002. Referring now to FIG. 10B, when an account has only one associated profile for the account owner, selecting profile combo box 1010 causes the browser to display two links entitled “Add a Queue?” and “Sign Out.” If other profiles are associated with the account, then the names of such profiles are also displayed. For purposes of illustrating a clear example, the following description assumes that an account has one profile and the account owner wishes to add a second profile and allocate rental items to the new profile. Therefore, for purposes of the example, assume that the “Add a Queue?” link is selected in profile combo box 1010 of FIG. 10B. Referring now to FIG. 10C, in one embodiment, in response to selecting the “Add a Queue?” link a welcome page 1020 is displayed.
Welcome page 1020 may comprise a panel 1022 providing information about how profiles function and a confirmation button 1024. Selecting the confirmation button 1024 enables the account owner to confirm that the account owner wishes to create a profile that will function as stated in panel 1022. Use of a welcome page 1020 is optional in an embodiment and may be omitted. However, the use of a welcome page 1020 may improve performance of an item rental system by preventing the needless creation of profile records by account owners who select “Add a Queue?” without fully understanding how profiles function. Referring now to FIG. 10D, in response either to selection of the “Add a Queue?” link or the confirmation button 1024, the item rental service generates and transmits to the browser a member profile page 1030 comprising data entry fields that define a profile member and the capabilities of the profile member. In one embodiment, member profile page 1030 comprises name fields 1032, a sign-in name field 1034, password fields 1036, maturity level combo box 1038, an address display 1040, a community check box 1042, a save button 1044, and a cancel button 1046. The name fields 1032 receive a name of a new profile member. The sign-in name field 1034 receives a name that the profile member will use to sign in to the profile, such as an email address or handle. The password fields 1036 receive a password that the profile member will use to obtain secure access to the profile and may include a password confirmation field to ensure that an entered password is accurate. The account owner may specify a maximum maturity level allowed for the profile member using maturity level combo box 1038. In other embodiments, a user interface widget other than a combo box may be used. In other embodiments, a constraint other than maturity level may be entered. The address display 1040 indicates the shipping address to which rental items for the profile member will be sent. In one embodiment, address display 1040 is a data entry field, and the account owner may specify an alternate delivery address for the profile member. The account owner may specify whether the profile member can participate in online community features by selecting a community check box 1042. The account owner may select the save button 1044 to cause the item rental service to verify the entered data values and save the entered values in the database of the item rental service. The account owner may select the cancel button 1046 to discontinue entering a profile record. If all the foregoing values are entered and the account owner selects the save button 1044, then in response, the item rental service generates and sends to the browser a page requesting entry of the maximum number of items that the profile member may receive at a time. Referring now to FIG. 10E, in one embodiment, an assignment page 1050 is displayed comprising a table 1052 that lists the profile names 1054, 1056 of each profile of the current account and comprises data entry fields 1058 for specifying the maximum number of items that each profile member is allowed to receive. The account owner may modify values in fields 1058 using the keyboard or other user input and may save the entered values using a save button 1062. In an embodiment, the sum of values in fields 1058 must be less than or equal to a maximum number of rental items allowed for the account, as indicated by a Membership Total value 1060.
Therefore, in one embodiment, selecting save button 1062 causes Javascript code in the browser to verify that the sum of the fields is less than or equal to the allowed maximum. If not, then the account owner is prompted using a Javascript error message to correct the entered values. The account owner may save the revised values, or discontinue entering a profile record by selecting the cancel button 1064. Assuming the entered values are correct and are saved, creation of the new profile is complete, and the item rental service generates and sends to the browser a queue display page for the newly created profile. The account owner or the profile member then may add rental items to the queue.

Referring now to FIG. 10F, in one embodiment a queue page 1070 may include a first list 1076 of rental items that the profile member has already received and a second list 1078 of rental items that are in the queue but not yet provided to or received by the profile member. In the example of FIG. 10F, the profile member named “Jane Profile” has not received any rental items and has one rental item (“Melinda and Melinda”) in the queue. Profile combo box 1010 displays the name of the current profile member (“Jane Profile”) by default. When the profile combo box 1010 is selected, it displays the current profile member name 1072, the account owner's name 1074, and a sign-out link. To add other rental items to the queue, the profile member may select the Browse link 1004 to browse rental items, or may use a search box to enter the name of a particular rental item or other information about a specified rental item.

In an embodiment, a profile member is allowed to add a rental item to the profile member's queue only if the maturity level value associated with that rental item is less than or equal to the maturity level value specified in the profile. As an example, assume that rental items are movies and that the maturity level in the profile for profile member “Jane Profile” is PG-13. If Jane Profile attempts to rent a movie having a maturity level higher than PG-13, the item rental service requires confirmation by the account owner as a condition of allowing the rental. For example, assume that Jane Profile browses movies available for rental, selects “The Talented Mr. Ripley,” which is rated R, and selects an Add button to add that movie to Jane Profile's queue. In response, the item rental service generates and sends to the browser a confirmation page.

Referring now to FIG. 10G, in one embodiment, a confirmation page 1080 comprises item rental information 1082, an information panel 1084, a password field 1086, and an Add Movie button 1088. Rental information 1082 provides brief information about the selected rental item, so that the account owner can see what the profile member wishes to rent. Information panel 1084 comprises text explaining to the profile member that account owner approval is required for the rental item because its maturity level exceeds the maturity level configured for the profile. If the proposed rental transaction is acceptable, then the account owner enters the account owner's password in the password field 1086 and selects the Add Movie button 1088. In response, the item rental service validates the entered password. If the password is valid, then the item rental service adds the selected rental item to the profile member's queue and re-displays the queue, as in FIG. 10F. If the password is invalid, then the item rental service re-generates the confirmation page 1080 and includes a message indicating that the password was invalid. The profile member can then abandon the transaction by selecting the Browse, Recommendations, Friends, or Queue tabs, or by entering information in search box 1089.
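As a hedged sketch of the maturity rule and owner-approval path just described: the rating ordering, names, and single-attempt simplification below are assumptions for illustration, since the patent describes the rule only in terms of comparable maturity level values.

```typescript
// Sketch of the maturity-level rule: an item may be added directly only if
// its maturity level is less than or equal to the Profile's limit; otherwise
// the confirmation page of FIG. 10G requires the account owner's password.
// The rating order and all names here are assumptions for illustration.

const MATURITY_ORDER = ["G", "PG", "PG-13", "R"] as const;
type Maturity = (typeof MATURITY_ORDER)[number];

function withinLimit(itemRating: Maturity, profileLimit: Maturity): boolean {
  return MATURITY_ORDER.indexOf(itemRating) <= MATURITY_ORDER.indexOf(profileLimit);
}

// Single-attempt simplification: in the described flow, an invalid password
// re-displays the confirmation page so the owner may retry or abandon.
function addToQueue(
  itemRating: Maturity,
  profileLimit: Maturity,
  ownerPasswordValid: () => boolean,
): "added" | "confirmation-redisplayed" {
  if (withinLimit(itemRating, profileLimit)) return "added";
  return ownerPasswordValid() ? "added" : "confirmation-redisplayed";
}

// Jane Profile (limit PG-13) adding the R-rated "The Talented Mr. Ripley"
// takes the owner-approval path.
addToQueue("R", "PG-13", () => true);  // "added" once the password validates
```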
In an alternative embodiment, rather than displaying the information presented in confirmation page 1080 directly to the Profile Member, the item rental service may send an email message to the Account Owner to request approval for the proposed rental. The email message contains a hyperlink which, when selected by the Account Owner, causes a browser at the Account Owner's location to display confirmation page 1080. This approach facilitates remote approval of a rental item, that is, when the Account Owner and the Profile Member are in separate locations and the Profile Member wishes to obtain approval to rent an item.

8.5 Additional Features and Functions

Other user interface displays and processes may implement the functions described above in section 8.1 and section 8.2. Further, various embodiments may implement any one or more of the following features and functions.

In one embodiment, if an Owner elects to downgrade to a rental service subscription plan that does not support the number of rental items assigned to the Owner's current number of Profile Queues, then at the end of the next billing period (that is, when the downgrade takes effect), the item rental service sets all non-Owner Queues to “0” and the Owner must reallocate items to Profiles. When the downgrade occurs, the Owner receives an email notifying the Owner that all rental items have been reassigned to the Owner and that the Owner must reassign rental items to Profiles in the Account.

In an embodiment, if the Owner upgrades his or her subscription with the item rental service, the item rental service generates and sends to the browser a Program Change page that explains that the rental items have been added to the Owner's Queue and that the Owner should immediately allocate those rental items using the process of FIG. 10E. If the upgrade is “deferred” because a customer has already taken advantage of an immediate upgrade, the Program Change page specifies that rental items will be added in the future when the change takes effect and that the Owner should allocate the rental items at that time.

In an embodiment, an Owner can view a full rental history for all Profiles in the Account. This approach enables the Owner to report problems for any items received in the Account. In an embodiment, the Owner can display either the Owner's rental history or an aggregate account rental history.

In one embodiment, multiple Users of Profiles in the same Account may have the same rental item in their Queues. If any User adds a rental item to the User's Queue that already exists in one or more of the other Queues in the Account (or is currently “out” to the household associated with the Account), the item rental service generates and sends a page to the User advising that the rental item resides in another User's Queue for the same household or has been shipped. In an embodiment, the page indicates which Queue the item is in and what position it occupies. However, the User is still allowed to place the item in the User's Queue.
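A minimal sketch of this duplicate-item advisory follows; the patent does not specify a data model, so the record shapes and the advisory wording are assumptions:

```typescript
// Sketch of the duplicate-item advisory: adding an item already present in
// (or shipped to) another Queue in the same Account produces a warning page,
// but the add is still permitted. Record shapes are invented for illustration.

interface QueueEntry { itemId: string; position: number; shipped: boolean; }
interface AccountQueue { profileName: string; entries: QueueEntry[]; }

function duplicateAdvisory(itemId: string, otherQueues: AccountQueue[]): string | null {
  for (const q of otherQueues) {
    const hit = q.entries.find((e) => e.itemId === itemId);
    if (hit) {
      return hit.shipped
        ? `This item has already shipped to the household for ${q.profileName}.`
        : `This item is at position ${hit.position} in ${q.profileName}'s Queue.`;
    }
  }
  return null;  // no advisory; the add simply proceeds
}
```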
In an embodiment, an Owner can place rental items into a Subordinate Queue even if the Subordinate is restricted to a particular maturity level or restricted by another constraint; the rental items the Owner places are not bound by the constraint of the Subordinate's Profile (e.g., an Owner can place an “R” movie into a PG-13 Queue).

In an embodiment, the item rental service provides a Switch Profile function with which an Owner can quickly work with the Queue and Profile information of another User without entering a password. In this embodiment, by allowing the Owner to quickly become another User and by implementing the constraint override function described above, the item rental service allows the Owner to become another User, find a rental item using either search or browsing, add the selected rental item to the Subordinate's Queue, position the rental item in the Queue, and then log out of the Subordinate's Profile and return to the Owner's own Profile.

In an embodiment, an Owner can enable or disable restrictions on Subordinate users that define the types of email that the item rental service sends to the Subordinates. In one embodiment, restrictions can individually control each of the following: ship/receive notifications; newsletters; movie suggestions; critics' reviews; account hold notices; and special offers.

In an embodiment, a Co-Owner may be designated. To reduce the administration burden on the Owner, the Owner can designate one or more of the Subordinate users in the Account as Co-Owner with full administration rights. For example, a family of four might have an Account where “mom” signs up using her credit card and email address (and is the Owner). “Mom” then creates three additional Users (“dad”, “teen”, “kid”) and designates “dad” as Co-Owner. “Dad” then has all the same administration rights as “mom”.

In an embodiment, Owners can “remove” a Profile from an Account, and only Owners can do so. Removing a Profile, in one embodiment, does not delete the Profile but merely removes the association between the Profile and the Account. In an embodiment, when the Owner removes a Profile from the Account, all existing identity, queue, ratings, recommendations, and rental history for the removed Profile remain intact in the database in records for the removed Profile. If a Profile Member associated with a removed Profile attempts to log into the item rental service, the Profile Member is required to complete a sign-up process before receiving access to the Profile. This approach reduces the possibility that Profile Members associated with removed Profiles that had constraints would be able to perform actions (e.g., view previews) that were prohibited when the Profile was active. In one embodiment, upon removal of a Profile, the Account Owner can request the item rental service to send an email to the newly removed Profile Member alerting the Profile Member to the new status, with instructions on how to reactivate the Profile as a new account.

In one embodiment, a Spin-Off feature enables Subordinate users not subject to maturity restrictions or other constraints to “spin off” their Profile out of an existing Account and establish it as a new Account. Any Profile that is restricted by a constraint cannot be spun off; the Account Owner must first remove the constraint. This approach helps eliminate the risk that a child Profile Member could attempt to make the child's Profile “private” without parental knowledge. In an embodiment, when a Profile user spins off, all existing identity, queue, ratings, recommendations, and rental history for that Profile remain intact with that Profile in the database, but the Profile is disassociated from the Account. If a Profile Member associated with a spun-off Profile attempts to log into the item rental service, the Profile Member is required to complete a sign-up process before receiving access to the Profile.
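A short sketch of the Spin-Off eligibility rule follows, again with an assumed record shape; the patent states the rule but not a data model:

```typescript
// Sketch of the Spin-Off eligibility rule: a Profile restricted by a
// maturity limit or any other constraint cannot be spun off until the
// Owner removes the constraint. The record shape is an assumption.

interface ProfileRecord {
  name: string;
  maturityLimit: string | null;  // e.g. "PG-13", or null when unrestricted
  otherConstraints: string[];    // any additional Owner-imposed constraints
}

function canSpinOff(profile: ProfileRecord): boolean {
  return profile.maturityLimit === null && profile.otherConstraints.length === 0;
}

// A restricted child Profile cannot make itself "private" by spinning off.
canSpinOff({ name: "teen", maturityLimit: "PG-13", otherConstraints: [] });  // false
canSpinOff({ name: "dad",  maturityLimit: null,    otherConstraints: [] });  // true
```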
In the foregoing specification, the invention has been described with reference to specific embodiments thereof. However, various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

11297115 netflix, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open 725/5 Mar 25th, 2022 05:26PM Mar 25th, 2022 05:26PM Netflix Consumer Services General Retailers
