Alphabet

- NASDAQ:GOOG
Last Updated 2024-04-24

Patent Grants Data

Patents granted to organizations.
Ticker Symbol, Entity Name, Publication Date, Filing Date, Patent ID, Invention Title, Abstract, Patent Number, Claims, Number of Claims, Description, Application Number, Assignee, Country, Kind Code, Kind Code Description, url, Classification Code, Length of Grant, Date Added, Date Updated, Company Name, Sector, Industry
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Oct 23rd, 2018 12:00AM https://www.uspto.gov?id=US11314933-20220426 Customized user prompts for autofilling applications An example method includes determining a subset of content displayed by an application on a user interface (UI) of a user device, wherein the subset excludes user-specific information. The method further includes transmitting a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI. The method also includes receiving a response to the request from the remote provider, wherein the response comprises the at least one template indicating how to process user input data in one or more text input fields displayed by the application on the UI, and generating a prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, wherein the prompt is customized by processing the one or more user input values using the at least one template. 11314933 1. A method comprising: determining a subset of content displayed by an application on a user interface (UI) of a user device, wherein the subset excludes user-specific information; transmitting a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI, wherein the at least one template comprises a partial value mask; receiving a response to the request from the remote provider, wherein the response comprises the at least one template specifying a prompt format and specifying one or more text input fields, wherein one or more user input values are to be extracted from the one or more specified text input fields and included within the specified prompt format; extracting the one or more user input values from the one or more specified text input fields; determining a value for inclusion within a prompt by applying the partial value mask to one of the one or more user input values; generating, based on the at least one template, the prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, wherein the prompt is customized in the prompt format, wherein the prompt includes one or more portions of the extracted one or more user input values from the one or more specified text input fields; receiving, via the prompt, authorization to transmit the one or more user input values to the remote provider; and in response to receiving the authorization, transmitting the one or more user input values to the remote provider which provided the at least one template. 2. The method of claim 1, further comprising: validating the one or more user input values using the at least one template, and wherein generating the prompt is performed in response to the validating. 3. The method of claim 1, further comprising: determining whether the one or more user input values are valid based on the at least one template in response to each of a plurality of user input actions via the UI, and wherein generating the prompt is performed only upon a determination that the one or more user input values are valid. 4. The method of claim 3, wherein each of the plurality of user input actions comprises entry of a text character into one of the one or more specified text input fields. 5. The method of claim 1, wherein the at least one template comprises at least one regular expression. 6. 
The method of claim 1, wherein the response comprises a layout of a plurality of value holders and a corresponding plurality of transformations, the method further comprising: determining a value for each value holder of the plurality of value holders by applying a corresponding transformation to at least one of the one or more user input values, wherein the prompt comprises the determined value for each value holder in the layout. 7. The method of claim 6, wherein the layout comprises a string template. 8. The method of claim 1, wherein the at least one template comprises a mapping between a plurality of values and a corresponding plurality of images, the method further comprising: determining an image for inclusion within the prompt based on the mapping and the one or more user input values. 9. The method of claim 1, wherein the request to the remote provider for the at least one template is transmitted in response to an initiation of the application. 10. The method of claim 1, wherein the request to the remote provider for the at least one template is transmitted in response to a selection of a text input field of the application. 11. The method of claim 1, wherein the request comprises metadata associated with the one or more specified text input fields. 12. The method of claim 1, further comprising discarding the at least one template after generating the prompt. 13. The method of claim 1, further comprising determining whether a data set corresponding to the one or more user input values is already stored by the remote provider, and wherein generating the prompt is performed only upon a determination that the data set corresponding to the one or more user input values is not already stored by the remote provider. 14. The method of claim 13, wherein determining whether the data set corresponding to the one or more user input values is already stored by the remote provider is based on a list of data sets previously provided by the remote provider. 15. The method of claim 1, wherein the prompt further comprises an identifier of the remote provider. 16. The method of claim 1, wherein the method is performed by an operating system of the user device. 17. The method of claim 1, wherein the remote provider is a given remote provider from multiple remote providers, wherein the method further comprises: displaying a second prompt that includes an option to select from the multiple remote providers; and receiving, via the second prompt, a selection indicating the given remote provider. 18. 
A non-transitory computer readable medium having stored therein instructions executable by one or more processors to cause the one or more processors to perform functions comprising: determining a subset of content displayed by an application on a user interface (UI) of a user device, wherein the subset excludes user-specific information; transmitting a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI, wherein the at least one template comprises a partial value mask; receiving a response to the request from the remote provider, wherein the response comprises the at least one template specifying a prompt format and specifying one or more text input fields, wherein one or more user input values are to be extracted from the one or more specified text input fields and included within the specified prompt format; extracting the one or more user input values from the one or more specified text input fields; determining a value for inclusion within a prompt by applying the partial value mask to one of the one or more user input values; generating, based on the at least one template, the prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, wherein the prompt is customized in the prompt format, wherein the prompt includes one or more portions of the extracted one or more user input values from the one or more specified text input fields; receiving, via the prompt, authorization to transmit the one or more user input values to the remote provider; and in response to receiving the authorization, transmitting the one or more user input values to the remote provider which provided the at least one template. 19. A user device comprising: one or more processors; a user interface (UI); and an operating system configured to: determine a subset of content displayed by an application on the UI of the user device, wherein the subset excludes user-specific information; transmit a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI; receive a response to the request from the remote provider, wherein the response comprises the at least one template specifying a prompt format and specifying one or more text input fields, wherein one or more user input values are to be extracted from the one or more specified text input fields and included within the specified prompt format, wherein the at least one template comprises a mapping between a plurality of values and a corresponding plurality of images; extract the one or more user input values from the one or more specified text input fields; determine an image for inclusion within a prompt based on the mapping and the one or more user input values; generate, based on the at least one template, the prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, wherein the prompt is customized in the prompt format, wherein the prompt includes one or more portions of the extracted one or more user input values from the one or more specified text input fields; receive, via the prompt, authorization to transmit the one or more user input values to the remote provider; and in response to receiving the authorization, transmit the one or more user input values to the remote provider which provided the at least one template. 
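The independent claims above describe a single flow: determine a non-user-specific subset of displayed content, request a template, receive a template containing a partial value mask and a prompt format, extract the user input values, and build a customized save prompt locally before anything is transmitted. The Kotlin sketch below is a rough, hypothetical illustration of that flow; every type and function name (Field, Template, determineSubset, requestTemplate, generatePrompt) is invented for illustration, and the template is constructed locally here only so the example runs without a real remote provider.

```kotlin
// Hypothetical sketch of the claimed flow; names are illustrative, not a real OS or autofill API.

// A field as rendered by the application: a stable id, a descriptive hint,
// and whatever the user has typed so far.
data class Field(val id: String, val hint: String, val value: String = "")

// Subset of displayed content sent to the remote provider: hints only, no user values.
data class ContentSubset(val fieldHints: Map<String, String>)

// Template returned by the remote provider: which fields to read, how to mask them,
// and the prompt format with a placeholder for the masked value.
data class Template(
    val fieldIds: List<String>,
    val partialValueMask: (String) -> String,
    val promptFormat: String            // e.g. "Save card ending in %s ...?"
)

// Step 1: determine the subset of displayed content, excluding user-specific values.
fun determineSubset(fields: List<Field>) =
    ContentSubset(fields.associate { it.id to it.hint })

// Steps 2-3: request a template using only the subset (the network call is elided;
// the template is fabricated locally so the sketch is runnable).
fun requestTemplate(subset: ContentSubset): Template =
    Template(
        fieldIds = subset.fieldHints
            .filterValues { it.contains("card", ignoreCase = true) }.keys.toList(),
        partialValueMask = { v -> "•••• " + v.takeLast(4) },
        promptFormat = "Save card ending in %s to the selected autofill provider?"
    )

// Steps 4-6: extract the user input values, apply the mask, and build the prompt.
// Assumes at least one matching field has a value, as in the example below.
fun generatePrompt(fields: List<Field>, template: Template): String {
    val values = fields.filter { it.id in template.fieldIds }.map { it.value }
    val masked = template.partialValueMask(values.first())
    return template.promptFormat.format(masked)
}

fun main() {
    val displayed = listOf(
        Field("f1", "Card #", "4111111111111111"),
        Field("f2", "Expiration Date", "12/27")
    )
    val subset = determineSubset(displayed)        // no values leave the device yet
    val template = requestTemplate(subset)         // provider sees hints only
    println(generatePrompt(displayed, template))   // customized, locally generated prompt
    // Only after the user accepts this prompt would the values be transmitted.
}
```

In a real system the template would arrive over the network before any user input is entered, which is what allows the customized prompt to be shown at save time without a further round trip.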
19 CROSS REFERENCE TO RELATED APPLICATION The present application is a national stage entry of, and claims the benefit of, International (PCT) Application No. PCT/US2018/057122, filed on Oct. 23, 2018, which claims priority to U.S. Provisional Application No. 62/576,480, filed on Oct. 24, 2017, the contents of each of which are incorporated herein by reference in their entirety. BACKGROUND Many modern computing devices, including mobile devices, mobile phones, personal computers, and tablets, provide user interfaces (UIs) for permitting users to interact with the computing device. For example, application programs can use the UI to communicate with a user using images, text, and graphical elements such as windows, dialogs, pop-ups, images, buttons, scrollbars, and icons. The UI can also receive inputs from devices such as touch screens, a presence-sensitive display, computer mice, keyboards, and other devices to permit the user to control the UI, and thus the application program. In some cases, the UI can be used to interact with an operating system to manage the computing device. For example, the operating system can have a control panel or setting application that uses the UI to draw one or more windows related to control settings for some aspect(s) of the computing device, such as audio controls, video outputs, computer memory, and human language(s) used by the operating system (e.g., choose to receive information in English, French, Mandarin, Hindi, Russian, etc.). The control panel/settings application can receive subsequent input related to the window(s) using the UI. The UI can provide the inputs to the operating system, via the control panel/settings application, to manage the computing device. However, manually entering data into a UI can be inconvenient, slow and/or cumbersome for users or may generate errors, especially on mobile devices that may have a small UI. SUMMARY Example embodiments relate to a system that allows an operating system of a user device to intelligently prompt a user to save data for future autofill uses across multiple applications with the help of a remote provider without analyzing or storing the user-inputted data on the user device. More specifically, realizing that the user may input sensitive or confidential information, the operating system of a user device instead may determine a subset of content displayed by an application on a user interface (UI) of the user device that excludes information specific to that user. Then, in a further aspect, the operating system may use this subset of content to generate and transmit a request to a remote provider for at least one template to be used with the application. Additionally, based on this request, the operating system may receive a response from the remote provider that contains at least one template indicating how to process user input data inputted into one or more text input fields displayed by the application on the UI. Furthermore, the operating system may also receive one or more user input values in the one or more text input fields and generate a prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, wherein the prompt is customized by processing the one or more user input values using the at least one template. In one aspect, a method is provided that includes determining a subset of content displayed by an application on a user interface (UI) of a user device, wherein the subset excludes user-specific information. 
The method further includes transmitting a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI. The method also includes receiving a response to the request from the remote provider, wherein the response comprises the at least one template indicating how to process user input data in one or more text input fields displayed by the application on the UI. The method additionally includes receiving one or more user input values in the one or more text input fields. The method further includes generating a prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, wherein the prompt is customized by processing the one or more user input values using the at least one template. In another aspect, a user device is provided. The user device includes a UI and an operating system configured to determine a subset of content displayed by an application on the UI of the user device, wherein the subset excludes user-specific information. The operating system is further configured to transmit a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI. The operating system is also configured to receive a response to the request from the remote provider, wherein the response comprises the at least one template indicating how to process user input data in one or more text input fields displayed by the application on the UI. The operating system is additionally configured to receive one or more user input values in the one or more text input fields. The operating system is further configured to generate a prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, where the prompt is customized by processing the one or more user input values using the at least one template. In another aspect, a non-transitory computer readable medium is provided having stored therein instructions executable by one or more processors to cause an operating system of a user device to perform functions. The functions include determining a subset of content displayed by an application on a user interface (UI) of a user device, wherein the subset excludes user-specific information. These functions also include transmitting a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI. These functions additionally include receiving a response to the request from the remote provider, wherein the response comprises the at least one template indicating how to process user input data in one or more text input fields displayed by the application on the UI. These functions also include receiving one or more user input values in the one or more text input fields. These functions further include generating a prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, wherein the prompt is customized by processing the one or more user input values using the at least one template. In another aspect, a system is provided that includes a UI, at least one processor, and a non-transitory computer readable medium having stored therein instructions (that when executed by the at least one processor, cause the at least one processor to perform functions). 
The system includes means for determining a subset of content displayed by an application on a user interface (UI) of a user device, wherein the subset excludes user-specific information. The system also includes means for transmitting a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI. The system further includes means for receiving a response to the request from the remote provider, wherein the response comprises the at least one template indicating how to process user input data in one or more text input fields displayed by the application on the UI. The system additionally includes means for receiving one or more user input values in the one or more text input fields. The system also includes means for generating a prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, wherein the prompt is customized by processing the one or more user input values using the at least one template. The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the figures and the following detailed description and the accompanying drawings. BRIEF DESCRIPTION OF THE FIGURES FIG. 1 depicts a distributed computing architecture, in accordance with example embodiments. FIG. 2 is a flowchart of a method, in accordance with example embodiments. FIG. 3A illustrates a prompt for a user to set up autofill, in accordance with example embodiments. FIG. 3B illustrates a displayed data set identifier for selection via the UI of a user device, in accordance with example embodiments. FIG. 3C illustrates a plurality of displayed data set identifiers for selection via the UI of a user device, in accordance with example embodiments. FIG. 4A shows user interface functionality associated with a user's manual entry of data into the text input fields of an application displayed on the UI, in accordance with example embodiments. FIG. 4B illustrates a confirmation message and a generic data-save prompt associated with saving the user's manually entered data shown in FIG. 4A, in accordance with example embodiments. FIG. 4C shows user interface functionality associated with a user's manual entry of data into the text input fields of an application displayed on the UI, in accordance with example embodiments. FIG. 4D illustrates a confirmation message and customized prompt associated with saving the user's manually entered data shown in FIG. 4C, in accordance with example embodiments. FIG. 5 is a functional block diagram of an example computing device, in accordance with example embodiments. DETAILED DESCRIPTION Example methods and systems are described herein. Any example embodiment or feature described herein is not necessarily to be construed as preferred or advantageous over other embodiments or features. The example embodiments described herein are not meant to be limiting. It will be readily understood that certain aspects of the disclosed systems and methods can be arranged and combined in a wide variety of different configurations, all of which are contemplated herein. Furthermore, the particular arrangements shown in the Figures should not be viewed as limiting. 
It should be understood that other embodiments might include more or less of each element shown in a given Figure. Further, some of the illustrated elements may be combined or omitted. Yet further, an example embodiment may include elements that are not illustrated in the Figures. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are contemplated herein. I. Overview User devices, including mobile devices, mobile phones, personal computers, and tablets, are ubiquitous in modern communication networks. Many of these devices are capable of running one or more applications while facilitating communication within such networks. Further, many of these devices also provide one or more UIs for permitting users to interact with the user device. For example, a user may use a UI to communicate information to be used by an application on the user device through the use of images, text, and other graphical elements. The UI can also receive inputs from numerous devices connected to the user device, such as touch screens, a presence-sensitive display, computer mice, keyboards, and other devices that permit the user to control the UI, and thus the application. In an optimal scenario, the user would be able to effectively and efficiently use the UI to communicate such information; however, one or more factors may impose a limitation on the user's ability to do so. Thus, if operating under such a limitation, as the need to communicate more and more information grows, the ability to communicate this information effectively and efficiently may become restricted. By way of example, for user devices with small screens, typing extensive amounts of text into the UI to communicate information to be used by an application may be very difficult, especially if the text is also difficult for the user to remember. Accordingly, it may be advantageous for the application to be able to remember this information as it would not have to be re-communicated the next time the application was to be employed by the user. It is plausible, however, that as the number of applications with which a user attempts to communicate grows, the burden imposed by using the UI to communicate extensive amounts of text to each of these applications (for the first time or otherwise) may burden the user. And, this may be true even if individual applications remember information entered by the user. As a result, users may become less engaged in these applications (or abandon their use altogether) once prompted to enter such information. Some of these problems may be addressed through the use of methods, systems, and devices that allow the user of the user device to effectively and efficiently communicate information to be used by such applications by engaging the operating system of the user device to serve as an intermediary to facilitate autofill across multiple applications with the help of a remote provider. Specifically, in some examples, a user may use autofill at the operating system level of the user device by, in part, enabling autofill, allowing an authorized remote provider to provide data for autofill, retrieving autofill data, and saving autofill data for future use. Furthermore, this use of a remote provider in helping facilitate autofill may provide several improvements for the user's experience when using autofill. 
For example, the user may enjoy increased security for the data she inputs (e.g., passwords, credit card numbers, etc.), because that data can be stored remotely by the remote provider instead of locally on the user device. In another example, the user may enjoy increased convenience for further autofill uses, potentially across multiple devices, for the data she inputs (e.g., passwords, credit card numbers, etc.), because that data can be stored remotely by the remote provider instead of locally on one user device. But, as detailed above, because the remote provider may not have access to the data the user may enter for future autofill uses until the user agrees to allow that data to be used for future autofill uses, it is difficult for the remote provider to provide the operating system with any insight or direction on the type, extent, and details of, or even how, the user-inputted data will be used for future autofill uses. Thus, in such systems, another challenge that may diminish the user's inclination to engage with and utilize such autofill functionality is a lack of specificity about the type or extent of information that may be stored for future autofill uses. For example, if the user is presented with a generic autofill save prompt such as "Save <type> to <service>?," without knowing the details of the inputted data from the user, the <type> and <service> may only be set by the use of generic words like "password," "credit card," etc. Furthermore, this problem may be further complicated by the fact that the device may not support analyzing or saving this user-inputted data for one or more compelling reasons (e.g., it is personally identifiable information, or "PII," and/or other sensitive information). And, the lack of analyzing or saving this user-inputted data by the device may present other analytical limitations for the device and/or unwanted experiences for the user (e.g., the device might show this generic save prompt even if the user has already elected to save the inputted data for future autofill uses (including this one), which can be annoying to the user). Additionally, this lack of analyzing or saving user-inputted data by the device may present challenges for other analytical components contributing to the user's autofill experience with the device (e.g., there may be limited validation by the device of the user-inputted data, so any affiliated autofill provider must handle any errors concerning this data upon receipt). Moreover, these issues may not occur on other devices or platforms that do not have the security constraints of a mobile device (e.g., the operating system may not comprehend the concept of a credit card and hence cannot infer its type based on any user-inputted number, whether it is valid, what icon should be displayed based on the issuing bank, etc.), as that information and logic may be offloaded from the mobile device (e.g., onto an affiliated autofill provider). Thus, when attempting to provide the user with an accurate approximation of the type, extent, and details of the user-inputted data that will be saved by the system for future autofill uses, there is a chicken-and-egg problem: the operating system of the mobile device can only infer that logic once it gets the user-inputted data via the text input fields entered by the user, but it also cannot analyze, transmit, or store that data to provide an informed prompt before the user consents.
Hence, there is a need to provide the user with accurate approximation of the type, extent, and details of the user-inputted data that will be used for future autofill uses without compromising this data. Disclosed herein are example embodiments relating to methods, systems, and devices that allow the user of the user device to be apprised of the type, extent, and details of the user-inputted data that will be used for future autofill uses without compromising this data. Specifically, the example embodiments disclosed herein allow a user to be apprised, intelligently, of the data entered for future autofill uses at the operating system level of the user device by, in part, soliciting and receiving one or more templates for how user inputted data should be processed by the system, receiving the input of such data, processing and otherwise validating it in light of one or more templates the user device has received from an autofill provider, and generating and displaying a customized prompt to authorize transmission of this user-inputted data to the remote provider for future autofill uses. By receiving the template(s) before any data has been inputted by a user, the operating system of a device may immediately display a customized prompt after the user input has been entered, without requiring any network communications. An alternative system may require transmission of the user inputted data to a remote provider to allow the remote provider to generate the customized prompt and transmit the customized prompt back to the device. However, such an alternative system may suffer from network delay before the customized prompt can be displayed to the user of the device, particularly if the user is in an area with poor network connectivity. In an example embodiment, in accordance with the disclosure, an operating system, potentially of the user device, may receive authorization from the user to engage in autofill for an application displayed on the UI of the user device. For example, to enable autofill, the operating system may determine that a text input field is associated with a common autofill descriptor (e.g., the text input field is associated with a term such as “card #” or “Expiration Date”), and then prompt the user to set up autofill. In a further aspect, if the user only has one remote provider designated to help facilitate autofill, the user may be prompted to agree to use that remote provider; but if there are multiple remote providers designated, the user may be prompted to choose one or more remote providers. In either scenario, however, because engaging the operating system to perform autofill functions can involve sensitive and/or personal information specific to the user, the operating system may apprise the user of the details underlying the authorization of the autofill. These details may include accepting a detailed agreement concerning the operating system and/or a disclaimer associated with each remote provider, to inform the user and confirm her consent before authorizing the use of autofill with one or more of these remote providers. In a further aspect, the operating system may also detect an event which triggers the operating system to examine the contents displayed on the UI. 
In example embodiments, this event may include: the initiation of the application; the selection of a text input field on the application; or a signal that a text input field has focus (e.g., a particular text field has become engaged such that a keyboard is displayed on the UI), among other illustrative examples. In a further aspect, the operating system may also determine what portion of this content does not contain sensitive user-specific information. In yet a further aspect, utilizing this portion of the content, the operating system may transmit a request to a remote provider to help facilitate autofill for the application by providing at least one template for use in the application. In this embodiment, such a request may be beneficial for the operating system and the user alike, as it may allow the remote provider to securely and privately parse the content displayed on the UI to determine what information may be useful for facilitating autofill for the application (e.g., determining what text input fields on the application may be autofilled). In some embodiments, this request to the remote provider for the at least one template may be transmitted in response to an initiation of the application. In other embodiments, this request may be transmitted in response to a selection of a text input field of the application. In still other embodiments, this request may comprise metadata associated with the one or more text input fields. Either way, once such a request has been sent to a remote provider, the operating system may receive a response to the request containing at least one template indicating how to process user input data in one or more text input fields of the application displayed on the UI. Additionally, in further embodiments, a template may include at least one regular expression. In additional embodiments, a template may include a mapping between a plurality of values and a corresponding plurality of images, and the system may also determine an image for inclusion within the prompt based on the mapping and the one or more user input values. In further embodiments, a template may include a partial value mask, and the system may also determine a value for inclusion within the prompt by applying the partial value mask to one of the one or more user input values. In another example, the template and/or the UI may be augmented if a condition associated with a given augmentation evaluates to true. For example, the remote provider may provide a template for payment data which may be further customized based on a credit card type (e.g., for credit card type A, a logo or associated graphic may be shown along with customized text in a customized prompt, but for credit card type B, only a logo or associated graphic may be shown in a customized prompt). In a further aspect, this is achieved by passing conditional customizations for the template, where, if the corresponding condition is met, the template may be further augmented with various types of input data. In a further aspect, after the template is augmented, it may be populated via a list of transformations associated with each augmentation. For example, if the card is credit card type A, the operating system may insert the credit card type A specific UI elements and then use the associated transformations to populate these elements.
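A minimal sketch of the conditional customization just described, assuming hypothetical names (Augmentation, PaymentTemplate, renderPrompt) and a placeholder condition based on the card number prefix; this is not a real framework API, only an illustration of augmentations whose conditions gate extra UI elements that are then populated by their associated transformations.

```kotlin
// Hypothetical sketch of conditional template augmentation; names are illustrative only.

// An augmentation carries a condition plus the extra presentation it unlocks.
data class Augmentation(
    val condition: (String) -> Boolean,       // evaluated against the entered card number
    val extraUiElements: List<String>,        // e.g. a logo resource plus customized text
    val transformation: (String) -> String    // populates the added elements
)

data class PaymentTemplate(
    val basePrompt: String,
    val augmentations: List<Augmentation>
)

// Apply only the augmentations whose condition evaluates to true, then populate
// the inserted elements via the associated transformation.
fun renderPrompt(template: PaymentTemplate, cardNumber: String): List<String> {
    val parts = mutableListOf(template.basePrompt)
    for (aug in template.augmentations) {
        if (aug.condition(cardNumber)) {
            parts += aug.extraUiElements
            parts += aug.transformation(cardNumber)
        }
    }
    return parts
}

fun main() {
    val template = PaymentTemplate(
        basePrompt = "Save this card for autofill?",
        augmentations = listOf(
            // "Type A" cards (hypothetically prefixed with 4): logo plus customized text.
            Augmentation({ it.startsWith("4") },
                listOf("logo_type_a", "Type A card detected"),
                { "Ending in " + it.takeLast(4) }),
            // "Type B" cards (hypothetically prefixed with 5): logo only.
            Augmentation({ it.startsWith("5") }, listOf("logo_type_b"), { "" })
        )
    )
    println(renderPrompt(template, "4111111111111111"))
    println(renderPrompt(template, "5500000000000004"))
}
```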
In further embodiments, the response may also contain a layout of a plurality of value holders and a corresponding plurality of transformations, and the system may determine a value for each value holder of the plurality of value holders by applying a corresponding transformation to at least one of the one or more user input values, wherein the prompt comprises the determined value for each value holder in the layout. In a further aspect, in some embodiments, this layout may comprise a string template. Either way, once this response is received, the operating system may also receive one or more user input values in the one or more text input fields. The operating system may generate a prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, where the prompt is customized by processing the one or more user input values using the at least one template. In a further aspect, in some embodiments, this prompt may comprise an identifier of the remote provider. In a further aspect, in some embodiments, once the prompt is generated, the operating system may discard the at least one template after generating the prompt. Once this prompt is generated, the operating system may also receive, via the prompt, authorization to transmit the one or more user input values to the remote provider. Then, in response to receiving the authorization, the operating system may transmit the one or more user input values to the remote provider. In additional embodiments, the operating system may also validate the one or more user input values using the at least one template, and generate the prompt in response to successful validation (e.g., validation that a user has entered a correct number of digits for a credit card number). In other examples, however, if this validation is not successful, the operating system may not generate or display the prompt. In further embodiments, the operating system may also determine whether the one or more user input values are valid based on the at least one template in response to each of a plurality of user input actions via the UI. The operating system may only generate the prompt upon a determination that the one or more user input values are valid. For example, each of the plurality of user input actions may be entry of a text character into one of the one or more text input fields. In still other embodiments, the operating system may also determine whether a data set corresponding to the one or more user input values is already stored by the remote provider. The operating system may only generate the prompt upon a determination that the data set corresponding to the one or more user input values is not already stored by the remote provider. In a further aspect, in some embodiments, the operating system may determine whether the data set corresponding to the one or more user input values is already stored by the remote provider based on a list of data sets previously provided by the remote provider. II. Distributed Computing Architecture Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure and the described embodiments. However, the present disclosure may be practiced without these specific details. 
In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. FIG. 1 depicts a distributed computing architecture 100 with server devices 108, 110 configured to communicate, via network 106, with user devices 104a, 104b, 104c, 104d, 104e, and remote providers 112 and 114, in accordance with example embodiments. Network 106 may correspond to a local area network (LAN), a wide area network (WAN), a corporate intranet, the public Internet, or any other type of network configured to provide communication paths between networked computing devices. Network 106 may also correspond to a combination of one or more LANs, WANs, corporate intranets, and/or the public Internet. Although FIG. 1 only shows a small collection of user devices, distributed application architectures may serve tens, hundreds, or thousands of user devices. Moreover, user devices 104a, 104b, 104c, 104d, 104e (or any additional programmable devices) may be any sort of computing device capable of allowing a user to engage the operating system of the computing device to facilitate autofill across multiple applications with the help of a remote provider, such as an ordinary laptop computer, desktop computer, wearable computing device, mobile computing device, head-mountable device (HMD), network terminal, wireless communication device (e.g., a smartphone or cell phone), and so on. In some embodiments, such as indicated with user devices 104a, 104b, and 104c, user devices can be directly connected to network 106. In other embodiments, such as indicated with user devices 104d and 104e, user devices can be indirectly connected to network 106 via an associated computing device, such as user device 104c. In such embodiments, user device 104c can act as an associated computing device to pass electronic communications between user devices 104d and 104e and network 106. In still other embodiments not shown in FIG. 1, a user device can be both directly and indirectly connected to network 106. Server devices 108, 110 may operate as part of a cloud-based server system that shares computer processing resources and data to computers and other devices on demand. In particular, server devices 108, 110 can be configured to perform one or more services requested by user devices 104a-104e. For example, server device 108 and/or 110 can provide content to user devices 104a-104e. In a further aspect, server device 108 and/or 110 may provide content to user devices 104a-104e directly or by facilitating the transmission of content requested from a third party. The content can include, but is not limited to, web pages, hypertext, scripts, binary data such as compiled software, images, audio, and/or video. The content can include compressed and/or uncompressed content. The content can be encrypted and/or unencrypted. Remote providers 112, 114 may also operate as part of a cloud-based server system that shares computer processing resources and data to computers and other devices on demand. In particular, remote providers 112, 114 may provide, receive, store, manage, and transmit content on the network 106, in accordance with example embodiments. For example, remote provider 112 and/or 114 can receive a request for content to be used by user devices 104a-104e, and generate and transmit a response containing the content to devices connected to the network. 
Within examples, server device 108 and/or 110 may provide content that facilitates autofill across multiple applications on user devices 104a-104e with the help of remote provider 112 and/or 114. Additionally, server device 108 and/or 110 can provide user devices 104a-104e with access to software for database, search, computation, graphical, audio, video, World Wide Web/Internet utilization, and/or other functions. Many other examples of content are possible as well. III. Method Flowchart and Example Embodiments FIG. 2 illustrates a flowchart showing a method 200 that may be performed to allow a user to engage the operating system of a user device to save user inputted data to facilitate future autofill across multiple applications with the help of a remote provider. Method 200 may be carried out by one or more computing devices, such as the user devices 104a-104e and remote provider 112 and/or 114, and in some instances server 108 and/or 110 as well, as illustrated and described with respect to FIG. 1. In additional examples, method 200 may be carried out by user devices 104a-104e and remote provider 112 and/or 114, and in some instances server 108 and/or 110 as well, operating as part of a cloud-based system. Additionally, method 200 may be performed by one or more other types of computing devices besides those specially illustrated in FIG. 1. Additionally, although the steps of method 200 are described below as being completed by an operating system, other components, applications, and/or technologies related to the user device could perform the steps of method 200. Furthermore, it is noted that the functionality described in connection with the flowcharts described herein can be implemented as special-function and/or configured general-function hardware modules, portions of program code executed by a processor for achieving specific logical functions, determinations, and/or steps described in connection with the flowchart shown in FIG. 2. Where used, program code can be stored on any type of computer-readable medium, for example, such as a storage device including a disk or hard drive. In addition, each block of the flowchart shown in FIG. 2 may represent circuitry that is wired to perform the specific logical functions in the process. Unless specifically indicated, functions in the flowchart shown in FIG. 2 may be executed out of order from that shown or discussed, including substantially concurrent execution of separately described functions, or even in reverse order in some examples, depending on the functionality involved, so long as the overall functionality of the described method is maintained. At block 210, method 200 may include determining a subset of content displayed by an application on a user interface (UI) of a user device, wherein the subset excludes user-specific information. In particular, a user may be interacting with a computing device and decide to interact with an application on that device. The operating system may recognize that the user is interacting with an application and that there is content that the operating system knows is relevant to autofilling the application. In accordance with this data, the operating system may then prompt the user to authorize the operating system to engage in autofill for the application, which may improve the user's interaction with, and/or the responsiveness of, user interface functionality. In general, the operating system can recognize certain data that is commonly associated with autofill. 
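As an illustration of this kind of recognition, the following sketch matches displayed field hints against a small list of common autofill descriptors; the descriptor list and the function name shouldOfferAutofillSetup are assumptions made for the example, not part of any actual operating system.

```kotlin
// Hypothetical sketch of recognizing fields commonly associated with autofill.

val commonAutofillDescriptors = listOf(
    "card #", "card number", "expiration date", "username", "password", "email", "address"
)

// Returns true if any displayed field hint matches a common autofill descriptor,
// which is the signal used to offer setting up autofill.
fun shouldOfferAutofillSetup(fieldHints: List<String>): Boolean =
    fieldHints.any { hint ->
        commonAutofillDescriptors.any { hint.contains(it, ignoreCase = true) }
    }

fun main() {
    val hints = listOf("Name on card", "Card #", "Expiration Date")
    if (shouldOfferAutofillSetup(hints)) {
        println("Prompt the user to set up autofill")   // e.g. the setup prompt of FIG. 3A
    }
}
```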
In an example, the operating system may recognize that a text input field associated with an application contains or is associated with a common autofill descriptor (e.g., "card #" or "Expiration Date"). In response, in this example, the operating system may then prompt the user to set up autofill. In some examples, before, after, or during the user's response to this prompt, the operating system may also compile a list of one or more remote providers to further aid in facilitating autofill. In one aspect, if there is only one remote provider designated to help facilitate autofill, the user may be prompted with an agreement to use that remote provider. In another aspect, however, if there are multiple remote providers designated to help facilitate autofill, then the user may be prompted to choose a remote provider and also prompted with an agreement to use that remote provider. In yet another aspect, the user may be allowed to choose more than one remote provider, but still may be prompted with an agreement to use each of the chosen remote providers. In another example, the operating system may begin determining a subset of content displayed by an application on the UI due to the triggering of an event, perhaps an autofill trigger event. Autofill trigger events may provide information about the current state of the user device, or an application thereon, including the state of the user's interaction with the device. Autofill trigger events may also be used to help the operating system know when to engage in authorized autofill at the right points in time. In some examples, autofill trigger events may be direct or indirect user interactions with the device. In general, however, once autofill is approved by the user, user interaction with the device may be monitored by the operating system. In one embodiment, example autofill trigger events may be indicated by data associated with direct user interaction with the user device, such as a user's initiation of an application, a user's selection of a text input field of an application, or a request from the user to set up autofill for one or more applications, among other scenarios. In other examples, the autofill trigger events may include data associated with indirect user interaction with the user device, such as a signal that an application has been initiated, or a signal that a text input field of the application, or some other parameter of content displayed on the UI of the user device, has focus, among other scenarios. In a further aspect, the user's indirect interaction with the user device may be reflected by a graphic or GUI, such as a keyboard, displayed on the UI. In general, pursuant to any of these scenarios, the operating system may review sensitive and/or personal information when attempting to facilitate autofill, and it may be advantageous for the user to be informed of the details of this autofill before consenting to its use. Specifically, because the operating system may review content on the device containing sensitive and/or personal information, the operating system may inform the user of the details underlying the authorization of the autofill before engaging in autofill. In a further aspect, because the chosen remote provider may receive some information that the user may not typically share, the user may also be prompted to approve an agreement containing the terms for using each of the autofill providers chosen by the user.
In some examples, in order to ensure that the user fully understands the details for using these autofill providers, before authorizing the use of autofill, the user may be prompted with an agreement that may include a disclaimer for using the operating system and/or each of the autofill providers chosen by the user for autofill. In still other examples, before authorizing the use of autofill, the user may be prompted with a verified transmission prompt authorizing the operating system to send one or more values entered into one or more text input fields displayed on the UI to a remote provider. For example, the user may have entered the one or more values into an application other than the one that served as the basis for the operating system's prompt for the user to set up autofill (a “second application”). In a further aspect, once authorized by the user to do so, the operating system may then transmit the one or more values to the remote provider for future use. In general, the content displayed by an application on the UI refers to any information associated with an application that is ascertainable by the operating system. In one example, this content may include a current view hierarchy of the content displayed on the UI of the user device. Because, however, the content may also contain information that is sensitive and/or private, the operating system determines only a subset of the content which excludes the user-specific information. In some examples, the user-specific information may include personally identifiable information, or any other information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in the context of other information or data. In additional examples, the user-specific information may include information that the user has designated as sensitive and/or private. In still other examples, the user-specific information may include information that has been designated as sensitive and/or private based on one or more factors associated with the user. For example, the user-specific information may include information that has been designated as sensitive and/or private based on the geographical region in which the user, the user device, and/or the remote provider, is located, among other possibilities. In other examples, the user-specific information may include information that has been designated as sensitive and/or private based on an attribute of the user (e.g., the user's age). At block 220, method 200 further includes transmitting a request to a remote provider for at least one template for use with the application, wherein the request comprises the subset of content displayed on the UI. The request for a template may be sent to the remote provider selected to help facilitate the use of autofill on the user device. Specifically, based on the subset of displayed content excluding user-specific information, the operating system may send a request containing information within or associated with this subset of content to the remote provider to alert the remote provider of, amongst other things, potentially fillable fields displayed on the UI. 
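A minimal sketch of how such a request might be assembled from the subset of displayed content, under the assumption of hypothetical ViewNode and TemplateRequest types: the request carries field identifiers and descriptors (and could carry other metadata), while any user-entered values stay on the device.

```kotlin
// Hypothetical sketch of assembling the template request from the subset of displayed
// content. Type names (ViewNode, TemplateRequest) and the package name are illustrative.

// A simplified node of the current view hierarchy.
data class ViewNode(
    val id: String,
    val widgetType: String,        // e.g. "EditText", "Button"
    val hint: String,              // descriptor such as "card number"
    val userValue: String? = null  // user-specific; never placed in the request
)

// What actually leaves the device: structure and descriptors, plus optional metadata,
// but no user-entered values.
data class TemplateRequest(
    val packageName: String,
    val fields: List<Map<String, String>>
)

fun buildTemplateRequest(packageName: String, hierarchy: List<ViewNode>): TemplateRequest =
    TemplateRequest(
        packageName = packageName,
        fields = hierarchy
            .filter { it.widgetType == "EditText" }            // potentially fillable fields
            .map { mapOf("id" to it.id, "hint" to it.hint) }   // descriptors only, no values
    )

fun main() {
    val hierarchy = listOf(
        ViewNode("f1", "EditText", "card number", userValue = "4111111111111111"),
        ViewNode("f2", "EditText", "expiration date", userValue = "12/27"),
        ViewNode("b1", "Button", "submit")
    )
    // The request reflects the subset of content and excludes the typed values.
    println(buildTemplateRequest("com.example.shop", hierarchy))
}
```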
In any event, this request may be beneficial for the operating system and the user alike, as it may allow the remote provider to securely and privately parse the content displayed on the UI to determine what information may be useful for facilitating autofill for the application (e.g., determining what text input fields on the application may be autofilled). In some examples, the request may include information associated with the text input fields displayed on the UI. In one aspect, this information may include one or more descriptors associated with the text input fields. For example, these descriptors may include terms such as "name," "username," "email," "password," "address," "city," "state," "zip code," "country," "account number," and/or "card number," among other possibilities. In other examples, the request may include information associated with the current view hierarchy of the user device. In one aspect, this information may include information associated with compiling or maintaining the subset of content displayed on the UI (e.g., underlying script and/or code). In another aspect, this information may include information associated with certain approximations of the subset of content displayed on the UI (e.g., wireframe representations of the subset of content). In yet another aspect, this information may include information associated with the architecture of the subset of content displayed on the UI (e.g., information associated with the relative layout, linear layout, buttons, table layout, table rows, check boxes, and/or other elements). Either way, the operating system may request a template for use with the application. In some examples, a template may be a mechanism whereby the remote provider provides business logic expressed in parcelable objects that are sent to the operating system, and the operating system then feeds the user-provided data into these objects to infer the necessary business logic. For example, these templates may be used as part of the autofill process, and may be represented by one or more optional objects (e.g., a Validator template, used to validate credit card information (if it is not valid, the framework will not show a "save" prompt via the UI); a Generator template, used to generate a credit card number (if such a credit card was already saved for that service, the framework will not show a "save" prompt via the UI); and a Custom Presentation template, used to display a custom presentation which could have images, masked credit card numbers, expiration dates, text with links, etc.). Either way, in some examples, this request may be transmitted in response to an initiation of the application. In some examples, this request may be transmitted in response to a selection of a text input field of the application. In still other examples, this request may comprise metadata associated with the one or more text input fields.
This metadata may be data or information that provides information about other data, such as descriptive metadata (which may describe a resource for purposes such as discovery and identification and can include elements such as title, abstract, author, and keywords), structural metadata (which may be about containers of data and indicates how compound objects are put together, for example, how pages are ordered to form chapters, and describes the types, versions, relationships, and other characteristics of digital materials), and administrative metadata (which may provide information to help manage a resource, such as when and how it was created, file type and other technical information, and who can access it). At block 230, method 200 may further include receiving a response to the request from the remote provider, wherein the response comprises the at least one template indicating how to process user input data in one or more text input fields displayed by the application on the UI. In general, the response received from the remote provider may provide data (e.g., a template) that is helpful to the operating system in facilitating autofill. In one example, the response may contain data helpful to the operating system in facilitating autofill in the first instance. For example, the response may contain the at least one template indicating how to process user input data in one or more text input fields displayed by the application on the UI. In some examples, a template may include at least one regular expression. Here, a regular expression (sometimes called regex, regexp, or a rational expression) may be an object that describes a pattern of characters that can be used to perform pattern-matching and/or "search-and-replace" functions on user-inputted text. In a further aspect, the regular expressions may use any of a variety of different syntaxes or formats, such as those used in search engines, search and replace dialogs of word processors and text editors, in text processing utilities, and in lexical analysis. Additionally, many programming languages provide regular expression capabilities, either built-in or via libraries. In some examples, a template may include a mapping between a plurality of values and a corresponding plurality of images, and the operating system may determine an image for inclusion within the prompt based on the mapping and the one or more user input values (e.g., to show the credit card symbol of the credit card type entered by the user). Here, mapping may be the process of creating data element mappings between two distinct data models (which may be used as a first step for a wide variety of data integration tasks). The mapping itself may be created as a regular expression. In some examples, a template may include a partial value mask, and the operating system may determine a value for inclusion within the prompt by applying the partial value mask to one of the one or more user input values (e.g., to only show the last four digits of a credit card number). Here, a value mask or partial value mask may include the application of a mask on an input field so the user can see only certain portions of the inputted data. At block 240, method 200 may further include receiving one or more user input values in the one or more text input fields. The user input values may be received over a period of time as the user enters (e.g., types) individual characters, numbers, or words.
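The following sketch pulls these pieces together for a card-number field: a regular-expression validator, a regex-keyed mapping from values to images, and a partial value mask that keeps only the last four digits. The CardTemplate type, the patterns, and the icon names are assumptions made for illustration; they are not drawn from the patent or from any real autofill framework.

```kotlin
// Hypothetical sketch of applying template pieces (validator, image mapping, partial mask)
// to user input; all names and patterns are illustrative assumptions.

data class CardTemplate(
    val validator: Regex,                  // e.g. exactly 16 digits
    val imageMapping: Map<Regex, String>,  // value pattern -> image resource to display
    val partialMask: (String) -> String    // e.g. keep only the last four digits
)

// Re-evaluated as the user types; a result is only produced once the input is valid.
fun processInput(template: CardTemplate, typedSoFar: String): Pair<String, String>? {
    if (!template.validator.matches(typedSoFar)) return null   // not valid yet: no prompt
    val image = template.imageMapping.entries
        .firstOrNull { it.key.containsMatchIn(typedSoFar) }?.value ?: "generic_card_icon"
    return template.partialMask(typedSoFar) to image
}

fun main() {
    val template = CardTemplate(
        validator = Regex("""\d{16}"""),
        imageMapping = mapOf(Regex("^4") to "icon_card_type_a", Regex("^5") to "icon_card_type_b"),
        partialMask = { "•••• " + it.takeLast(4) }
    )
    println(processInput(template, "4111"))              // null: still typing, stay silent
    println(processInput(template, "4111111111111111"))  // (•••• 1111, icon_card_type_a)
}
```

Because processInput returns nothing until the validator matches, it can be re-run on every user input action, which mirrors the per-keystroke validation described in the surrounding text.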
At block 250, method 200 may further include generating a prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill, where the prompt is customized by processing the one or more user input values using the at least one template. In some examples, the customized prompt may include details pertaining to the user-inputted data that the remote provider does not yet have access to, but for which it has nevertheless provided a template that the operating system can analyze and apply to create the customized prompt. In some examples, this template may allow the operating system to generate a customized prompt to authorize transmission of the one or more user input values to the remote provider for future use in autofill (e.g., “Save Credit Card B to Autofill Provider X?”) as opposed to a generic autofill save prompt (e.g., “Save <type> to <service>?,” where, without knowing the details of the data inputted by the user, <type> and <service> may only be set using generic words like “password,” “credit card,” etc.). In a further aspect, in some embodiments, this prompt may comprise an identifier of the remote provider (e.g., an image specific to Credit Card B, Autofill Provider X, or both). In general, the identifier received from the remote provider may be associated with data inputted by the user corresponding to potentially fillable fields displayed on the UI. In a further aspect, displaying an identifier or identifiers associated with data inputted by the user may benefit the operating system and the user alike, as the user may select a displayed identifier to use the data associated with that identifier instead of reviewing all of the available data when deciding which data the operating system should use for autofill. In a further aspect, in some embodiments, the operating system may discard the at least one template after generating the prompt. This discarding may be part of the regular workflow or process undertaken by the operating system and may provide advantageous results for the operating system (e.g., allowing the operating system to disregard any further processing of the user-inputted data via the template, which may free resources for the operating system). The remote autofill provider may therefore be responsible for sending a new template each time an application is initiated and/or has run its course. IV. Further Example Embodiments In some examples, the methods described herein may further include, once this prompt is generated, the operating system receiving, via the prompt, authorization to transmit the one or more user input values to the remote provider. Then, in response to receiving the authorization, the operating system may also transmit the one or more user input values to the remote provider. In some examples, the operating system may also validate the one or more user input values using the at least one template, and generate the prompt in response to successful validation (e.g., validation that a user has entered the correct number of digits for a credit card number, or validation that a user has not entered the inputted data before (and thus no autofill prompt, generic or customized, should be created)). In some examples, the operating system may also determine whether the one or more user input values are valid based on the at least one template in response to each of a plurality of user input actions via the UI.
The operating system may only generate the prompt upon a determination that the one or more user input values are valid. For example, each of the plurality of user input actions may be entry of a text character into one of the one or more text input fields. In some other examples, the operating system may also determine whether a data set corresponding to the one or more user input values is already stored by the remote provider. The operating system may only generate the prompt upon a determination that the data set corresponding to the one or more user input values is not already stored by the remote provider. In a further aspect, in some examples, the operating system may determine whether the data set corresponding to the one or more user input values is already stored by the remote provider based on a list of data sets previously provided by the remote provider. In another example, if there are multiple sets of input values to be used in autofill and associated identifiers, these identifiers may be displayed as a list of identifiers (in a drop-down menu or otherwise), each corresponding to a set of values, for the user's selection. In yet another example, a fill indicator may also be displayed in the text input fields that could be filled with these values. Specifically, the fill indicator may allow the user to preview which text input fields will be autofilled with a particular set of values before engaging in autofill. Further, the fill indicator may be displayed in the form of a graphic generated by the operating system (e.g., a pencil graphic) or a specific graphic received from the remote provider or otherwise (e.g., a brand or graphic associated with the remote provider). Other illustrative examples are certainly possible. Additionally, the methods described herein may further include receiving, by the operating system, input data indicating a selection of a data set identifier. In general, the receipt of the input data indicating the selection of a data set identifier may indicate to the operating system that the user is selecting the data set identifier and the values associated with that identifier for use in autofill. V. Additional Explanatory Figures and Example Embodiments FIG. 3A shows a prompt to set up autofill, in accordance with example embodiments. In particular, a user device 302 such as a cellular phone may display a portion of application 304 on the user device. The application 304 may also include text input fields containing or associated with common autofill descriptors 306 (e.g., “Card #”) and/or 308 (e.g., “Expiration Date”). In this example, once the operating system of the user device 302 recognizes one or more common autofill descriptors 306 and/or 308, the operating system displays a prompt to set up autofill 310. In further examples, as discussed above, after electing to set up autofill, the user may be prompted to select one or more remote providers to help facilitate autofill and may also be prompted to review and approve one or more agreements associated with the selected provider. FIG. 3B shows a displayed data set identifier for selection via the UI of the user device, in accordance with example embodiments. In particular, a user device 302 such as a cellular phone may display a portion of application 304 on the user device.
In this example, as described above, in response to a text input field of the application having focus 312 (here, the “Card #” text input field has a vertical line indicating that text can be typed into the field via the displayed keyboard) while other fields lack such focus 314 (here, the “Expiration Date” text input field has no such vertical line), the operating system recognizes this focus, determines a subset of the content displayed on the UI that excludes user-specific information, transmits that subset of content to the remote provider, and, once the response from the remote provider is received, displays, for the user's selection, the data set identifier 316 associated with previously inputted user values to be used in autofill. In further examples, the operating system may also display a fill indicator 318 (here, a pencil graphic) in the text input fields to be filled with the previously inputted user data. FIG. 3C shows a plurality of displayed data set identifiers for selection via the UI of the user device, in accordance with example embodiments. Unlike the example embodiment in FIG. 3B, once the response from the remote provider is received, the operating system of the user device 302 displays, for the user's selection, a list of data set identifiers 320 associated with each set of previously inputted user data to potentially be used in autofill. In further examples, the operating system may also display a fill indicator 318 (here, a pencil graphic) in the text input fields to be filled with the user inputted data in a variety of ways. For example, in one aspect, the operating system may display a fill indicator 318 in the text input fields based on receiving a preselection of an identifier from the displayed list of identifiers. In another example, however, the operating system may display a fill indicator 318 in the text input fields that could be filled with the previously inputted user data associated with any of the identifiers from the displayed list of identifiers. Other illustrative examples are certainly possible. FIG. 4A shows user interface functionality associated with a user's manual entry of data into the text input fields of an application displayed on the UI, in accordance with example embodiments. More specifically, a user device 402 such as a cellular phone may display a portion of application 404 on the user device. In this example, in spite of having authorized the operating system of the mobile device to engage in autofill, the user may manually enter input data into a first text input field 406 (here, the “Card #” text input field) and a second text input field 408 (here, the “Expiration Date” text input field). In a further aspect, the user may manually enter this input data in spite of one or more displayed data set identifiers 410 associated with the previously inputted user data to be used in autofill and/or a displayed fill indicator 412 (here, a pencil graphic) in the text input field to be filled with this data. In another example embodiment, in response to a user manually entering input data into a text input field, the operating system may filter the displayed data set identifiers and/or the associated data to be used in autofill, limiting them to those that match or correspond to the input data being manually entered by the user.
For example, if the user began manually entering a credit card number that did not match any of the previously inputted user data, the operating system may filter out all of that data when determining what to display for autofill uses via the UI. FIG. 4B illustrates a confirmation message and a generic data-save prompt associated with the user's manual input of data into the text input fields of the application displayed on the UI as shown in FIG. 4A, in accordance with example embodiments. Specifically, once the user has manually entered input data associated with the text input fields displayed on the UI, the operating system may generate and display a confirmation message 414 to apprise the user that the text input fields of the application displayed on the UI have been filled. In a further example, the operating system may also display a generic prompt 416 allowing the user to save the input data entered into the text input fields of the application, which may also include transmitting the input data to the selected remote provider for future autofill use. In a further aspect, the operating system may save the input data for future autofill uses by temporarily holding the data until the user responds to the prompt, at which point the operating system may send the data to the remote provider. In another example, however, the operating system may disregard the user-inputted data as soon as the data has been sent to the autofill provider. Similar to FIG. 4A, FIG. 4C shows user interface functionality associated with a user's manual entry of data into the text input fields of an application displayed on the UI, except in accordance with other example embodiments. More specifically, a user device 402 such as a cellular phone may display a portion of application 404 on the user device. In this example, in spite of having authorized the operating system of the mobile device to engage in autofill, the user may manually enter input data into a first text input field 406 (here, the “Card #” text input field) and a second text input field 408 (here, the “Expiration Date” text input field). In a further aspect, the user may manually enter this input data in spite of one or more displayed data set identifiers 410 associated with the previously inputted user data to be used in autofill and/or a displayed fill indicator 412 (here, a pencil graphic) in the text input field to be filled with this data. Unlike FIG. 4B, however, FIG. 4D illustrates a confirmation message and a customized data-save prompt associated with the user's manual input of data into the text input fields of the application displayed on the UI as shown in FIG. 4C, in accordance with example embodiments. Specifically, once the user has manually entered input data associated with the text input fields displayed on the UI, the operating system may generate and display a confirmation message 418 to apprise the user that the text input fields of the application displayed on the UI have been filled. In a further example, the operating system may also display a customized prompt 420 allowing the user to save the input data entered into the text input fields of the application, which may also include transmitting the input data to the selected remote provider for future autofill use. Here, as opposed to generic prompt 416, customized prompt 420 includes several customized aspects based on the user's manually inputted data.
For example, these customized aspects may include: a customized identifier “B” (which may be associated with the credit card provider, the remote provider, or both); a customized representation of the credit card's details (the last four digits of the credit card number “-0121” and its expiration date “12/20”); and its relationship to other previously inputted credit cards (“CC3-0121”). Furthermore, these customized aspects may differ from previously generated aspects (e.g., for credit card type A, a logo or associated graphic “A” (shown here in the context of 410) may be shown, but for credit card type B, a logo or associated graphic “B” (shown here in the context of 420) may be shown along with customized text (shown here at 420, “Save Credit Card B for Autofill Provider X?”) in a customized prompt). In a further aspect, the operating system may save the input data for future autofill uses by temporarily holding the data until the user responds to the prompt, at which point the operating system may send the data to the remote provider; alternatively, the operating system may disregard the user-inputted data as soon as the data has been sent to the autofill provider. Either way, generating and displaying a customized prompt 420 is no trivial task. The operating system may not comprehend the pertinent details or logic of manually inputted user data (e.g., the concept of a credit card, and hence cannot infer its type based on a user-inputted number, whether it is valid, what icon should be displayed based on the issuing bank, etc.). That information and logic may be offloaded from the mobile device (e.g., onto an affiliated remote autofill provider) to generate customized prompts. Thus, when attempting to provide the user with an accurate approximation of the type, extent, and details of the user-inputted data that will be saved by the system for future autofill uses, there is a chicken-and-egg problem: the operating system of the mobile device can only infer that logic once it gets the user-inputted data via the text input fields entered by the user, but it also cannot analyze, transmit, or store that data to provide an informed prompt before the user consents. To provide the user with an accurate approximation of the type, extent, and details of the user-inputted data that will be used for future autofill uses without compromising this data before the user consents, the operating system needs a way to obtain this logic from the remote provider without exposing the user's manually inputted data (which may again be PII or secure information, for example a credit card number) before the user consents. Additionally, it may be advantageous for the operating system to do so while maintaining one or more predefined conditions (e.g., preserving security features of the existing save dialog with the user; letting the remote provider customize the save dialog; letting the remote provider transform the data before displaying it; providing a way to not display the save dialog when certain information (for example, a credit card number) is already saved; and not offering the option to save certain information (for example, a credit card number) if the form of the input data is invalid, etc.).
For example, the operating system of user device 402 may determine a subset of content displayed by application 404 (excluding user-specific information) and transmit a request to a remote provider for at least one template for use with application 404. The response from the remote provider may then contain at least one template indicating how to process user input data in one or more text input fields displayed by the application on the UI (e.g., text input field 406 (here, the “Card #” text input field) and/or text input field 408 (here, the “Expiration Date” text input field)). In some examples, such templates may be mechanisms whereby the remote provider provides its business logic expressed in parcelable objects that are sent to the operating system, into which the operating system may feed the user-provided data to infer the necessary business logic. For example, these templates may be used as part of this process, and may be represented by one or more optional objects. In any event, the operating system may need to extract the actual value of an input field displayed on the UI (e.g., in an autofill process), or some other content displayed on the UI, potentially through a ValueFinder. This ValueFinder may be needed to map the actual user input to the fields referenced by the template (i.e., the template provided by the remote provider may have references to a field id, and this function may get the actual value). For example, the operating system may use an interface to aid in this process such as: public interface ValueFinder { AutofillValue findByAutofillId(AutofillId id); } In some examples, the template may be a Validator template, which may be used to validate the format of user-inputted data (e.g., credit card information). In some examples, the Validator template may be used to help the operating system evaluate whether user-inputted data is not valid (e.g., an invalid credit card number). In some examples, if the inputted data is not valid, the operating system will not show a prompt for the user to save the inputted data for future autofill uses at all, generic and customized alike (shown as 416 in FIG. 4B and 420 in FIG. 4D, respectively). Thus, the generation of a customized prompt and the validation template and logic are not interdependent. To facilitate the effective use of this Validator template, the operating system may use a Validator interface such as: public interface Validator extends Parcelable { boolean isValid(ValueFinder finder); } In a further aspect, the operating system may provide multiple implementations of the Validator interface, such as: CharSequenceValidator (which may validate the contents of a single view, based on a regular expression passed by the remote provider); LuhnChecksumValidator (which may validate whether the supplied identification passes the Luhn algorithm); or Validators (which may provide methods to combine multiple validators using logical expressions (like AND and OR)), among other such possibilities. Either way, when a SaveInfo object has a Validator and the operating system is ready to display a prompt, generic and customized alike (shown as 416 in FIG. 4B and 420 in FIG. 4D, respectively), via the UI, it will first feed the session data into the Validator and only show the prompt via the UI if isValid(session) returns true. To facilitate these example uses of this Validator template, the operating system may use the following exemplary code samples.
In the context of a naive validator that requires a credit card input field to have exactly 16 digits:

saveBuilder.setValidator(
    new CharSequenceValidator.Builder(ccNumberId, "^\\d{16}$").build());

In the context of a validator that supports either 15 or 16 digits:

import static android.service.autofill.Validators.or;

saveBuilder.setValidator(or(
    new CharSequenceValidator.Builder(ccNumberId, "^\\d{15}$").build(),
    new CharSequenceValidator.Builder(ccNumberId, "^\\d{16}$").build()));

In the context of a validator that supports either 15 or 16 digits, but they must pass the Luhn algorithm:

import static android.service.autofill.Validators.and;
import static android.service.autofill.Validators.or;

saveBuilder.setValidator(and(
    LuhnChecksumValidator.getInstance(),
    or(
        new CharSequenceValidator.Builder(ccNumberId, "^\\d{15}$").build(),
        new CharSequenceValidator.Builder(ccNumberId, "^\\d{16}$").build())));

In the context of a validator for a screen that stores the credit card number in 4 fields with 4 digits each:

import static android.service.autofill.Validators.and;

saveBuilder.setValidator(and(
    new CharSequenceValidator.Builder(ccNumber1Id, "^\\d{4}$").build(),
    new CharSequenceValidator.Builder(ccNumber2Id, "^\\d{4}$").build(),
    new CharSequenceValidator.Builder(ccNumber3Id, "^\\d{4}$").build(),
    new CharSequenceValidator.Builder(ccNumber4Id, "^\\d{4}$").build()));

In some examples, a template may be a Generator template, which may be used to generate a credit card number. The Generator template may be used to help determine whether such a credit card was already saved for that service. If so, the operating system will not show a prompt for the user to save the inputted data for future autofill uses at all, generic and customized alike (shown as 416 in FIG. 4B and 420 in FIG. 4D, respectively). Thus, the generation of a customized prompt and the Generator template and logic are not interdependent. In some examples, when the remote provider already has some user-inputted data, one or more identifiers may be displayed as a list of identifiers (in a drop-down menu or otherwise), each corresponding to a set of values and/or data, for the user's selection. The Generator template may be integrated with this identifier functionality, as well as with new, manually inputted user data, to compare or distinguish the two, among other methods (e.g., showing different inputted values and/or identifiers via the UI when the user input is slightly different; for example, if the remote provider sends the value “1234” via one or more templates and the user enters “1 2 3 4,” these input values may render different results via the UI, or the same results, depending on the implementation). In further examples, a template may be a CustomPresentation template, which may be used to display a custom presentation (e.g., a customized prompt) via the UI of user device 402. The CustomPresentation template may be used to display a custom presentation that may present images, masked credit card numbers, expiration dates, text with links, and other such information to the user to apprise the user of the type, extent, and details of her inputted data that will be used for future autofill uses.
In some examples, the CustomPresentation object lets the remote provider define a RemoteViews template for the credit card title, and Transformation[s] that will be used to replace child views on that template with values inferred at runtime. In a further aspect, to facilitate the effective use of this CustomPresentation template, the operating system may use a Transformation interface such as: public interface Transformation extends Parcelable { void apply(ValueFinder finder, RemoteViews parentTemplate, int childViewId); } In a further aspect, the operating system may provide multiple implementations of the Transformation interface, including: SingleViewCharSequenceTransformation (which may transform a single view into a string, using a regular expression and a group substitution, and may typically be used to mask a credit card number); MultipleViewsCharSequenceTransformation (which may transform multiple views into a string, using a regular expression and a group substitution, and may typically be used to generate the expiration date); CharSequenceTransformation (which may include some combination of SingleViewCharSequenceTransformation and MultipleViewsCharSequenceTransformation); or ImageTransformation (which may select an image based on which regexp matches the view's value, and may typically be used to select the proper credit card icon), among other such possibilities (e.g., TextTransformation, which may be similar to ImageTransformation but may generate text (like the credit card bank name) instead). To facilitate these example uses of the CustomPresentation template, the operating system may use the following exemplary code samples, which use SingleViewCharSequenceTransformation, MultipleViewsCharSequenceTransformation, or ImageTransformation transformations to generate a customized prompt (such as 420 in FIG. 4D). In the context of defining the XML template for the remote view:

<LinearLayout>
    <ImageView android:id="@+id/templateccLogo"/>
    <TextView android:id="@+id/templateCcNumber"/>
    <TextView android:id="@+id/templateExpDate"/>
</LinearLayout>

Then, in the context of setting the custom presentation and the transformations that populate that template:

saveBuilder.setCustomPresentation(new CustomPresentation.Builder(presentation)
    .addChild(R.id.templateCcNumber,
        new SingleViewCharSequenceTransformation.Builder(
            ccNumberId, "^.*(\\d\\d\\d\\d)$", "...$1")
            .build())
    .addChild(R.id.templateExpDate,
        new MultipleViewsCharSequenceTransformation.Builder()
            .addField(ccExpMonthId, "^(\\d\\d)$", "Exp: $1")
            .addField(ccExpYearId, "^(\\d\\d)$", "/$1")
            .build())
    .addChild(R.id.templateccLogo,
        new ImageTransformation.Builder(ccNumberId)
            .addOption("^4815.*$", R.drawable.visa)
            .addOption("^4816.*$", R.drawable.master_card)
            .build())
    .build());

In a further aspect, in some examples, the MultipleViewsCharSequenceTransformation may be replaced by a StringFormatTransformation, which may be accomplished by exemplary code samples such as:

new StringFormatTransformation.Builder("Exp: %s/%s")
    .addArg(ccExpMonthId, "^(\\d\\d)$")
    .addArg(ccExpYearId, "^(\\d\\d)$")
    .build()

And, in a further aspect, this approach may present certain advantages (e.g., making the template more readable for the operating system). Other such examples are possible.
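For reference, functionality along the lines of the illustrative Validator and CustomPresentation templates above shipped publicly in Android's android.service.autofill package. The following condensed sketch, assuming hypothetical AutofillIds (ccNumberId, ccExpMonthId, ccExpYearId) and application resources (R.layout.cc_save_template, R.id.template*, R.drawable.logo_*), shows how an autofill service might combine a validator with a custom save description; it is an illustration of those public classes, not the specific implementation described in this document.

import android.service.autofill.CharSequenceTransformation;
import android.service.autofill.CustomDescription;
import android.service.autofill.ImageTransformation;
import android.service.autofill.LuhnChecksumValidator;
import android.service.autofill.RegexValidator;
import android.service.autofill.SaveInfo;
import android.service.autofill.Validators;
import android.view.autofill.AutofillId;
import android.widget.RemoteViews;
import java.util.regex.Pattern;

/** Sketch only: field ids and R.* resources are placeholders for whatever the
 *  autofill service resolved from the screen's view structure and its own layout. */
final class CreditCardSaveInfoFactory {

    static SaveInfo build(String pkgName, AutofillId ccNumberId,
            AutofillId ccExpMonthId, AutofillId ccExpYearId) {
        RemoteViews presentation = new RemoteViews(pkgName, R.layout.cc_save_template);

        CustomDescription description = new CustomDescription.Builder(presentation)
                // Masked card number, e.g., "...0121".
                .addChild(R.id.templateCcNumber, new CharSequenceTransformation
                        .Builder(ccNumberId, Pattern.compile("^.*(\\d{4})$"), "...$1")
                        .build())
                // Expiration date assembled from two fields, e.g., "Exp: 12/20".
                .addChild(R.id.templateExpDate, new CharSequenceTransformation
                        .Builder(ccExpMonthId, Pattern.compile("^(\\d\\d)$"), "Exp: $1")
                        .addField(ccExpYearId, Pattern.compile("^(\\d\\d)$"), "/$1")
                        .build())
                // Card logo chosen by issuer prefix.
                .addChild(R.id.templateCcLogo, new ImageTransformation
                        .Builder(ccNumberId, Pattern.compile("^4815.*$"),
                                R.drawable.logo_brand_a, "Brand A")
                        .addOption(Pattern.compile("^1623.*$"), R.drawable.logo_brand_b, "Brand B")
                        .build())
                .build();

        // Only offer to save when the card number passes a Luhn check and has 15 or 16 digits.
        return new SaveInfo.Builder(SaveInfo.SAVE_DATA_TYPE_CREDIT_CARD,
                new AutofillId[] {ccNumberId, ccExpMonthId, ccExpYearId})
                .setValidator(Validators.and(
                        new LuhnChecksumValidator(ccNumberId),
                        new RegexValidator(ccNumberId, Pattern.compile("^\\d{15,16}$"))))
                .setCustomDescription(description)
                .build();
    }
}

In practice, the returned SaveInfo would typically be attached to a FillResponse in the autofill service's onFillRequest callback, so that the framework can apply the validator and custom description when deciding whether and how to show the save prompt.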
Illustrative documentation and sample code segments from another example implementation are provided below:

CharSequenceTransformation:

/**
 * Replaces a {@link TextView} child of a {@link CustomDescription} with the contents of one or
 * more regular expressions (regexs).
 *
 * <p>When it contains more than one field, the fields that match their regex are added to the
 * overall transformation result.
 *
 * <p>For example, a transformation to mask a credit card number contained in just one field would
 * be:
 *
 * <pre class="prettyprint">
 * new CharSequenceTransformation
 *     .Builder(ccNumberId, Pattern.compile("^.*(\\d\\d\\d\\d)$"), "...$1")
 *     .build();
 * </pre>
 *
 * <p>But a transformation that generates a {@code Exp: MM / YYYY} credit expiration date from two
 * fields (month and year) would be:
 *
 * <pre class="prettyprint">
 * new CharSequenceTransformation
 *     .Builder(ccExpMonthId, Pattern.compile("^(\\d\\d)$"), "Exp: $1")
 *     .addField(ccExpYearId, Pattern.compile("^(\\d\\d\\d\\d)$"), " / $1");
 * </pre>
 */

CustomDescription:

/**
 * Defines a custom description for the Save UI affordance.
 *
 * <p>This is useful when the autofill service needs to show a detailed view of what would be saved;
 * for example, when the screen contains a credit card, it could display a logo of the credit card
 * bank, the last four digits of the credit card number, and its expiration date.
 *
 * <p>A custom description is made of 2 parts:
 * <ul>
 *   <li>A {@link RemoteViews presentation template} containing children views.
 *   <li>{@link Transformation Transformations} to populate the children views.
 * </ul>
 *
 * <p>For the credit card example mentioned above, the (simplified) template would be:
 *
 * <pre class="prettyprint">
 * &lt;LinearLayout&gt;
 *   &lt;ImageView android:id="@+id/templateccLogo"/&gt;
 *   &lt;TextView android:id="@+id/templateCcNumber"/&gt;
 *   &lt;TextView android:id="@+id/templateExpDate"/&gt;
 * &lt;/LinearLayout&gt;
 * </pre>
 *
 * <p>Which in code translates to:
 *
 * <pre class="prettyprint">
 * CustomDescription.Builder builder = new CustomDescription.Builder(new RemoteViews(pkgName, R.layout.cc_template));
 * </pre>
 *
 * <p>Then the value of each of the 3 children would be changed at runtime based on the value of
 * the screen fields and the {@link Transformation Transformations}:
 *
 * <pre class="prettyprint">
 * // Image child - different logo for each bank, based on credit card prefix
 * builder.addChild(R.id.templateccLogo,
 *     new ImageTransformation.Builder(ccNumberId)
 *         .addOption(Pattern.compile("^4815.*$"), R.drawable.ic_credit_card_logo1)
 *         .addOption(Pattern.compile("^1623.*$"), R.drawable.ic_credit_card_logo2)
 *         .addOption(Pattern.compile("^42.*$"), R.drawable.ic_credit_card_logo3)
 *         .build());
 * // Masked credit card number (as .....LAST_4_DIGITS)
 * builder.addChild(R.id.templateCcNumber,
 *     new CharSequenceTransformation
 *         .Builder(ccNumberId, Pattern.compile("^.*(\\d\\d\\d\\d)$"), "...$1")
 *         .build());
 * // Expiration date as MM / YYYY:
 * builder.addChild(R.id.templateExpDate,
 *     new CharSequenceTransformation
 *         .Builder(ccExpMonthId, Pattern.compile("^(\\d\\d)$"), "Exp: $1")
 *         .addField(ccExpYearId, Pattern.compile("^(\\d\\d)$"), "/$1")
 *         .build());
 * </pre>
 *
 * <p>See {@link ImageTransformation}, {@link CharSequenceTransformation} for more info about these
 * transformations.
 */

ImageTransformation:

/**
 * Replaces the content of a child {@link ImageView} of a
 * {@link RemoteViews presentation template} with the first image that matches a regular expression
 * (regex).
 *
 * <p>Typically used to display credit card logos. Example:
 *
 * <pre class="prettyprint">
 * new ImageTransformation.Builder(ccNumberId, Pattern.compile("^4815.*$"),
 *         R.drawable.ic_credit_card_logo1, "Brand 1")
 *     .addOption(Pattern.compile("^1623.*$"), R.drawable.ic_credit_card_logo2, "Brand 2")
 *     .addOption(Pattern.compile("^42.*$"), R.drawable.ic_credit_card_logo3, "Brand 3")
 *     .build();
 * </pre>
 *
 * <p>There is no imposed limit on the number of options, but keep in mind that regexes are
 * expensive to evaluate, so use the minimum number of regexes and add the most common first
 * (for example, if this is a transformation for a credit card logo and the most common credit card
 * issuers are banks X and Y, add the regexes that resolve these 2 banks first).
 */

Validator:

/**
 * Sets an object used to validate the user input - if the input is not valid, the
 * autofill save UI is not shown.
 *
 * <p>Typically used to validate credit card numbers. Examples:
 *
 * <p>Validator for a credit card number that must have exactly 16 digits:
 *
 * <pre class="prettyprint">
 * Validator validator = new RegexValidator(ccNumberId, Pattern.compile("^\\d{16}$"));
 * </pre>
 *
 * <p>Validator for a credit card number that must pass a Luhn checksum and either have
 * 16 digits, or 15 digits starting with 108:
 *
 * <pre class="prettyprint">
 * import android.service.autofill.Validators;
 *
 * Validator validator =
 *     and(
 *         new LuhnChecksumValidator(ccNumberId),
 *         or(
 *             new RegexValidator(ccNumberId, Pattern.compile("^\\d{16}$")),
 *             new RegexValidator(ccNumberId, Pattern.compile("^108\\d{12}$"))
 *         )
 *     );
 * </pre>
 *
 * <p><b>Note:</b> the example above is just for illustrative purposes; the same validator
 * could be created using a single regex for the {@code OR} part:
 *
 * <pre class="prettyprint">
 * Validator validator =
 *     and(
 *         new LuhnChecksumValidator(ccNumberId),
 *         new RegexValidator(ccNumberId, Pattern.compile("^(\\d{16}|108\\d{12})$"))
 *     );
 * </pre>
 *
 * <p>Validator for a credit card number contained in just 4 fields, each of which must have exactly
 * 4 digits:
 *
 * <pre class="prettyprint">
 * import android.service.autofill.Validators;
 *
 * Validator validator =
 *     and(
 *         new RegexValidator(ccNumberId1, Pattern.compile("^\\d{4}$")),
 *         new RegexValidator(ccNumberId2, Pattern.compile("^\\d{4}$")),
 *         new RegexValidator(ccNumberId3, Pattern.compile("^\\d{4}$")),
 *         new RegexValidator(ccNumberId4, Pattern.compile("^\\d{4}$"))
 *     );
 * </pre>
 *
 * @param validator an implementation provided by the Android System.
 * @return this builder.
 *
 * @throws IllegalArgumentException if {@code validator} is not a class provided
 * by the Android System.
 */

VI. Computing Device In reference now to FIG. 5, FIG. 5 is a functional block diagram of computing device 500, in accordance with example embodiments. In particular, computing device 500 shown in FIG. 5 can be configured to perform at least one function of server device 108 and/or 110, and/or remote provider 112 and/or 114, any of user devices 104a-104e, method 200, user device 302, and/or user device 402 as previously described.
Computing device 500 may include a user interface module 501, a network-communication interface module 502, one or more processors 503, data storage 504, and one or more sensors 520, all of which may be linked together via a system bus, network, or other connection mechanism 505. User interface module 501 can be operable to send data to and/or receive data from external user input/output devices. For example, user interface module 501 can be configured to send and/or receive data to and/or from user input devices such as a keyboard, a keypad, a touch screen, a presence-sensitive display, a computer mouse, a track ball, a joystick, a camera, a voice recognition module, and/or other similar devices. User interface module 501 can also be configured to provide output to user display devices, such as one or more cathode ray tubes (CRT), liquid crystal displays, light emitting diodes (LEDs), displays using digital light processing (DLP) technology, printers, light bulbs, and/or other similar devices, either now known or later developed. User interface module 501 can also be configured to generate audible output(s), such as a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices. User interface module 501 can further be configured with one or more haptic devices that can generate haptic output(s), such as vibrations and/or other outputs detectable by touch and/or physical contact with computing device 500. In some embodiments, user interface module 501 can be used to provide a GUI for utilizing computing device 500. Network-communications interface module 502 can include one or more wireless interfaces 507 and/or one or more wireline interfaces 508 that are configurable to communicate via a network. Wireless interfaces 507 can include one or more wireless transmitters, receivers, and/or transceivers, such as a Bluetooth transceiver, a Zigbee transceiver, a Wi-Fi transceiver, a WiMAX transceiver, and/or other similar type of wireless transceiver configurable to communicate via a wireless network. Wireline interfaces 508 can include one or more wireline transmitters, receivers, and/or transceivers, such as an Ethernet transceiver, a Universal Serial Bus (USB) transceiver, or similar transceiver configurable to communicate via a twisted pair wire, a coaxial cable, a fiber-optic link, or a similar physical connection to a wireline network. In some embodiments, network communications interface module 502 can be configured to provide reliable, secured, and/or authenticated communications. For each communication, information for ensuring reliable communications (i.e., guaranteed message delivery) can be provided, perhaps as part of a message header and/or footer (e.g., packet/message sequencing information, encapsulation header(s) and/or footer(s), size/time information, and transmission verification information such as CRC and/or parity check values). Communications can be made secure (e.g., be encoded or encrypted) and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, DES, AES, RSA, Diffie-Hellman, and/or DSA. Other cryptographic protocols and/or algorithms can be used as well or in addition to those listed herein to secure (and then decrypt/decode) communications. 
One or more processors 503 can include one or more general purpose processors, and/or one or more special purpose processors (e.g., digital signal processors, graphics processing units, application specific integrated circuits, etc.). One or more processors 503 can be configured to execute computer-readable program instructions 506 that are contained in data storage 504 and/or other instructions as described herein. Data storage 504 can include one or more computer-readable storage media that can be read and/or accessed by at least one of one or more processors 503. The one or more computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with at least one of one or more processors 503. In some embodiments, data storage 504 can be implemented using a single physical device (e.g., one optical, magnetic, organic or other memory or disc storage unit), while in other embodiments, data storage 504 can be implemented using two or more physical devices. Data storage 504 can include computer-readable program instructions 506 and perhaps additional data. In some embodiments, data storage 504 can additionally include storage required to perform at least part of the herein-described methods, scenarios, and techniques and/or at least part of the functionality of the herein-described devices and networks. In some embodiments, computing device 500 can include one or more sensors 520. Sensor(s) 520 can be configured to measure conditions in an environment of computing device 500 and provide data about that environment. For example, sensor(s) 520 can include one or more of: (i) an identification sensor to identify other objects and/or devices, such as, but not limited to, an RFID reader, proximity sensor, one-dimensional barcode reader, two-dimensional barcode (e.g., Quick Response (QR) code) reader, and a laser tracker, where the identification sensor(s) can be configured to read identifiers, such as RFID tags, barcodes, QR codes, and/or other devices and/or object configured to be read and provide at least identifying information; (ii) sensors to measure locations and/or movements of computing device 500, such as, but not limited to, a tilt sensor, a gyroscope, an accelerometer, a Doppler sensor, a Global Positioning System (GPS) device, a sonar sensor, a radar device, a laser-displacement sensor, and a compass; (iii) an environmental sensor to obtain data indicative of an environment of computing device 500, such as, but not limited to, an infrared sensor, an optical sensor, a light sensor, a camera, a biosensor, a biometric sensor, a capacitive sensor, a touch sensor, a temperature sensor, a wireless sensor, a radio sensor, a movement sensor, a microphone, a sound sensor, an ultrasound sensor, and/or a smoke sensor; and (iv) a force sensor to measure one or more forces (e.g., inertial forces and/or G-forces) acting about computing device 500, such as, but not limited to one or more sensors that measure: forces in one or more dimensions, torque, ground force, friction, and/or a zero moment point (ZMP) sensor that identifies ZMPs and/or locations of the ZMPs. Many other examples of sensor(s) 520 are possible as well. The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. 
Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The above detailed description describes various features and functions of the disclosed systems, devices, and methods with reference to the accompanying figures. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, figures, and claims are not meant to be limiting. Other embodiments can be utilized, and other changes can be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein. With respect to any or all of the ladder diagrams, scenarios, and flow charts in the figures and as discussed herein, each block and/or communication may represent a processing of information and/or a transmission of information in accordance with example embodiments. Alternative embodiments are included within the scope of these example embodiments. In these alternative embodiments, for example, functions described as blocks, transmissions, communications, requests, responses, and/or messages may be executed out of order from that shown or discussed, including substantially concurrent or in reverse order, depending on the functionality involved. Further, more or fewer blocks and/or functions may be used with any of the ladder diagrams, scenarios, and flow charts discussed herein, and these ladder diagrams, scenarios, and flow charts may be combined with one another, in part or in whole. A block that represents a processing of information may correspond to circuitry that can be configured to perform the specific logical functions of a herein-described method or technique. Alternatively or additionally, a block that represents a processing of information may correspond to a module, a segment, or a portion of program code (including related data). The program code may include one or more instructions executable by a processor for implementing specific logical functions or actions in the method or technique. The program code and/or related data may be stored on any type of computer readable medium such as a storage device including a disk or hard drive or other storage medium. The computer readable medium may also include non-transitory computer readable media such as non-transitory computer-readable media that stores data for short periods of time like register memory, processor cache, and random access memory (RAM). The computer readable media may also include non-transitory computer readable media that stores program code and/or data for longer periods of time, such as secondary or persistent long term storage, like read only memory (ROM), optical or magnetic disks, compact-disc read only memory (CD-ROM), for example. The computer readable media may also be any other volatile or non-volatile storage systems. 
A computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device. Moreover, a block that represents one or more information transmissions may correspond to information transmissions between software and/or hardware modules in the same physical device. However, other information transmissions may be between software modules and/or hardware modules in different physical devices. While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are provided for explanatory purposes and are not intended to be limiting, with the true scope being indicated by the following claims. 16608372 google llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Oct 18th, 2019 12:00AM https://www.uspto.gov?id=US11317018-20220426 Camera operable using natural language commands In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receive an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by the camera. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token. 11317018 1. A method comprising: while a computing device is operating in an image capture mode: determining, by the computing device, based on an audio input detected by the computing device, a visual token, and a specified action to be performed by the visual token, to be included in one or more images to be captured by a camera of the computing device; locating, by the computing device, the visual token within an image preview generated by the computing device; responsive to locating the visual token within the image preview: determining, by the computing device, a context of the one or more images to be captured by the camera of the computing device; and automatically adjusting, based on the context, camera controls to zoom, or crop the visual token within the image preview; monitoring, by the computing device, the image preview to determine when the visual token in the image preview begins performing the specified action; and responsive to determining that the visual token in the image preview begins performing the specified action, automatically capturing, by the computing device, one or more images that include the visual token. 2. The method of claim 1, wherein the audio input associated with the image capture command comprises a natural language user input detected by a microphone of the computing device. 3. The method of claim 1, wherein the visual token is one of a plurality of visual tokens to be included in the one or more images to be captured by the camera, the method further comprising: locating, by the computing device, each of the plurality of visual tokens within the image preview to be captured by the camera, wherein automatically capturing the one or more images that include the plurality of visual tokens is further in response to determining that the image preview includes the plurality of visual tokens. 4. The method of claim 1, further comprising: determining, by the computing device, one or more relationships between at least two visual tokens from a plurality of visual tokens, wherein the visual token is included in the plurality of visual tokens. 5. The method of claim 4, further comprising: determining, by the computing device, the one or more relationships based at least in part on a hidden Markov model. 6. The method of claim 1, wherein the context comprises a location or a scene of the one or more images. 7. 
The method of claim 1, wherein the audio input is natural language user input, and wherein locating the visual token comprises: matching, by the computing device, the natural language user input with a first referential visual token of one or more referential visual tokens from a model of predetermined tokens; comparing, by the computing device, the first referential visual token with each of one or more visual tokens within the image preview; and determining, by the computing device, that the visual token that most closely matches the first referential visual token is the visual token to be included in the one or more images to be captured. 8. The computing device of claim 1, wherein the context comprises a location or a scene of the one or more images. 9. A computing device comprising: a camera; a microphone that detects audio input; at least one processor; and a storage device that stores one or more modules that, when executed by the at least one processor, causes the at least one processor to: determine, based on the audio input detected by the microphone, a visual token, and a specified action to be performed by the visual token, to be included in one or more images to be captured by the camera; locate the visual token within an image preview; responsive to locating the visual token within the image preview: determine a context of the one or more images to be captured by the camera of the computing device; and automatically adjust, based on the context, camera controls to zoom, or crop the visual token within the image preview; monitor the image preview to determine when the visual token in the image preview begins performing the specified action; and responsive to determining that the visual token in the image preview begins performing the specified action automatically capture, using the camera, one or more images that include the visual token. 10. The computing device of claim 9, wherein the audio input associated with the image capture command comprises a natural language user input detected by the microphone. 11. The computing device of claim 9, wherein the visual token is one of a plurality of visual tokens to be included in the one or more images to be captured by the camera, and wherein the one or more modules cause the at least one processor to: locate each of the plurality of visual tokens within the image preview to be captured by the camera; and automatically capture the one or more images that include the plurality of visual tokens in response to determining that the image preview includes the plurality of visual tokens. 12. The computing device of claim 9, wherein the one or more modules cause the at least one processor to: determine one or more relationships between at least two visual tokens from a plurality of visual tokens, wherein the visual token is included in the plurality of visual tokens. 13. The computing device of claim 12, wherein the at least one processor is further configured to determine the one or more relationships based at least in part on a hidden Markov model. 14. The computing device of claim 9, wherein the one or more modules cause the at least one processor to capture the one or more images further in response to obtaining an indication of user input to confirm the one or more images. 15. 
A non-transitory computer-readable storage medium comprising instructions that, when executed, cause at least one processor of a computing device to: determine, based on an audio input detected by the computing device, a visual token, and a specified action to be performed by the visual token, to be included in one or more images to be captured by a camera of the computing device; locate the visual token within an image preview generated by the computing device; responsive to locating the visual token within the image preview: determine a context of the one or more images to be captured by the camera of the computing device; and automatically adjust, based on the context, camera controls to zoom, or crop the visual token within the image preview; monitor the image preview to determine when the visual token in the image preview begins performing the specified action; and responsive to determining that the visual token in the image preview begins performing the specified action, automatically capture one or more images that include the visual token. 16. The non-transitory computer-readable storage medium of claim 15, wherein the audio input associated with the image capture command comprises a natural language user input detected by a microphone of the computing device. 17. The non-transitory computer-readable storage medium of claim 15, wherein the visual token is one of a plurality of visual tokens to be included in the one or more images to be captured by the camera of the computing device, and wherein the instructions, when executed, cause the at least one processor to: locate each of the plurality of visual tokens within the image preview to be captured by the camera of the computing device; and automatically capture the one or more images that include the plurality of visual tokens in response to determining that the image preview includes the plurality of visual tokens. 18. The non-transitory computer-readable storage medium of claim 15, wherein the instructions, when executed, cause the at least one processor to: determine one or more relationships between at least two visual tokens from a plurality of visual tokens, wherein the visual token is included in the plurality of visual tokens. 19. The non-transitory computer-readable storage medium of claim 18, wherein the instructions, when executed, cause the at least one processor to determine the one or more relationships using a hidden Markov model. 20. The non-transitory computer-readable storage medium of claim 15, wherein the context comprises a location or a scene of the one or more images. 20 RELATED APPLICATION This application is a continuation of U.S. application Ser. No. 16/242,724, filed Jan. 8, 2019, which is a continuation of U.S. application Ser. No. 15/358,770, filed Nov. 22, 2016 and issued as U.S. Pat. No. 10,212,338, the entire contents of each of which are hereby incorporated by reference. BACKGROUND Some computing devices may rely on presence-sensitive technology for receiving user input to operate a camera of the computing device. For example, a computing device may display a graphical user interface (GUI) for controlling a camera at a touch screen and receive user input at the touch screen to cause the camera to take a picture or video, focus the camera on a particular subject, adjust the flash of the camera, or control some other camera function and/or picture characteristic. Relying on a GUI and presence-sensitive technology as the primary way to control a camera can have drawbacks. 
For example, while trying to take a picture or video (e.g., of a moving object), a user may be too slow in providing his or her inputs at the GUI and may cause the camera to miss the shot. In addition, interacting with a GUI while trying to frame the scene in a camera viewfinder may be cumbersome and somewhat impractical, as inputs to the GUI may cause the device to move, which may blur or otherwise adversely affect the quality of the resulting photo or video. SUMMARY In one example, the disclosure is directed to a method that includes, while a computing device is operating in an image capture mode, receiving, by the computing device, an indication of a natural language user input associated with an image capture command. The method further includes determining, by the computing device, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The method also includes locating, by the computing device, the visual token within an image preview output by the computing device while operating in the image capture mode. The method further includes capturing, by the computing device, one or more images of the visual token. In another example, the disclosure is directed to a computing device that includes a camera, at least one processor, and at least one non-transitory computer-readable storage medium storing instructions that are executable by the at least one processor to, while the computing device is operating in an image capture mode, receive an indication of a natural language user input associated with an image capture command. The instructions are further executable by the at least one processor to determine, based on the image capture command, a visual token to be included in one or more images to be captured by the camera. The instructions are further executable by the at least one processor to locate the visual token within an image preview output by the computing device while operating in the image capture mode. The instructions are further executable by the at least one processor to capture one or more images of the visual token. In another example, the disclosure is directed to a non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor of a computing device to, while the computing device is operating in an image capture mode, receive an indication of a natural language user input associated with an image capture command. The instructions further cause the at least one processor of the computing device to determine, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The instructions further cause the at least one processor of the computing device to locate the visual token within an image preview output by the computing device while operating in the image capture mode. The instructions further cause the at least one processor of the computing device to capture one or more images of the visual token. The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims. BRIEF DESCRIPTION OF DRAWINGS FIG.
1 is a conceptual diagram illustrating an example computing system with an example computing device configured to receive an indication of a natural language user input associated with an image capture command and execute the image capture command, in accordance with one or more aspects of the present disclosure. FIG. 2 is a block diagram illustrating an example computing device configured to receive an indication of a natural language user input associated with an image capture command and execute the image capture command, in accordance with one or more aspects of the present disclosure. FIG. 3 is a conceptual diagram illustrating an example image capture command executable by a computing device, in accordance with one or more aspects of the present disclosure. FIG. 4 is another conceptual diagram illustrating a second example image capture command executable by a computing device. FIG. 5 is a flowchart illustrating example operations of an example computing device configured to receive an indication of a natural language user input associated with an image capture command and execute the image capture command, in accordance with one or more aspects of the present disclosure. DETAILED DESCRIPTION In general, techniques of this disclosure may enable a computing device to interpret natural language user inputs for precisely controlling a camera of the computing device to take pictures or videos of specific visual tokens of real-world objects, actions, persons, locations, concepts, or scenes. For example, a computing device that includes a camera may receive an indication of a natural language user input associated with an image capture command. For instance, a microphone of the computing device may receive an audio input as the user speaks the phrase “take a picture of the girl in the yellow dress jumping up and down.” The computing device may analyze the natural language input and determine an image capture command and one or more visual tokens to be included in one or more images to be captured by the camera. For example, using natural language processing techniques on the audio input received by the microphone, the computing device may recognize the phrase “take a picture” as an image capture command and the phrase “girl in the yellow dress jumping up and down” as the visual token. The computing device may locate the visual token within an image preview being output for display by the computing device (e.g., as part of a viewfinder of a graphical user interface). For example, using image processing techniques, the computing device may identify a portion of the image preview that corresponds to the shape and color of a girl in a yellow dress. The computing device may automatically execute the image capture command indicated by the natural language input to capture one or more images of the object specified by the natural language input. For example, the computing device may adjust the camera controls to focus, crop, or otherwise enhance the image preview so that the camera takes a picture that is fixated on the girl in the yellow dress. In this way, rather than requiring user inputs at a presence-sensitive input device to control a camera of a device, the techniques of this disclosure may enable a computing device to take pictures, video, or otherwise control a camera using natural language user inputs.
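To make the parsing step described above concrete, the following is a minimal Python sketch, not taken from the disclosure, of splitting a transcribed phrase such as “take a picture of the girl in the yellow dress jumping up and down” into an image capture command and a visual-token description. The function and pattern names (parse_capture_request, COMMAND_PATTERNS, CaptureRequest) are hypothetical.

# Hypothetical sketch of mapping a transcript to a capture command and token.
import re
from dataclasses import dataclass
from typing import Optional

COMMAND_PATTERNS = [
    (r"^(take a picture of|photograph)\s+(?P<token>.+)$", "capture_still"),
    (r"^(take a video of|record)\s+(?P<token>.+)$", "capture_video"),
]

@dataclass
class CaptureRequest:
    command: str            # e.g. "capture_still"
    token_description: str  # e.g. "the girl in the yellow dress jumping up and down"

def parse_capture_request(transcript: str) -> Optional[CaptureRequest]:
    """Split a transcribed natural language input into command and token description."""
    text = transcript.strip().lower()
    for pattern, command in COMMAND_PATTERNS:
        match = re.match(pattern, text)
        if match:
            return CaptureRequest(command=command, token_description=match.group("token"))
    return None  # no image capture command recognized

# parse_capture_request("take a picture of the girl in the yellow dress jumping up and down")
# -> CaptureRequest(command="capture_still",
#                   token_description="the girl in the yellow dress jumping up and down")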
The computing device may execute complex operations in capturing one or more images of a visual token based purely on voice inputs and without requiring the user to touch a screen or a button of the computing device. The computing device may receive the natural language user input orally, allowing the user to devote their full attention to stabilizing the computing device while the computing device processes the image capture command and performs the functions associated with the image capture command. Throughout the disclosure, examples are described where a computing device and/or a computing system may analyze information (e.g., voice inputs from a user) associated with a computing device only if the computing device receives permission from the user to analyze the information. For example, in situations discussed below in which the computing device may collect or may make use of information associated with the user, the user may be provided with an opportunity to provide input to control whether programs or features of the computing device can collect and make use of user information or to dictate whether and/or how the computing device may receive content that may be relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used by the computing device and/or computing system, so that personally-identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined about the user. Thus, the user may have control over how information is collected about the user and used by the computing device. FIG. 1 is a conceptual diagram illustrating an example computing system 1 with an example computing device 10 configured to receive an indication of a natural language user input associated with an image capture command and execute the image capture command, in accordance with one or more aspects of the present disclosure. Computing system 1 of FIG. 1 is an example computing system that includes computing device 10. Computing system 1, in other examples, may also include other external devices, such as a server device, a network, or other camera devices. In the example of FIG. 1, computing device 10 is a mobile computing device (e.g., a mobile phone). However, computing device 10 may be any type of mobile or non-mobile computing device such as a tablet computer, a personal digital assistant (PDA), a desktop computer, a laptop computer, a gaming system, a media player, an e-book reader, a television platform, an automobile navigation system, or a wearable computing device (e.g., a computerized watch, computerized eyewear, a computerized glove). As shown in FIG. 1, computing device 10 includes a user interface device (UID) 12. UID 12 of computing device 10 may function as an input device for computing device 10 and as an output device. UID 12 may be implemented using various technologies. For instance, UID 12 may function as an input device using a presence-sensitive input screen, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive display technology.
UID 12 may function as an output (e.g., display) device using any one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to a user of computing device 10. UID 12 of computing device 10 may include a presence-sensitive display that may receive tactile input from a user of computing device 10. UID 12 may receive indications of the tactile input by detecting one or more gestures from a user of computing device 10 (e.g., the user touching or pointing to one or more locations of UID 12 with a finger or a stylus pen). UID 12 may present output to a user, for instance at a presence-sensitive display. UID 12 may present the output as a graphical user interface (e.g., user interface 14), which may be associated with functionality provided by computing device 10. For example, UID 12 may present various user interfaces of components of a computing platform, operating system, applications, or services executing at or accessible by computing device 10 (e.g., an electronic message application, an Internet browser application, a mobile operating system, etc.). A user may interact with a respective user interface to cause computing device 10 to perform operations relating to a function. In accordance with the techniques of this disclosure, user interface (UI) module 21 of computing device 10 may utilize UID 12 to show image preview 16 when computing device 10 is operating in an image capture mode. Computing device 10 may be configured to operate in different modes, or device states. In some examples, the mode in which computing device 10 is operating may be dependent on an application being executed by one or more modules of computing device 10. In general, as referred to in this disclosure, an “image capture mode” may be considered any mode or state that a computing device, such as computing device 10, enters after receiving an initial indication of user input to utilize a camera, such as camera 30, but prior to the camera actually being utilized to capture an image, take a photo, take a video, or otherwise generate and store data that represents one or more captured images. For instance, when computing device 10 is operating in the image capture mode, one or more modules of computing device 10 may be executing a camera application or otherwise providing an interface where a user may interact with camera 30 utilizing computing device 10. However, while operating in the image capture mode, camera 30 of computing device 10 may not yet have performed an operation to capture an image that is stored as a captured image or video. An image capture mode is in contrast to, and different from, a “post capture mode”, such as an “image evaluation mode”. As referred to in this disclosure, a “post capture mode” represents any mode that a computing device, such as computing device 10, enters immediately after performing an operation to capture an image that is stored as a captured image or video. For example, computing device 10 may, while operating in a post capture mode, output for display the captured image taken by camera 30 for post processing, user evaluation, user confirmation, or user-initiated deletion, among other things.
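The distinction between the image capture mode and the post capture mode can be pictured as a small state machine. The sketch below is illustrative only and uses hypothetical names (CameraMode, CameraSession); the disclosure does not prescribe any particular implementation.

# Hedged sketch of the mode transitions described above; names are illustrative.
from enum import Enum, auto

class CameraMode(Enum):
    IMAGE_CAPTURE = auto()   # camera interface active, no image stored yet
    POST_CAPTURE = auto()    # a captured image is shown for review

class CameraSession:
    def __init__(self):
        self.mode = CameraMode.IMAGE_CAPTURE

    def capture(self):
        # Capturing and storing an image moves the session into post capture mode.
        self.mode = CameraMode.POST_CAPTURE

    def request_another_picture(self):
        # A subsequent user input to take another picture returns to image capture mode.
        self.mode = CameraMode.IMAGE_CAPTURE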
In some examples, if computing device 10 receives a subsequent indication of user input indicating that the user would like to take another picture, computing device 10 may exit the post capture mode and return to operating in the image capture mode. Computing device 10 may include various input devices. For instance, computing device 10 may include camera 30. Camera 30 may be an optical instrument for recording or capturing images. Camera 30 may capture individual still photographs or sequences of images constituting videos or movies. Camera 30 may be a physical component of computing device 10. Camera 30 may include a camera application that acts as an interface between a user of computing device 10 and the functionality of camera 30. Camera 30 may perform various functions, such as capturing one or more images, focusing on one or more objects, and utilizing various flash settings, among other things. Computing device 10 may include microphone 32. Microphone 32 may be a transducer that converts sound into an electrical signal to be processed by one or more modules of computing device 10. Microphone 32 may use electromagnetic induction (dynamic microphones), capacitance change (condenser microphones) or piezoelectricity (piezoelectric microphones) to produce the electrical signal from air pressure variations. Microphone 32 may output the electrical signal in analog or digital form. For example, microphone 32 may output the electrical signal as an analog output and/or may output the electrical signal in digital form, such as a message, a sequence of bits, or other digital output. Object module 20 may receive the output from microphone 32 and process the output to determine spoken input received by microphone 32. Computing device 10 may include object module 20 and image module 22. Modules 20 and 22 may perform operations described using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and/or executing at computing device 10. Computing device 10 may execute modules 20 and 22 with one or more processors. Computing device 10 may execute modules 20 and 22 as a virtual machine executing on underlying hardware. Modules 20 and 22 may execute as a service or component of an operating system or computing platform. Modules 20 and 22 may execute as one or more executable programs at an application layer of a computing platform. UID 12 and modules 20 and 22 may otherwise be arranged remotely from, and remotely accessible to, computing device 10, for instance, as one or more network services operating at a network in a network cloud. In general, object module 20 may perform various techniques of this disclosure associated with natural language command processing and object location. For instance, object module 20 may receive indications of user input to computing device 10, such as the spoken inputs received by microphone 32. Object module 20 may further interpret the indications of user input to determine a function to be performed in response to the receipt of the indications of user input. Object module 20 may locate and determine various visual tokens within an image preview of an image to be captured by camera 30 or an image that camera 30 has already captured based on referential visual tokens stored by computing device 10. In other words, a referential visual token may be data stored in computing device 10 that describes one or more characteristics of visual tokens that computing device 10 may detect within the image preview.
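As one possible reading of the referential visual token concept, the following sketch models a referential visual token as a record of identifying characteristics together with a rough match against a spoken description. The schema, field names, and matching heuristic are assumptions for illustration, not details from the disclosure.

# Illustrative referential visual token record and a rough description match.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReferentialVisualToken:
    label: str                                   # e.g. "dog", "girl", "scoreboard"
    shape_keywords: List[str] = field(default_factory=list)
    color_keywords: List[str] = field(default_factory=list)
    sample_image_paths: List[str] = field(default_factory=list)

def matches_description(token: ReferentialVisualToken, description: str) -> bool:
    """Very rough match of a spoken description against a referential token."""
    words = set(description.lower().split())
    return token.label in words or bool(words & set(token.color_keywords))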
In general, image module 22 may perform various techniques of this disclosure associated with capturing images and executing image capture commands that are interpreted from user inputs processed by object module 20. For instance, image module 22 may utilize camera 30 to capture one or more images of the object located by object module 20. Image module 22 may further perform aspects of the image capture command, such as focusing camera 30 on a visual token, cropping an image around a visual token, zooming camera 30 to a visual token, or capturing one or more images of the visual token using camera 30 while the object is performing a particular action. In other words, image module 22 may perform actions directly associated with the use of camera 30. In accordance with the techniques of this disclosure, computing device 10 may perform various functions while operating in the image capture mode. When computing device 10 is operating in the image capture mode, one or more modules of computing device 10 may be executing a camera application or otherwise providing an interface where a user may interact with camera 30 utilizing computing device 10. In other instances, computing device 10 may be operating in the image capture mode whenever computing device 10 is able to receive indications of user input to readily capture one or more images. While in the image capture mode, UI module 21 of computing device 10 may output graphical user interface 14 that includes image preview 16. Image preview 16 may include a digital representation of what would be included in a captured image if camera 30 were to immediately capture an image. As a user of computing device 10 moves camera 30, UI module 21 may update image preview 16 to show the new digital representation of what would be included in the captured image if camera 30 were to immediately capture an image after moving. In the example of FIG. 1, image preview 16 includes subjects 18A-18F (collectively, subjects 18). While operating in the image capture mode, object module 20 may receive an indication of a natural language user input associated with an image capture command. For instance, in the example of FIG. 1, a user of computing device 10 may speak a natural language user input into microphone 32, where the natural language user input includes the image capture command. Microphone 32 may convert the natural language user input into some form of output, such as an electrical signal, a message, or a sequence of bits. Object module 20 may receive the output as the indication of the natural language user input. Object module 20 may analyze the output to determine the image capture command. In the example of FIG. 1, the image capture command may be an instruction to take a picture of the leftmost subject of subjects 18 (i.e., subject 18A). Object module 20 may determine, based on the image capture command, a visual token to be included in one or more images to be captured by camera 30 of computing device 10. For instance, object module 20 may parse the natural language user input into two or more distinct portions: a specific image capture command, as well as a particular visual token or multiple visual tokens that will be the subject of one or more images captured by camera 30 using the specific image capture command. In accordance with the techniques of this disclosure, a visual token may be any object, person, action, location, or concept (e.g., “wildlife,” “wedding,” “kiss,” “military,” or “love”). In the example of FIG.
1, the visual token included in the natural language user input is subject 18A. As such, object module 20 may determine that the visual token that will be the subject of one or more images captured by camera 30 using the specific image capture command is the leftmost subject of subjects 18 (i.e., subject 18A). Object module 20 may locate the visual token within image preview 16 output by UID 12 of computing device 10 while operating in the image capture mode. As stated above, object module 20 may determine that subject 18A is the visual token to be captured in one or more images by camera 30. Object module 20 may scan image preview 16 to locate subjects 18 and determine the leftmost subject of subjects 18 (i.e., subject 18A). More detailed examples of various ways object module 20 may locate the visual token within image preview 16 are described below with respect to FIG. 2. Using the visual token location and the image capture command determined by object module 20, image module 22 may capture one or more images of the visual token. For instance, image module 22 may receive, from object module 20, the image capture command and the location of subject 18A. Image module 22 may utilize camera 30 to execute the image capture command and capture one or more images of subject 18A. In some examples, image module 22 may use camera 30 to capture one or more images of subject 18A once subject 18A is fully located within image preview 16. In some other examples, image module 22 may focus camera 30 on subject 18A and keep the zoom level of camera 30 consistent. In some instances of such examples, image module 22 may crop the captured image to provide an illusion of zooming camera 30 in on subject 18A. In still other examples, image module 22 may zoom camera 30 onto subject 18A such that subject 18A is the center of the captured image. Rather than requiring a user to input multiple touch indications on computing device 10 in order to take a picture, by performing an image capture command based on a natural language user input, computing device 10 may execute complex operations in capturing one or more images of a visual token without requiring the user to touch UID 12 or a button of computing device 10. Computing device 10 may receive the natural language user input orally, such as via microphone 32, allowing the user to devote their full attention to stabilizing camera 30 while computing device 10 processes the image capture command and performs the functions associated with the image capture command. Further, by requiring fewer indications of touch inputs (e.g., multiple touches to adjust focus, zoom, flash settings, and to take the picture), computing device 10 may perform fewer operations in response thereto, thereby consuming less electrical power. The techniques described herein may further have benefits for people who are physically impaired. For example, if a user has a physical impairment that limits the use of their arms or hands, a computing device that receives indications of natural language user inputs to capture images and perform complex image capture commands may allow such a user to still take pictures without the use of their hands. Users with various physical impairments may find it difficult to operate computing devices that require touch inputs or other manual inputs while also holding the computing device. As such, computing device 10 may provide valuable assistance to such users with various physical impairments. FIG. 
2 is a block diagram illustrating an example computing device 10 configured to receive an indication of a natural language user input associated with an image capture command and execute the image capture command, in accordance with one or more aspects of the present disclosure. Computing device 10 of FIG. 2 is described below within the context of system 1 of FIG. 1. FIG. 2 illustrates only one particular example of computing device 10 and many other examples of computing device 10 may be used in other instances. In the example of FIG. 2, computing device 10 may be a wearable computing device, a mobile computing device, or a non-portable (e.g., desktop, etc.) computing device. Computing device 10 of FIG. 2 may include a subset of the components included in example computing device 10 or may include additional components not shown in FIG. 2. As shown in the example of FIG. 2, computing device 10 includes user interface device 12 (“UID 12”), one or more processors 40, one or more input devices 42, one or more communication units 44, one or more output devices 46, and one or more storage devices 48. Input devices 42 include camera 30, microphone 32, and one or more sensors 52. Storage devices 48 of computing device 10 also include object module 20, UI module 21, image module 22, visual tokens 24, future visual token model 26, image queue 28, and action model 29. Object module 20 may further include command module 54, visual token module 56, and action module 58. Object module 20, UI module 21, and image module 22 may rely on information stored as visual tokens 24, future visual token model 26, image queue 28, and action model 29 at storage device 48. In other words, as is described in more detail below, object module 20, UI module 21, and image module 22 may be operable by processors 40 to perform read/write operations on information, stored as visual tokens 24, future visual token model 26, image queue 28, and action model 29, at storage device 48. Object module 20, UI module 21, and image module 22 may access the information stored in visual tokens 24, future visual token model 26, image queue 28, and action model 29 to perform a function of computing device 10. Communication channels 50 may interconnect each of the components 12, 20, 21, 22, 24, 26, 28, 29, 30, 32, 40, 42, 44, 46, 48, 52, 54, 56, and 58 for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channels 50 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. One or more output devices 46 of computing device 10 may generate output. Examples of output are tactile, audio, and video output. Output devices 46 of computing device 10, in one example, include a presence-sensitive display, sound card, video graphics adapter card, speaker, cathode ray tube (CRT) monitor, liquid crystal display (LCD), or any other type of device for generating output to a human or machine. One or more input devices 42 of computing device 10 may receive input. Examples of input are tactile, audio, and video input. Input devices 42 of computing device 10, in some examples, include a presence-sensitive display, touch-sensitive screen, mouse, keyboard, voice responsive system, video camera, microphone, sensor, or any other type of device for detecting input from a human or machine. Camera 30 of input devices 42 may be similar to and include some or all of the same features as camera 30 of FIG. 1.
Camera 30 may be an optical instrument for recording or capturing images. Camera 30 may capture individual still photographs or sequences of images that make up videos or movies. Camera 30 may be a physical component of computing device 10. Camera 30 may include a camera application that acts as an interface between a user of computing device 10 and the functionality of camera 30. Camera 30 may perform various functions, such as capturing one or more images, focusing on one or more visual tokens, and utilizing various flash settings, among other things. In some examples, camera 30 may be a single camera. In other examples, camera 30 may include multiple cameras. Microphone 32 of input devices 42 may be similar and include some or all of the same features as microphone 32 of FIG. 1. Microphone 32 may be a transducer that converts sound into an electrical signal to be processed by one or more modules of computing device 10. Microphone 32 may use electromagnetic induction (dynamic microphones), capacitance change (condenser microphones) or piezoelectricity (piezoelectric microphones) to produce the electrical signal from air pressure variations. Microphone 32 may produce other output based on the received audio input, such as a message or a sequence of bits. Object module 20 may receive the output from microphone 32 and process the output to determine spoken input received by microphone 32. In some examples, microphone 32 may be a single microphone. In other examples, microphone 32 may include multiple microphones. Sensors 52 may include one or more other input devices of input devices 42 that record changes in the environment around computing device 10 and convert the changes to data. Examples of sensors 52 may include an accelerometer that generates accelerometer data. Accelerometer data may indicate an acceleration and/or a change in acceleration of computing device 10. Sensors 52 may include a gyrometer that generates gyrometer data. Gyrometer data may indicate a physical orientation and/or change in physical orientation of computing device 10. In some examples, the orientation may be relative to one or more reference points. Sensors 52 may include a magnetometer that generates magnetometer data. Magnetometer data may indicate the magnetization of an object that is touching or in proximity to computing device 10. Magnetometer data may indicate the Earth's magnetic field, and in some examples, provide directional functionality of a compass. Sensors 52 may include a barometer for sensing barometric pressure associated with computing device 10. Computing device 10 may infer a change in elevation or detect movement based on the barometric pressure data obtained by a barometer of sensors 52. Additional examples of sensors 52 may include an ambient light sensor that generates ambient light data. The ambient light data may indicate an intensity of light to which computing device 10 is exposed. Sensors 52 may include a proximity sensor that generates proximity data. Proximity data may indicate whether an object is within proximity to computing device 10. In some examples, proximity data may indicate how close an object is to computing device 10. In some examples, sensors 52 may include a clock that generates a date and time. The date and time may be a current date and time. Sensors 52 may include a pressure sensor that generates pressure data. Pressure data may indicate whether a force is applied to computing device 10 and/or a magnitude of a force applied to computing device 10. 
Pressure data may indicate whether a force is applied to UID 12 and/or a magnitude of a force applied to UID 12. Sensors 52 may include a global positioning system that generates location data. One or more communication units 44 of computing device 10 may communicate with external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals on the one or more networks. Examples of communication unit 44 include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of communication units 44 may include short wave radios, cellular data radios, wireless network radios, as well as universal serial bus (USB) controllers. UID 12 is similar to UID 12 of FIG. 1 and may include some or all of the same features as UID 12 of FIG. 1. In some examples, UID 12 of computing device 10 may include functionality of input devices 42 and/or output devices 46. In the example of FIG. 2, UID 12 may be or may include a presence-sensitive input device. In some examples, a presence-sensitive input device may detect an object at and/or near a screen. As one example range, a presence-sensitive input device may detect an object, such as a finger or stylus, that is within 2 inches or less of the screen. The presence-sensitive input device may determine a location (e.g., an (x,y) coordinate) of a screen at which the object was detected. In another example range, a presence-sensitive input device may detect an object six inches or less from the screen, and other ranges are also possible. The presence-sensitive input device may determine the location of the screen selected by a user's finger using capacitive, inductive, and/or optical recognition techniques. In some examples, a presence-sensitive input device also provides output to a user using tactile, audio, or video stimuli as described with respect to output device 46, e.g., at a display. In the example of FIG. 2, UID 12 presents a user interface (such as user interface 14 of FIG. 1). While illustrated as an internal component of computing device 10, UID 12 also represents an external component that shares a data path with computing device 10 for transmitting and/or receiving input and output. For instance, in one example, UID 12 represents a built-in component of computing device 10 located within and physically connected to the external packaging of computing device 10 (e.g., a screen on a mobile phone). In another example, UID 12 represents an external component of computing device 10 located outside and physically separated from the packaging of computing device 10 (e.g., a monitor, a projector, etc. that shares a wired and/or wireless data path with a tablet computer). One or more storage devices 48 within computing device 10 may store information for processing during operation of computing device 10 (e.g., computing device 10 may store data (e.g., visual tokens 24, future visual token model 26, image queue 28, and action model 29) that modules 20 (including modules 54, 56, and 58), 21, and 22 access during execution at computing device 10). In some examples, storage device 48 is a temporary memory, meaning that a primary purpose of storage device 48 is not long-term storage. Storage devices 48 on computing device 10 may be configured for short-term storage of information as volatile memory and therefore may not retain stored contents if powered off.
Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. Storage devices 48, in some examples, include one or more computer-readable storage media. Storage devices 48 may be configured to store larger amounts of information than volatile memory. Storage devices 48 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. Storage devices 48 may store program instructions and/or information (e.g., data) associated with modules 20 (including modules 54, 56, and 58), 21, and 22, visual tokens 24, future visual token model 26, image queue 28, and action model 29. One or more processors 40 may implement functionality and/or execute instructions within computing device 10. For example, processors 40 on computing device 10 may receive and execute instructions stored by storage devices 48 that execute the functionality of object module 20 (including the functionality of modules 54, 56, and 58), UI module 21, and image module 22. These instructions executed by processors 40 may cause computing device 10 to process and execute image commands for computing device 10 based on visual tokens 24, future visual token model 26, image queue 28, and action model 29, within storage devices 48 during program execution. Processors 40 may execute instructions of modules 20 (including modules 54, 56, and 58), 21, and 22 to cause computing device 10 to perform various actions or functions. Visual tokens 24 represent any suitable storage medium for storing different visual tokens discovered in an image preview displayed on UID 12. In accordance with the techniques of this disclosure, a visual token may be associated with any one of an object, a person, an action, a location, or a concept, as well as spatial relationships between objects, people, locations, or any combination thereof. In accordance with the techniques of this disclosure, visual token data may include any information usable by object module 20 to identify visual tokens within an image preview, such as visual token shape information, visual token color information, visual token size information, visual token orientation information, visual token environment information, visual token motion information, sample images of the visual token, sample images of exemplary portions of the visual token, or any other identifying information of an associated visual token that object module 20 may use to identify the associated visual token in an image preview. For instance, visual tokens 24 may be a short-term data structure for organizing visual token data as received by object module 20 based on the image preview captured by camera 30. Object module 20 may access visual tokens 24 to determine any current visual tokens representing visual tokens in the image preview on computing device 10. Object module 20 may perform read/write operations for adding identifying information to visual tokens 24 or editing identifying information in visual tokens 24 (e.g., when camera 30 shifts and/or when new visual tokens are being displayed in the image preview).
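A short sketch of the kind of short-term store that visual tokens 24 describes, with the read/write operations mentioned above, is given below; the class and method names are invented for illustration and are not part of the disclosure.

# Illustrative short-term store of visual tokens currently visible in the preview.
from typing import Dict, List

class CurrentVisualTokenStore:
    def __init__(self):
        self._tokens: Dict[str, dict] = {}   # label -> identifying information

    def add_or_update(self, label: str, info: dict) -> None:
        # Called when the preview shifts or a new visual token is detected.
        self._tokens[label] = {**self._tokens.get(label, {}), **info}

    def remove(self, label: str) -> None:
        # Called when a visual token leaves the preview.
        self._tokens.pop(label, None)

    def current_labels(self) -> List[str]:
        return list(self._tokens.keys())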
Future visual token model 26 represents any suitable storage medium for storing a model that may be utilized by computing device 10 to determine expected visual tokens in an image preview based on the current visual tokens determined in visual tokens 24. For instance, future visual token model 26 may be a long-term data structure for organizing visual token data as received by object module 20 based on the image preview captured by camera 30. Visual token model 26 may associate different visual tokens with one another and object module 20 may predict, based on the associations between visual tokens, the future presence of visual tokens based on current visual tokens 24. Object module 20 may access future visual token model 26 to determine expected visual tokens based on current visual tokens representing visual tokens in the image preview on computing device 10. Object module 20 may perform read/write operations for adding information to future visual token model 26 (e.g., when object module 20 determines new pairings of visual tokens in the image preview) or editing information from future visual token model 26 (e.g., when object module 20 changes associations between visual tokens within future visual token model 26). In some instances, future visual token model 26 may describe one or more relationships between one or more subsets of visual tokens 24, potentially based at least in part on a hidden Markov model. For instance, if object module 20 determines that two or more visual tokens are present in an image preview, object module 20 may access data within future visual token model 26 to determine a relationship between the two or more visual tokens. From this relationship, object module 20 may determine a scene or a location of the image preview. For instance, if object module 20 determines that both a scoreboard and a fence are present in an image preview, object module 20 may access future visual token model 26 to determine a relationship between the scoreboard and the fence. Visual token model 26 may indicate that both visual tokens may be present at a baseball field. Visual token model 26 may also indicate that baseball equipment and baseball players are also generally present at baseball fields. As such, object module 20 may determine that an expected future visual token is baseball equipment or a baseball player. Image queue 28 represents any suitable storage medium for storing one or more different images captured by camera 30. For instance, image queue 28 may be a short-term data structure for organizing one or more images as received by image module 22 based on images captured by camera 30. Image module 22 may access image queue 28 to store one or more images captured by camera 30. Object module 20, including action module 58, may further perform read/write operations for editing information from image queue 28 (e.g., when object module 20 analyzes the one or more images in image queue 28 to determine when a visual token in the one or more images is performing a specific action). Action model 29 represents any suitable storage medium for storing a model that may be utilized by computing device 10 to determine whether an object within one or more images is performing a particular action as defined by the visual token. For instance, action model 29 may be a long-term data structure for organizing action data as determined by object module 20 based on past images captured by camera 30 and associating different configurations of objects within the images to particular actions.
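To illustrate the prediction described above (e.g., a scoreboard and a fence suggesting a baseball field and, in turn, baseball players or equipment), the sketch below uses a simple co-occurrence table in place of whatever statistical model, such as a hidden Markov model, an implementation might actually use. All names and the scoring scheme are hypothetical.

# Hedged sketch: predicting expected future visual tokens from co-occurrence counts.
from collections import defaultdict
from typing import Iterable, List

class FutureVisualTokenModel:
    def __init__(self):
        # co_counts[current_token][future_token] -> observed count
        self.co_counts = defaultdict(lambda: defaultdict(int))

    def observe(self, current_tokens: Iterable[str], future_tokens: Iterable[str]) -> None:
        # Record which tokens later appeared alongside the current ones.
        for cur in current_tokens:
            for fut in future_tokens:
                self.co_counts[cur][fut] += 1

    def expected_tokens(self, current_tokens: Iterable[str], top_k: int = 3) -> List[str]:
        # Rank candidate future tokens by how often they co-occurred with current tokens.
        scores = defaultdict(int)
        for cur in current_tokens:
            for fut, count in self.co_counts[cur].items():
                scores[fut] += count
        return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

# Once trained on such scenes, model.expected_tokens(["scoreboard", "fence"]) might
# return ["baseball player", "baseball equipment", ...].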
Examples of action data include any information describing motions of various visual tokens, such as visual token shape configurations before, during, and after a motion, speed of motion information, sample images of visual tokens performing the specific action, visual token orientation shifts, or visual token environment changes, among other things. Object module 20 may access action model 29 to determine any actions being taken by particular objects within one or more visual tokens in the one or more images of image queue 28 in computing device 10. Object module 20 may perform read/write operations for adding information to action model 29 (e.g., when object module 20 determines new actions performed by various objects/visual tokens) or editing information from action model 29 (e.g., when object module 20 updates how various objects/visual tokens within images captured by camera 30 appear when performing various actions based on user feedback). Storage device 48 may include object module 20, UI module 21, and image module 22. Object module 20, UI module 21, and image module 22 may be additional examples of modules 20, 21, and 22 from FIG. 1, including similar and some or all of the same functionality of modules 20, 21, and 22 from FIG. 1. In accordance with the techniques of this disclosure, computing device 10 may perform various functions for controlling camera 30 while computing device 10 is operating in the image capture mode. While operating in the image capture mode, object module 20 may utilize command module 54 to receive and process natural language user inputs. That is, command module 54 may receive an indication of a natural language user input associated with an image capture command. For instance, a user of computing device 10 may speak a natural language user input that is detected by microphone 32, where the natural language user input includes the image capture command specifying a visual token including at least an object and an action. Microphone 32 may convert the natural language user input into an output, such as a message, a sequence of bits, or an electrical signal, and command module 54 may receive the output from microphone 32 over communication channels 50 as the indication of the natural language user input. Command module 54 may analyze the output from microphone 32 to determine the image capture command stated by the user when the user provides the natural language user input. In the example of FIG. 2, the image capture command may be an instruction to capture an image of a visual token including a particular object (e.g., a dog) while the particular object is performing a particular action (e.g., catching a ball). In other instances, the image capture command may be an instruction to crop one or more images to fit around the visual token of the particular object or to focus camera 30 on the particular object and capture one or more images focused on the particular object. Command module 54 may determine, based on the image capture command, a visual token to be included in one or more images to be captured by camera 30 of computing device 10.
For instance, command module 54 may parse the natural language user input into two or more distinct portions: one or more portions of the natural language input that include a specific image capture command, as well as one or more portions of the natural language input that include a particular visual token or multiple visual tokens that will be the subject of one or more images captured by camera 30 using the specific image capture command. In the example of FIG. 2, the visual token included in the natural language user input is the dog object. As such, command module 54 may determine that the object that will be the subject of one or more images captured by camera 30 using the specific image capture command is a dog located within an image preview. For instance, in parsing the natural language user input, command module 54 may determine if the received input includes portions of audio indicative of human speech. Using speech recognition techniques, command module 54 may transcribe received natural language user input into one or more words of spoken language. Command module 54 may utilize data containing various speech characteristics during the transcribing process to compensate for variances in the speech of different users. These characteristics may include tone, accent, rhythm, flow, articulation, pitch, resonance, or other characteristics of speech that the device has learned about the user from previous natural language inputs from the user. Taking into consideration known characteristics of the user's speech, command module 54 may improve results in transcribing the natural language user input for that user. Visual token module 56 may locate the visual token determined from a natural language input within an image preview output by UI module 21 via UID 12 of computing device 10 while operating in the image capture mode. As stated above, command module 54 may determine that a dog is the visual token to be captured in one or more images by camera 30. Visual token module 56 may scan an image preview to locate and determine a dog within the image preview. In some instances, in locating the visual token, visual token module 56 may determine one or more referential visual tokens associated with a respective visual token of one or more visual tokens within the image preview. In accordance with the techniques described herein, a referential visual token may be data stored in computing device 10 that describes one or more characteristics of visual tokens that computing device 10 may detect within the image preview. Visual token module 56 may store such referential visual tokens in visual tokens 24. Visual token module 56 may then match the natural language user input with a first referential visual token of the one or more referential visual tokens 24 and determine that the visual token associated with the first referential visual token is the visual token to be included in the one or more images to be captured. For instance, in the image preview, visual token module 56 may recognize the dog, grass, a bush, and a tree. Visual token module 56 may determine respective referential visual tokens identifying each of the four recognized visual tokens. Visual token module 56 may match the determined visual tokens with the visual token identified from the image capture command (i.e., the dog) and determine that the visual token that matches the dog is the visual token to be captured in the one or more images.
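A minimal sketch of the matching step just described, assuming the preview has already been reduced to a list of recognized labels, might look like the following; the function name, inputs, and word-overlap heuristic are illustrative assumptions.

# Illustrative match of the requested token against labels recognized in the preview.
from typing import List, Optional

def select_requested_token(recognized_labels: List[str], requested_description: str) -> Optional[str]:
    """Return the recognized label that the spoken description refers to, if any."""
    request_words = set(requested_description.lower().split())
    for label in recognized_labels:          # e.g. ["dog", "grass", "bush", "tree"]
        if label.lower() in request_words:
            return label
    return None

# select_requested_token(["dog", "grass", "bush", "tree"], "the dog catching the ball") -> "dog"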
In some instances, the one or more visual tokens determined by visual token module 56 may be current visual tokens. In such instances, using future visual token model 26, visual token module 56 may determine one or more expected future visual tokens. As described above, future visual token model 26 may describe relationships between one or more subsets of visual tokens. Based on the current visual tokens 24 and the relationships within future visual token model 26, visual token module 56 may determine one or more expected future visual tokens. For instance, based on the current visual tokens of the dog and the grass, visual token module 56 may determine that the scene may be a park, and that a visual token of a ball is expected in a future image preview. In some examples, visual token module 56 may update future visual token model 26 based on various information. For instance, visual token module 56 may determine one or more actual future visual tokens associated with a respective visual token of the one or more visual tokens in a second image preview generated after the original image preview. Visual token module 56 may compare the one or more actual future visual tokens with the one or more expected future visual tokens previously determined. Visual token module 56 may then update future visual token model 26 based on this comparison. For instance, in the example of FIG. 2, visual token module 56 determined a ball to be an expected future visual token. If visual token module 56 analyzes a second image preview and determines that a ball is now present within the second image preview, visual token module 56 may update future visual token model 26 by increasing the future likelihood of determining a ball to be present when a dog and grass are present, confirming the previous prediction. If, however, visual token module 56 analyzes the second image preview and determines that a rope toy is now present within the second image preview, visual token module 56 may update future visual token model 26 by decreasing the future likelihood of determining a ball to be present when a dog and grass are present and increasing the future likelihood of determining a rope toy to be present when a dog and grass are present. In other instances, visual token module 56 may update future visual token model 26 based on crowdsourced visual token data. For instance, visual token module 56 may receive crowdsourced visual token data that includes a set of one or more expected future visual tokens associated with the one or more current visual tokens for one or more crowdsourced computing devices different than computing device 10. The crowdsourced data may be based on users with similar interests as a user of computing device 10. For instance, the user of computing device 10 may belong to a social media group for dog lovers. Given visual token module 56's attempts to determine expected future visual tokens based on the current visual token of a dog, visual token module 56 may receive crowdsourced visual token data from computing devices associated with users of the same social media group for expected visual tokens when such users are taking pictures of dogs and grass together. Visual token module 56 may update future visual token model 26 based on this crowdsourced visual token data from users with similar interests as the user, as it is expected that users with similar interests may encounter similar visual tokens in their captured images.
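Reusing the co-occurrence sketch above, the comparison-driven update described here might look like the following; the weight values are arbitrary illustrations rather than values from the disclosure.

# Hedged sketch: reinforce confirmed predictions, add newly observed associations,
# and penalize expectations that did not materialize.
def update_model(model, current_tokens, expected_tokens, actual_tokens):
    confirmed = set(expected_tokens) & set(actual_tokens)
    missed = set(actual_tokens) - set(expected_tokens)     # e.g. the rope toy
    wrong = set(expected_tokens) - set(actual_tokens)      # e.g. the ball, if absent
    for cur in current_tokens:
        for tok in confirmed:
            model.co_counts[cur][tok] += 2                 # prediction confirmed
        for tok in missed:
            model.co_counts[cur][tok] += 1                 # new association learned
        for tok in wrong:
            model.co_counts[cur][tok] = max(0, model.co_counts[cur][tok] - 1)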
Visual token module 56 may utilize future visual token model 26 to analyze future images for current visual tokens. By consistently updating future visual token model 26, visual token module 56 may more efficiently analyze images and actions within the images during the execution of the techniques described herein. Future visual token model 26 may provide a basis on top of which computing device 10 may categorize or “build a narrative of” captured images or videos for an event when receiving future image capture commands based on categories. Rather than manually placing object labels across video frames, computing device 10 may analyze certain tokens common throughout multiple images to contextualize and successfully predict the occurrence of the various tokens in future image previews. In contextualizing and predicting the various tokens, computing device 10 may improve recognition in a more precise way than tracking-based temporal smearing. As such, computing device 10 may identify a small set of contextual categories in future visual token model 26 that cover a large fraction of potential images, as well as a vocabulary of visual tokens associated with objects within individual images. In some examples, computing device 10 may make these identifications personal to a user based on common user contexts. As such, computing device 10 may find clusters of images and determine the vocabulary of visual tokens in the clusters of images. For instance, future visual token model 26 may include categories for a wedding, a grill party, a graduation, a baptism, camping, sport games, a festival, an air show, a concert, and a cruise trip. For some of these categories, future visual token model 26 may include typical predicted visual tokens, e.g. in a wedding, visual tokens may include a formal ceremony followed by a party, where the formal ceremony consists of the main actors walking in, then a mix of songs and/or speeches, then wedding rings being brought in and placed on a bride and a groom, a kiss, and finally the main actors leaving. However, other categories in future visual token model 26 may be more loosely structured, and certain visual tokens within such categories may provide more insight than others as to what is likely to come. For instance, if the category in future visual token model 26 is a camping trip and there is an image with a sunset, future visual token model 26 may indicate that a visual token of fire or a grill may be present in a future image. With context-specific token prediction in future visual token model 26, computing device 10 may be configured to utilize dynamic programming, where each new captured image seen in a user stream may be labeled as a continuation of an instance of an event belonging to a particular category in future visual token model 26, a distractor from future visual token model 26 (e.g., an image that does not fit in the current category), the start of a new event in the same or a different category, or the start of a new episode of an event in the current category that had been previously interrupted. Computing device 10 may assign each one of these label assignments a cost that depends on the topical specificity of the item (e.g., how common the item is within images for the particular category) and spatio-temporal gaps to neighbor images (e.g., an amount of time that passes between images captured). 
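The per-image labeling cost described above could be sketched as follows; the label set mirrors the four options named in the text, while the specific cost formulas and weights are invented purely for illustration.

# Illustrative labeling cost combining topical specificity and temporal gap.
import math

LABELS = ("continuation", "distractor", "new_event", "resumed_episode")

def label_cost(label: str, topical_specificity: float, gap_seconds: float) -> float:
    """Lower cost = better fit; topical_specificity in [0, 1], gap in seconds."""
    gap_penalty = math.log1p(gap_seconds / 60.0)        # grows with time between shots
    if label == "continuation":
        return (1.0 - topical_specificity) + gap_penalty
    if label == "distractor":
        return topical_specificity                       # cheap only for off-topic images
    if label == "new_event":
        return 1.0 + max(0.0, 1.0 - gap_penalty)         # cheaper after long gaps
    return 1.5 - 0.5 * topical_specificity               # resumed_episode

def best_label(topical_specificity: float, gap_seconds: float) -> str:
    return min(LABELS, key=lambda l: label_cost(l, topical_specificity, gap_seconds))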
Alternatively, computing device 10 may train a distance metric that would measure how likely any two images are to belong to a single event in the same category in future visual token model 26 (e.g., based on factors like temporal, geographical and semantic distance). Computing device 10 may train future visual token model 26 using a clustering algorithm to grow clusters by combining such distance with the narrative fitness to measure the cost of adding each candidate item to an existing cluster. The techniques described herein may enable both the specialization of these existing constraints for each relevant contextual category, as well as the addition of a measure of narrative completeness for the selected subset. Computing device 10 may not exclude content from an event for a particular category just because the content does not fit a typical instance of the particular category, as surprising elements may be the motivation for capturing the image. However, computing device 10 may train future visual token model 26 such that certain key narrative elements that are normally present to tell a certain kind of story are retained. For instance, computing device 10 may compare two options for generating future visual token model 26: one that includes only visual tokens A and C and another that includes A, B and C. If computing device 10 trains future visual token model 26 to predict that the likelihood of A and C is smaller than that of A, B, and C, then computing device 10 may institute a penalty for leaving B out. To handle this properly, computing device 10 may separate the tokens that are central for the entire contextual category in future visual token model 26 from those that are central for a given user story relative to its contextual category in future visual token model 26. Using the techniques described herein, computing device 10 may further improve capture time. Future visual token model 26 may model what elements are central narrative elements in a given scene, so that computing device 10 may focus on the location of important visual tokens. Such selections may be biased toward image previews where the key narrative elements are well represented. Using the visual token location and the image capture command determined by command module 54, image module 22 may capture one or more images of the visual token. For instance, image module 22 may receive, from object module 20, the image capture command and the location of the dog within the image preview. Image module 22 may utilize camera 30 to execute the image capture command and capture one or more images of the dog. In some examples, image module 22 may use camera 30 to capture one or more images of the dog once the dog is fully located within the image preview. In some other examples, image module 22 may focus camera 30 on the dog and keep the zoom level of camera 30 consistent. In some instances of such examples, image module 22 may crop the captured image to provide an illusion of zooming camera 30 in on the dog. In still other examples, image module 22 may zoom camera 30 onto the dog such that the dog is the center of the captured image. In the example of FIG. 2, the image capture command includes capturing one or more images of a visual token that includes a particular object (i.e., the dog) performing a particular action (i.e., catching the ball). In such instances, to execute the image capture command, image module 22 may monitor the particular object within the image preview.
Once action module 58 determines that the particular object in the image preview is beginning to perform the particular action, image module 22 may use camera 30 to capture the one or more images of the object as shown in the image preview and store the one or more images of the object in image queue 28. Image module 22 may continue to capture the one or more images until action module 58 determines that the particular object in the image preview has completed performing the particular action. Action module 58 may then analyze each of the one or more images in image queue 28 to determine a status of the object within each of the one or more images, that is, an indication of the progress of the particular object in performing the action. Action module 58 may select a particular image of the one or more images in response to determining, based on action model 29, that a status of the object in the particular image more closely matches the particular object being in the middle of performing the particular action. For instance, in the example of FIG. 2, where the image capture command includes instructions for capturing an image of the dog catching the ball, action module 58 may analyze each image of the series of images to determine a status of the dog. For instance, action module 58 may determine the dog is sitting, the dog is jumping, the dog has its mouth open or closed, or some other status of the dog. Action module 58 may determine if a visual token associated with the ball is present in any of the one or more images and how close the ball is to the dog's mouth in each of the one or more images in which the ball is present. Action model 29 may include data indicating requirements for a portion of an image to indicate the action of catching the ball, such as requiring that the ball and the dog both be present in the image, that the ball be in the dog's mouth, or any other information that could indicate the dog catching the ball. Image module 22 may then capture a series of images of the dog once the dog begins to jump in the air or once the ball is present in the image preview and stop capturing images when the dog lands back on the ground with the ball in its mouth. Based on the data associated with the action of catching the ball included in action model 29, action module 58 may select the particular image of the one or more images where the status of the dog more closely matches the requirements of action model 29. For instance, action module 58 may select the image where the status of the dog indicates the dog is jumping in the air and the status of the ball indicates the ball is located in the dog's mouth. Action module 58 may update action model 29 based on user feedback. For instance, UI module 21 may cause UID 12 to present the particular image selected by action module 58 and also output a prompt for obtaining an indication of user input to either confirm the particular image or decline the particular image. If action module 58 receives an indication of user input confirming the particular image, action module 58 may store the particular image to memory and update action model 29 to reinforce the analysis and determinations of the dog performing the specific act of catching the ball. If, however, action module 58 receives an indication of user input declining the particular image, action module 58 may update action model 29 to decrease the associations between the dog and the ball as currently defined.
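The frame-selection step just described, in which the image whose detected status best satisfies the action model's requirements is chosen from the image queue, might be sketched as follows; the status keys and the requirement set are assumptions, and the status extraction itself is assumed to exist elsewhere.

# Illustrative selection of the best frame for the "dog catching a ball" action.
from typing import Dict, List

ACTION_REQUIREMENTS = {"dog_present": True, "ball_present": True,
                       "ball_in_mouth": True, "dog_airborne": True}

def frame_score(status: Dict[str, bool]) -> int:
    """Count how many action-model requirements the frame satisfies."""
    return sum(1 for key, wanted in ACTION_REQUIREMENTS.items() if status.get(key) == wanted)

def select_best_frame(frame_statuses: List[Dict[str, bool]]) -> int:
    """Return the index of the frame that best matches the action (assumes a non-empty queue)."""
    return max(range(len(frame_statuses)), key=lambda i: frame_score(frame_statuses[i]))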
Action module 58 may select one or more additional images of the dog in the process of catching the ball and utilize UI module 21 to cause UID 12 to present the one or more additional images. Action module 58 may receive an additional indication of user input selecting a second image of the one or more additional images and update action model 29 based on the updated selection. In some instances, the image capture command may include capturing a series of images for the visual token. In such instances, image module 22 may utilize camera 30 to capture a plurality of images for the visual token. UI module 21 may then cause UID 12 to display the images. Command module 54 may then receive an additional command to focus on the images of the plurality of images that show the visual token of the object performing a particular action. As such, techniques of this disclosure further enable computing device 10 to process the one or more images after the one or more images have been captured to select images of the visual token of an object performing a particular action in a manner similar to the techniques described above. After capturing the images, computing device 10 may utilize future visual token model 26 to organize previously captured images by suggesting possibly-discontinuous subsets of the images that belong to the same category within future visual token model 26 as albums, possibly by segmenting the captured images into pages that correspond to smaller narrative units using future visual token model 26. Computing device 10 may also build an ontology of scenes, objects, and actions that users capture with camera 30 using future visual token model 26, such that computing device 10 may compute probabilities of the occurrence of each visual token, action, or N-gram in each of certain contexts in future visual token model 26. Similarly, computing device 10 may enable deeper personalization. If the user wants to focus on a particular subject, it may be likely that the particular subject is important and may appear in other images captured in the past. Computing device 10 may analyze the space of possible appearances of the particular subject with respect to future visual token model 26 to determine which parts of such space are preferred by the user. That may be used, for instance, to make the final saved image less blurry and of higher quality. Throughout the disclosure, examples are described where a computing device and/or a computing system may analyze information (e.g., voice inputs from a user) associated with a computing device only if the computing device receives permission from the user to analyze the information. For example, in situations discussed above in which the computing device may collect or may make use of information associated with the user, including voice inputs or location information indicated by image data, the user may be provided with an opportunity to provide input to control whether programs or features of the computing device can collect and make use of user information or to dictate whether and/or how the computing device may receive content that may be relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used by the computing device and/or computing system, so that personally-identifiable information is removed. For example, stored image data may be treated so that no personally identifiable information can be determined about the user.
Thus, the user may have control over how information is collected about the user and used by the computing device. FIG. 3 is a conceptual diagram illustrating an example image capture command executable by a computing device, in accordance with one or more aspects of the present disclosure. The operations of computing device 10 are described within the context of system 1 of FIG. 1 and computing device 10 of FIG. 2. The conceptual diagram of FIG. 3 includes example image previews 60A-60E, which are meant to show a sequence of images previewed by camera 30 of computing device 10 in response to an image capture command received by computing device 10. For instance, in the example of FIG. 3, the image capture command may include capturing an image of a visual token including a human jumping. In such an example, computing device 10 may preview images 60A-60E in temporal order, with image 60A being previewed first and image 60E being previewed last. Computing device 10 may analyze each image preview of image previews 60A-60E to determine a status of the human within the image. For instance, computing device 10 may determine that the human in image preview 60A is standing in a stationary position. Computing device 10 may determine that the human in image preview 60B is crouching in preparation for a jump. At this point, once computing device 10 determines that the visual token in image preview 60B is beginning to perform the action specified in the image capture command, computing device 10 may begin capturing one or more images of the visual token. Computing device 10 may determine that the human in image preview 60C is midair in the process of a jump. Computing device 10 may determine that the human in image preview 60D is making an initial touch with the ground after a jump. Finally, computing device 10 may determine that the human in image preview 60E is crouching as a result of the force of landing after a jump. At this point, once computing device 10 determines that the visual token in image preview 60E is completing the action, computing device 10 may cease capturing the images. Computing device 10 may then select a captured image based on image preview 60C, where the status of the human in image preview 60C matches the definitions of jumping in an action model. Computing device 10 may then output image 62 as the selected image of the human jumping. FIG. 4 is another conceptual diagram illustrating a second example image capture command executable by a computing device. The operations of computing device 10 are described within the context of system 1 of FIG. 1 and computing device 10 of FIG. 2. The conceptual diagram of FIG. 4 includes example image preview 70A, which is meant to show an example image previewed by camera 30 of computing device 10 when computing device 10 receives an image capture command. In the example of FIG. 4, the image capture command may include capturing an image of the leftmost subject 18A of the plurality of subjects 18A-18F in image preview 70A. In such an example, computing device 10 may capture a portion of image preview 70A that includes only leftmost subject 18A. Computing device 10 may analyze image preview 70A to determine a location of each of subjects 18 relative to one another. Based on this analysis, computing device 10 may determine that subject 18A is the leftmost of subjects 18 within image preview 70A.
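The leftmost-subject selection and the crop that follows could be sketched roughly as below; the bounding-box representation, the margin, and the preview dimensions are illustrative assumptions rather than details from the disclosure.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in preview coordinates

def leftmost_subject(boxes: List[Box]) -> Box:
    """Return the detected subject whose bounding box has the smallest left edge."""
    return min(boxes, key=lambda box: box[0])

def crop_around(box: Box, preview_w: int, preview_h: int, margin: float = 0.25) -> Box:
    """Expand the subject box by a margin and clamp it to the preview bounds."""
    x, y, w, h = box
    dx, dy = int(w * margin), int(h * margin)
    left, top = max(0, x - dx), max(0, y - dy)
    right, bottom = min(preview_w, x + w + dx), min(preview_h, y + h + dy)
    return (left, top, right - left, bottom - top)

# Illustrative bounding boxes for a few of subjects 18 detected in image preview 70A.
subjects = [(40, 120, 80, 200), (300, 110, 80, 210), (560, 115, 80, 205)]
target = leftmost_subject(subjects)
print(crop_around(target, preview_w=1280, preview_h=720))  # crop centered on the leftmost subject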
As such, computing device 10 may crop image preview 70A such that subject 18A is in the center of the image preview and the main, or only, subject of the image preview. Computing device 10 may then capture image 70B based on the updated image preview. Computing device 10 may output image 70B, which includes a cropped version of image preview 70A that gives the illusion of zooming in on subject 18A. FIG. 5 is a flowchart illustrating example operations of an example computing device configured to receive an indication of a natural language user input associated with an image capture command and execute the image capture command, in accordance with one or more aspects of the present disclosure. The operations of computing device 10 are described within the context of system 1 of FIG. 1 and computing device 10 of FIG. 2. In accordance with the techniques of this disclosure, computing device 10 may perform various functions while operating in the image capture mode. While operating in the image capture mode, computing device 10 may receive an indication of a natural language user input associated with an image capture command (200). For instance, in the example of FIG. 5, a user of computing device 10 may speak a natural language user input into microphone 32, where the natural language user input includes the image capture command. Microphone 32 may convert the natural language user input into a computer-readable output, such as a message, a sequence of bits, or an electrical signal. Computing device 10 may analyze the output of microphone 32 to determine the image capture command. In the example of FIG. 5, the image capture command may be an instruction to capture an image of a visual token including a particular object (e.g., a human) while the particular object is performing a particular action (e.g., shooting a basketball). In other instances, the image capture command may be to crop one or more images to fit around the particular visual token or to focus camera 30 on the particular visual token and capture one or more images focused on the particular visual token. Computing device 10 may determine, based on the image capture command, a visual token to be included in one or more images to be captured by camera 30 of computing device 10 (210). For instance, computing device 10 may parse the natural language user input into two or more distinct portions: a specific image capture command, as well as a particular visual token or multiple visual tokens that will be the subject of one or more images captured by camera 30 using the specific image capture command. In the example of FIG. 5, the visual token included in the natural language user input is the human. As such, computing device 10 may determine that the visual token that will be the subject of one or more images captured by camera 30 using the specific image capture command is a human located within an image preview. Computing device 10 may locate the visual token within an image preview output by computing device 10 via UID 12 of computing device 10 while operating in the image capture mode (220). As stated above, computing device 10 may determine that a human shooting a basketball is the visual token to be captured in one or more images by camera 30. Computing device 10 may scan an image preview to locate and determine a human within the image preview. 
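One hedged way to picture the parsing step described above, which splits a transcribed utterance into an image capture command and a visual token phrase, is the pattern-based sketch below; the command patterns are invented for illustration and are not the grammar the device actually uses.

import re
from typing import Optional, Tuple

# Illustrative command patterns; the real grammar used by the device is not specified.
COMMAND_PATTERNS = [
    (r"^(take|capture) (a |an )?(photo|picture|image) of (?P<token>.+)$", "capture"),
    (r"^crop to (?P<token>.+)$", "crop"),
    (r"^focus on (?P<token>.+)$", "focus"),
]

def parse_capture_command(utterance: str) -> Optional[Tuple[str, str]]:
    """Return (command, visual_token_phrase) or None if no pattern matches."""
    text = utterance.strip().lower()
    for pattern, command in COMMAND_PATTERNS:
        match = re.match(pattern, text)
        if match:
            return command, match.group("token")
    return None

print(parse_capture_command("Take a picture of the human shooting the basketball"))
# prints ('capture', 'the human shooting the basketball')

The extracted phrase would then be matched against referential visual tokens, as described next.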
In some instances, in locating the visual token, computing device 10 may match the natural language user input with a first referential visual token of one or more referential visual tokens from referential visual tokens 24, a model of predetermined tokens. Computing device 10 may compare the first referential visual token with each of one or more visual tokens within the image preview and determine that the visual token that most closely matches the first referential visual token is the visual token to be included in the one or more images to be captured. For instance, in the image preview, computing device 10 may recognize the human, a wooden court, and a basketball. Computing device 10 may determine respective referential visual tokens identifying each of the three recognized objects. Computing device 10 may match the determined visual tokens with the visual token identified from the image capture command (i.e., the human) and determine that the visual token that matches the human is the visual token to be captured in the one or more images. In some instances, the one or more visual tokens determined by computing device 10 may be current visual tokens. In such instances, using future visual token model 26, computing device 10 may determine one or more expected future visual tokens. As described above, future visual token model 26 may describe relationships between one or more subsets of visual tokens. Based on the current visual tokens and the relationships within future visual token model 26, computing device 10 may determine one or more expected future visual tokens. For instance, based on the current visual tokens of the human, the wooden court, and the basketball, computing device 10 may determine that the scene may be a basketball court, and that a visual token of a defender human player or a basketball hoop is expected in a future image preview. In some examples, computing device 10 may update future visual token model 26 based on various information. For instance, computing device 10 may determine one or more actual future visual tokens associated with a respective visual token of the one or more visual tokens in a second image preview generated after the original image preview. Computing device 10 may compare the one or more actual future visual tokens with the one or more expected future visual tokens previously determined. Computing device 10 may then update future visual token model 26 based on this comparison. For instance, in the example of FIG. 5, computing device 10 determined a basketball hoop to be an expected future visual token. If computing device 10 analyzes a second image preview and determines that a basketball hoop is now present within the second image preview, computing device 10 may update future visual token model 26 by increasing the future likelihood of determining a basketball hoop to be present when a human, a wooden court, and a basketball are present, confirming the previous prediction. In other instances, computing device 10 may update future visual token model 26 based on crowdsourced visual token data. For instance, computing device 10 may receive crowdsourced visual token data that includes a set of one or more expected future visual tokens associated with the one or more current visual tokens for one or more crowdsourced computing devices different than computing device 10. The crowdsourced data may be based on users with similar interests as a user of computing device 10.
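A minimal sketch of the expectation-and-update loop described above, including the crowdsourced merge, might look like the following; the co-occurrence-count representation, the class name, and the weights are assumptions, not the actual structure of future visual token model 26.

from collections import defaultdict
from typing import Dict, FrozenSet, Iterable, List

class FutureTokenModel:
    def __init__(self) -> None:
        # counts[current_context][candidate_token] -> observed co-occurrence count
        self.counts: Dict[FrozenSet[str], Dict[str, float]] = defaultdict(lambda: defaultdict(float))

    def expected_tokens(self, current: Iterable[str], top_k: int = 3) -> List[str]:
        """Predict which tokens are expected to appear given the current context."""
        context = frozenset(current)
        candidates = self.counts[context]
        return sorted(candidates, key=candidates.get, reverse=True)[:top_k]

    def confirm(self, current: Iterable[str], actual_future: Iterable[str], weight: float = 1.0) -> None:
        """Reinforce tokens that actually appeared after the current context."""
        context = frozenset(current)
        for token in actual_future:
            self.counts[context][token] += weight

    def merge_crowdsourced(self, other_counts: Dict[FrozenSet[str], Dict[str, float]], weight: float = 0.5) -> None:
        """Fold in counts reported by devices of users with similar interests."""
        for context, candidates in other_counts.items():
            for token, count in candidates.items():
                self.counts[context][token] += weight * count

model = FutureTokenModel()
scene = ["human", "wooden court", "basketball"]
model.confirm(scene, ["basketball hoop"])
model.merge_crowdsourced({frozenset(scene): {"defender": 4.0, "basketball hoop": 6.0}})
print(model.expected_tokens(scene))  # prints ['basketball hoop', 'defender'] with these counts

The crowdsourced merge gives a larger effective sample for contexts the user has rarely photographed, which is the motivation of the passage that follows.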
For instance, the user of computing device 10 may frequently take pictures of basketball games. Given computing device 10's attempts to determine expected future visual tokens based on the current visual tokens of a human, a wooden court, and a basketball, computing device 10 may receive crowdsourced visual token data from computing devices associated with users who also frequently take pictures of basketball games. Computing device 10 may update future visual token model 26 based on this crowdsourced visual token data from users with similar interests as the user, as it is expected that users with similar interests may encounter similar visual tokens in their captured images. Using the visual token location and the image capture command determined by computing device 10, computing device 10 may capture one or more images of the visual token (230). For instance, computing device 10 may receive the image capture command and the location of the human within the image preview. Computing device 10 may utilize camera 30 to execute the image capture command and capture one or more images of the human. In some examples, computing device 10 may use camera 30 to capture one or more images of the human once the human is fully located within the image preview. In some other examples, computing device 10 may focus camera 30 on the human and keep the zoom level of camera 30 consistent. In some instances of such examples, computing device 10 may crop the captured image to provide an illusion of zooming camera 30 in on the human. In still other examples, computing device 10 may zoom camera 30 onto the human such that the human is the center of the captured image. In the example of FIG. 5, the image capture command includes capturing one or more images of the visual token including a particular object (i.e., the human) performing a particular action (i.e., shooting the basketball). In such instances, to execute the image capture command, computing device 10 may monitor the image preview to determine when the particular object is beginning to perform the particular action. Once computing device 10 determines that the particular object is beginning to perform the particular action, computing device 10 may capture the one or more images of the object and store the one or more images of the object in image queue 28. Computing device 10 may cease capturing images of the object once computing device 10 determines that the object in the image preview is no longer performing the particular action. Computing device 10 may then analyze each of the one or more images in image queue 28 to determine a status of the object within the respective image. Computing device 10 may select a first image of the one or more images when a status of the object in the first image most closely matches the particular object being in the middle of performing the particular action based on action model 29. For instance, in the example of FIG. 5, where the image capture command includes capturing an image of the human shooting the basketball, computing device 10 may determine if the human is standing, if the human is jumping, if the human is catching the basketball, if the human is letting go of the basketball, etc. Computing device 10 may determine a location of a visual token associated with the basketball in relation to the human's hand.
Action model 29 may include data associated with the action of shooting the basketball, such as requiring that the basketball and the human both be present in the image, that the ball be in the human's hand, that the human be mid-jump, and any other information that could depict the human shooting the basketball. Once computing device 10, using action model 29, determines that the human in the image preview is beginning to jump, computing device 10 may begin to capture a series of images of the human. Computing device 10 may cease capturing images of the human when the human in the image preview lands from jumping. Computing device 10 may analyze each image of the series of images to determine a status of the human. Computing device 10 may then select the first image of the one or more images where the status of the human matches the requirements of action model 29. For instance, computing device 10 may select the image where the human is mid-air and the basketball is located in the human's hands. Computing device 10 may update action model 29 based on user feedback. For instance, computing device 10 may present the selected first image and prompt for an indication of user input to either confirm the first image or decline the first image. If computing device 10 receives an indication of user input confirming the first image, computing device 10 may store the first image to memory and update action model 29 to reinforce the analysis and determinations of the human performing the specific act of shooting the basketball. If, however, computing device 10 receives an indication of user input declining the first image, computing device 10 may update action model 29 to decrease the associations between the human and the basketball as currently defined. Computing device 10 may select one or more additional images of the human in the process of shooting the basketball and present the one or more additional images. Computing device 10 may receive an additional indication of user input selecting a second image of the one or more additional images and update action model 29 based on the updated selection. Example 1. A method comprising: while a computing device is operating in an image capture mode: receiving, by the computing device, an indication of a natural language user input associated with an image capture command; determining, by the computing device, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device; locating, by the computing device, the visual token within an image preview output by the computing device while operating in the image capture mode; and capturing, by the computing device, one or more images of the visual token. Example 2. The method of example 1, wherein locating the visual token comprises: matching, by the computing device, the natural language user input with a first referential visual token of one or more referential visual tokens from a model of predetermined tokens; comparing, by the computing device, the first referential visual token with each of one or more visual tokens within the image preview; and determining, by the computing device, that the visual token that most closely matches the first referential visual token is the visual token to be included in the one or more images to be captured. Example 3.
The method of example 2, wherein the one or more visual tokens comprise one or more current visual tokens, wherein the method further comprises: determining, by the computing device and based at least on part on the one or more current visual tokens, a future visual token model, and one or more relationships between one or more subsets of the one or more current visual tokens, one or more expected future visual tokens. Example 4. The method of example 3, further comprising: determining, by the computing device, the one or more relationships between the one or more subsets of the one or more current visual tokens based at least in part on a hidden Markov model. Example 5. The method of any of examples 3-4, wherein the image preview comprises a first image preview, and wherein the method further comprises: determining, by the computing device, one or more actual future visual tokens associated with a respective visual token of one or more visual tokens within a second image preview, wherein the second image preview is generated after the first image preview; comparing, by the computing device, the one or more actual future visual tokens and the one or more expected future visual tokens; and updating, by the computing device, the future visual token model based on the comparison between the one or more actual future visual tokens and the one or more expected future visual tokens. Example 6. The method of example 5, further comprising: receiving, by the computing device, crowdsourced visual token data comprising a set of one or more expected future visual tokens associated with the one or more current visual tokens for one or more crowdsourced computing devices different than the computing device; and updating, by the computing device, the future visual token model based on the crowdsourced visual token data. Example 7. The method of any of examples 1-6, wherein the image capture command comprises capturing the one or more images of the visual token comprising an object performing a particular action. Example 8. The method of example 7, wherein executing the image capture command comprises: determining, by the computing device, a first time at which the object in the image preview begins performing the particular action; beginning to capture, by the computing device, the one or more images of the object at the first time; determining, by the computing device, a second time at which the object in the image preview completes performing the particular action; ceasing to capture, by the computing device, the one or more images of the object at the second time; analyzing, by the computing device, each of the one or more images to determine a status of the object within the respective image; and selecting, by the computing device and based on an action model, a first image of the one or more images, wherein a status of the object of the first image most closely matches the particular action. Example 9. 
The method of example 8, further comprising: outputting, by the computing device and for display at a display device operatively connected to the computing device, the first image; prompting, by the computing device, for an indication of user input to either confirm the first image or decline the first image; responsive to receiving an indication of user input confirming the first image, storing, by the computing device, the first image to a memory of the computing device; and responsive to receiving an indication of user input declining the first image: updating, by the computing device, the action model based on the indication of user input declining the first image; outputting, by the computing device and for display at the display device, one or more additional images of the one or more images of the visual token; receiving, by the computing device, an additional indication of user input selecting a second image, wherein the second image is included in the one or more additional images; and updating, by the computing device, the action model based on the selection of the second image. Example 10. The method of any of examples 1-9, wherein the image capture command comprises one of cropping, by the computing device, the one or more images to fit around the visual token or focusing, by the computing device, the one or more images on the visual token. Example 11. The method of any of examples 1-10, wherein the visual token comprises at least one of an object, a person, an action, a location, or a concept. Example 12. The method of any of examples 1-11, wherein the natural language user input comprises a spoken user input. Example 13. A computing device comprising: a camera; at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that are executable by the at least one processor to: while the computing device is operating in an image capture mode: receive an indication of a natural language user input associated with an image capture command; determine based on the image capture command, a visual token to be included in one or more images to be captured by the camera; locate the visual token within an image preview output by the computing device while operating in the image capture mode; and capture one or more images of the visual token. Example 14. The computing device of example 13, wherein the instructions executable by the at least one processor to locate the visual token comprise instructions executable by the at least one processor to: match the natural language user input with a first referential visual token of one or more referential visual tokens from a model of predetermined tokens; compare the first referential visual token with each of one or more visual tokens within the image preview; and determine that the visual token that most closely matches the first referential visual token is the visual token to be included in the one or more images to be captured. Example 15. The computing device of example 14, wherein the one or more visual tokens comprise one or more current visual tokens, wherein the instructions are further executable by the at least one processor to: determine, based at least on part on the one or more current visual tokens, a future visual token model, and one or more relationships between one or more subsets of the one or more current visual tokens, one or more expected future visual tokens. Example 16. 
The computing device of example 15, wherein the instructions are further executable by the at least one processor to: determine the one or more relationships between the one or more subsets of the one or more current visual tokens based at least in part on a hidden Markov model. Example 17. The computing device of any of examples 14-15, wherein the image preview comprises a first image preview, and wherein the instructions are further executable by the at least one processor to: determine one or more actual future visual tokens associated with a respective visual token of one or more visual tokens within a second image preview, wherein the second image preview is generated after the first image preview; compare the one or more actual future visual tokens and the one or more expected future visual tokens; and update the future visual token model based on the comparison between the one or more actual future visual tokens and the one or more expected future visual tokens. Example 18. The computing device of example 17, wherein the instructions are further executable by the at least one processor to: receive crowdsourced visual token data comprising a set of one or more expected future visual tokens associated with the one or more current visual tokens for one or more crowdsourced computing devices different than the computing device; and update the future visual token model based on the crowdsourced visual token data. Example 19. The computing device of any of examples 13-18, wherein the image capture command comprises capturing the one or more images of the visual token comprising an object performing a particular action. Example 20. The computing device of example 19, wherein the instructions executable by the at least one processor to execute the image capture command comprise instructions executable by the at least one processor to: determine a first time at which the object in the image preview begins performing the particular action; begin to capture the one or more images of the object at the first time; determine a second time at which the object in the image preview completes performing the particular action; cease to capture the one or more images of the object at the second time; analyze each of the one or more images to determine a status of the object within the respective image; and select, based on an action model, a first image of the one or more images, wherein a status of the object of the first image most closely matches the particular action. Example 21. The computing device of example 20, wherein the instructions are further executable by the at least one processor to: output, for display at a display device operatively connected to the computing device, the first image; prompt for an indication of user input to either confirm the first image or decline the first image; responsive to receiving an indication of user input confirming the first image, store the first image to a memory of the computing device; and responsive to receiving an indication of user input declining the first image: update the action model based on the indication of user input declining the first image; output, for display at the display device, one or more additional images of the one or more images of the visual token; receive an additional indication of user input selecting a second image, wherein the second image is included in the one or more additional images; and update the action model based on the selection of the second image. Example 22. 
The computing device of any of examples 13-21, wherein the image capture command comprises one of cropping, by the computing device, the one or more images to fit around the visual token or focusing, by the computing device, the one or more images on the visual token. Example 23. The computing device of any of examples 13-22, wherein the visual token comprises at least one of an object, a person, an action, a location, or a concept. Example 24. The computing device of any of examples 13-23, wherein the natural language user input comprises a spoken user input. Example 25. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor of a computing device to: while the computing device is operating in an image capture mode: receive an indication of a natural language user input associated with an image capture command; determine, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device; locate the visual token within an image preview output by the computing device while operating in the image capture mode; and capture one or more images of the visual token. Example 26. The non-transitory computer-readable storage medium of example 25, wherein the instructions that cause the at least one processor to locate the visual token comprise instructions that cause the at least one processor to: match the natural language user input with a first referential visual token of one or more referential visual tokens from a model of predetermined tokens; compare the first referential visual token with each of one or more visual tokens within the image preview; and determine that the visual token that most closely matches the first referential visual token is the visual token to be included in the one or more images to be captured. Example 27. The non-transitory computer-readable storage medium of example 26, wherein the one or more visual tokens comprise one or more current visual tokens, wherein the image preview comprises a first image preview, and wherein the instructions further cause the at least one processor to: determine, based at least in part on the one or more current visual tokens, a future visual token model, and one or more relationships between one or more subsets of the one or more current visual tokens, one or more expected future visual tokens; determine one or more actual future visual tokens associated with a respective object of one or more objects within a second image preview, wherein the second image preview is generated after the first image preview; compare the one or more actual future visual tokens and the one or more expected future visual tokens; and update the future visual token model based on the comparison between the one or more actual future visual tokens and the one or more expected future visual tokens. Example 28.
The non-transitory computer-readable storage medium of any of examples 25-27, wherein the image capture command comprises capturing the one or more images of the visual token comprising an object performing a particular action, wherein the instructions that cause the at least one processor to execute the image capture command comprise instructions that cause the at least one processor to: determine a first time at which the object in the image preview begins performing the particular action; begin to capture the one or more images of the object at the first time; determine a second time at which the object in the image preview completes performing the particular action; cease to capture the one or more images of the object at the second time; analyze each of the one or more images to determine a status of the object within the respective image; and select, based on an action model, a first image of the one or more images, wherein a status of the object of the first image most closely matches the particular action. Example 29. The non-transitory computer-readable storage medium of example 28, wherein the instructions further cause the at least one processor to: present, for display on a display device operatively connected to the computing device, the first image; prompt for an indication of user input to either confirm the first image or decline the first image; responsive to receiving an indication of user input confirming the first image, store the first image to a memory of the computing device; and responsive to receiving an indication of user input declining the first image: update the action model based on the indication of user input declining the first image; present, for display on the display device, one or more additional images of the one or more images of the visual token; receive an additional indication of user input selecting a second image, wherein the second image is included in the one or more additional images; and update the action model based on the selection of the second image. Example 30. The non-transitory computer-readable storage medium of any of examples 25-29, wherein the image capture command comprises one of cropping, by the computing device, the one or more images to fit around the visual token or focusing, by the computing device, the one or more images on the visual token. Example 31. A computing device configured to perform any of the methods of examples 1-12. Example 32. A computing device comprising means for performing any of the methods of examples 1-12. Example 33. A computer-readable storage medium encoded with instructions for causing one or more programmable processors to perform any of the methods of examples 1-12. In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave.
Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements. The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. Various examples of the disclosure have been described. Any combination of the described systems, operations, or functions is contemplated. These and other examples are within the scope of the following claims. 16657468 google llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Dec 30th, 2019 12:00AM https://www.uspto.gov?id=US11314930-20220426 Generating and provisioning of additional content for source perspective(s) of a document Implementations described herein determine, for a given document generated by a given source, one or more portions of content (e.g., phrase(s), image(s), paragraph(s), etc.) of the given document that may be influenced by a source perspective of the given source. Further, implementations determine one or more additional resources that are related to the given source and that are related to the portion(s) of content of the given document. Yet further, implementations utilize the additional resource(s) to determine additional content that provides context for the portion(s) that may be influenced by a source perspective. A relationship, between the additional resource(s) and the portions of the given document, can be defined. Based on the relationship being defined, the additional content can be caused to be rendered at a client device in response to the client device accessing the given document. 11314930 1. A method implemented by one or more processors, the method comprising: identifying a target electronic document; processing the target electronic document to determine a source perspective portion of the target electronic document, wherein the source perspective portion of the target electronic document corresponds to a biased portion of the target electronic document; identifying at least one source of the target electronic document; searching, based on identifying the at least one source of the target electronic document, one or more corpuses to identify a plurality of additional resources that are explanatory of the source perspective portion of the target electronic document; for each of the identified additional resources that are explanatory of the source perspective portion of the target electronic document: processing corresponding additional resource features of a corresponding one of the additional resources and features of the source perspective portion to generate a corresponding relatedness score; selecting, based on the relatedness scores, and from the identified additional resources, at least a first additional resource and a second additional resource that are explanatory of the source perspective portion of the target electronic document; generating, based on at least first content of the first additional resource and second content of the second additional resource, a source perspective summary for the at least one source, wherein generating, based on at least the first content of the first additional resource and the second content of the second additional resource, the source perspective summary for the at least one source comprises: including, in the source perspective summary, first text from the first additional resource and second text from the second additional resource, wherein the first text from the first additional resource and the second text from the second additional resource included in the source perspective summary provide a natural language explanation of the biased portion of the target electronic document with respect to the at least one source; and subsequent to generating the source perspective summary for the at least one source: causing a computing device that is rendering the target electronic document to render the source perspective summary for the at least one source simultaneous with the rendering of the target electronic document at the 
computing device. 2. The method of claim 1, wherein the at least one source includes an author that penned the target electronic document, and wherein the source perspective summary includes an author perspective summary. 3. The method of claim 2, wherein generating the source perspective summary for the at least one source based on at least the first content of the first additional resource and the second content of the second additional resource further comprises: analyzing the first content of at least the first additional resource and the second content of the second additional resource to identify the first text from the first additional resource and the second text from the second additional resource, wherein the first text from the first additional resource and the second text from the second additional resource included in the source perspective summary provide the natural language explanation of the biased portion of the target electronic document with respect to the author. 4. The method of claim 3, wherein the author perspective summary further includes source perspective metrics for the source perspective portions of the target electronic document. 5. The method of claim 4, wherein generating the source perspective summary for the at least one source based on at least the first content of the first additional resource and the second content of the second additional resource further comprises: generating a first portion of the author perspective summary based on the natural language explanation of the source perspective portions of the target electronic document; generating a second portion of the author perspective summary based on the source perspective metrics for the source perspective portions of the target electronic document; and including, in the author perspective summary, both the first portion of the author perspective summary and the second portion of the author perspective summary. 6. The method of claim 3, wherein at least the first additional resource and the second additional resource that are explanatory of the source perspective portion of the target electronic document include other documents penned by the author, social media posts of a social media account associated with the author, or social media interactions of the social media account associated with the author. 7. The method of claim 6, wherein generating the source perspective summary for the at least one source based on at least the first content of the first additional resource and the second content of the second additional resource further comprises: generating a first portion of the author perspective summary based on additional content of the other documents penned by the author; and generating a second portion of the author perspective summary based on the social media posts and the social media interactions of the social media account associated with the author. 8. The method of claim 2, wherein the at least one source further includes a creator that collated the target electronic document, and wherein the source perspective summary further includes a separate creator perspective summary. 9. 
The method of claim 8, further comprising: searching, based on identifying the at least one source further includes the creator of the target electronic document, one or more of the corpuses to identify a plurality of further additional resources that are explanatory of the source perspective portion of the target electronic document; for each of the identified further additional resources that are explanatory of the source perspective portion of the target electronic document: processing corresponding further additional resource features of a corresponding one of the further additional resources and the features of the source perspective portion to generate a corresponding additional relatedness score; selecting, based on the additional relatedness scores, and from the identified further additional resources, at least a third additional resource and a fourth additional resource that are explanatory of the source perspective portion of the target electronic document; and generating, based on at least third content of the third additional resource and fourth content of the fourth additional resource, the separate creator perspective summary, wherein generating, based on at least the third content of the third additional resource and the fourth content of the fourth additional resource, the separate creator perspective summary comprises: including, in the separate creator source perspective summary, third text from the third additional resource and fourth text from the fourth additional resource, wherein the third text from the third additional resource and the fourth text from the fourth additional resource included in the source perspective summary provide a natural language explanation of the biased portion of the target electronic document with respect to creator. 10. The method of claim 9, wherein generating the separate creator perspective summary for the at least one source based on at least the third content of at least the third additional resource and the fourth content of the fourth additional resource comprises: analyzing at least the third content of the third additional resource and the fourth content of the fourth additional resource to identify the third text from the third additional resource and the fourth text from the fourth additional resource. 11. The method of claim 10, wherein the separate creator perspective summary further includes source perspective metrics for the source perspective portions of the target electronic document. 12. The method of claim 11, wherein generating the source perspective summary for the at least one source based on at least the third content of the third additional resource and the fourth content of the fourth additional resource further comprises: generating a first portion of the separate creator perspective summary based on the natural language explanation of the source perspective portions of the target electronic document; generating a second portion of the separate creator perspective summary based on the source perspective metrics for the source perspective portions of the target electronic document; and including, in the separate creator perspective summary, both the first portion of the separate creator perspective summary and the second portion of the separate creator perspective summary. 13. 
The method of claim 1, wherein searching one or more of the corpuses to identify the plurality of additional resources that are explanatory of the source perspective portion of the target electronic document comprises: applying one or more de-duping techniques to the identified plurality of additional resources to determine a subset of the identified plurality of additional resources; generating, based on features of the subset of the identified plurality of additional resources, source perspective metrics for the source perspective portions of the target electronic document; and including, in the source perspective summary, the source perspective metrics for the source perspective portions of the target electronic document. 14. The method of claim 1, wherein one or more of the corpuses include a knowledge graph having source nodes corresponding to the at least one source connected to at least resource nodes corresponding to the plurality of additional resources, and wherein processing the corresponding additional resource features of the corresponding one of the additional resources and the features of the source perspective portion to generate the corresponding relatedness score for each of the identified additional resources that are explanatory of the source perspective portion of the target electronic document comprises: applying the knowledge graph as input across a graph neural network to generate embedding nodes corresponding to the source nodes and resource nodes of the knowledge graph; and generating, based on information included in the embedding nodes, the relatedness scores. 15. A method implemented by one or more processors, the method comprising: identifying a target electronic document; processing the target electronic document to determine a source perspective portion of the target electronic document, wherein the source perspective portion of the target electronic document corresponds to a biased portion of the target electronic document; identifying a publisher that published the target electronic document; identifying an author that penned the target electronic document; searching, based on identifying the publisher that published the target electronic document, one or more corpuses to identify a plurality of additional resources that are explanatory of the source perspective portion of the target electronic document and that are also published by the publisher; for each of the identified additional resources that are explanatory of the source perspective portion of the target electronic document: processing corresponding additional resource features of a corresponding one of the additional resources and features of the source perspective portion to generate a corresponding relatedness score; selecting, based on the relatedness scores, and from the identified additional resources, at least a first additional resource and a second additional resource that are explanatory of the source perspective portion of the target electronic document; generating, based on at least first content of the first additional resource and second content of the second additional resource, a publisher perspective summary, wherein the publisher perspective summary provides a natural language explanation of the biased portion of the target electronic document with respect to the publisher of the target electronic document; searching, based on identifying the author that penned the target electronic document, one or more of the corpuses to identify a plurality of further additional resources that 
are explanatory of the source perspective portion of the target electronic document and that are also penned by the author; for each of the identified further additional resources that are explanatory of the source perspective portion of the target electronic document: processing corresponding further additional resource features of a corresponding one of the further additional resources and the features of the source perspective portion to generate a corresponding additional relatedness score; selecting, based on the additional relatedness scores, and from the identified additional resources, at least a third additional resource and a fourth additional resource that are explanatory of the source perspective portion of the target electronic document; generating, based on at least third content of the third additional resource and fourth content of the fourth additional resource, an author perspective summary, wherein the author perspective summary provides a natural language explanation of the biased portion of the target electronic document with respect to the author of the target electronic document; and subsequent to generating the publisher perspective summary and subsequent to generating the author perspective summary: causing a computing device that is rendering the target electronic document to render both the publisher perspective summary and the author perspective summary simultaneous with the rendering of the target electronic document at the computing device. 16. The method of claim 15, wherein generating the publisher perspective summary based on at least the first content of the first additional resource and the second content of the second additional resource comprises: analyzing at least the first content of the first additional resource and the second content of the second additional resource to determine an explanation of the source perspective portion of the target electronic document for the publisher; and including, in the publisher perspective summary, the explanation of the source perspective portion of the target electronic document for the publisher. 17. The method of claim 16, wherein the explanation of the source perspective portion of the target electronic document for the publisher includes the natural language explanation of the source perspective portion of the target electronic document and source perspective metrics for the source perspective portion of the target electronic document. 18. 
A system, comprising: a database; memory storing instructions; and one or more processors executing the instructions, stored in the memory, to cause the one or more processors to: identify a target electronic document; process the target electronic document to determine one or more source perspective portions of the target electronic document, wherein the one or more source perspective portions of the target electronic document correspond to one or more corresponding biased portions of the target electronic document; identify at least one source of the target electronic document; search, based on identifying the at least one source of the target electronic document, one or more corpuses to identify a plurality of additional resources that are explanatory of one or more of the source perspective portions of the target electronic document; for each of the identified additional resources that are explanatory of one or more of the source perspective portions of the target electronic document: process corresponding additional resource features of a corresponding one of the additional resources and features of one or more of the source perspective portions to generate a corresponding relatedness score; select, based on the relatedness scores, and from the identified additional resources, at least a first additional resource and a second additional resource that are explanatory of one or more of the source perspective portion of the target electronic document; generate, based on at least first content of the first additional resource and second content of the second additional resource, a source perspective summary for the at least one source, wherein the instructions to generate, based on at least the first content of the first additional resource and the second content of the second additional resource, the source perspective summary for the at least one source comprise instructions to: include, in the source perspective summary, first text from the first additional resource and second text from the second additional resource, wherein the first text from the first additional resource and the second text from the second additional resource included in the source perspective summary provide a natural language explanation of the one or more corresponding biased portions of the target electronic document with respect to the at least one source; and subsequent to generating the source perspective summary for the at least one source, and responsive to receiving an indication to view the source perspective summary from a user consuming the target electronic document: cause a computing device that is rendering the target electronic document to render the source perspective summary for the at least one source along with the rendering of the target electronic document at the computing device. 18 BACKGROUND A user may have interest in reading a document, but have little or no knowledge of the source (e.g., author, creator, and/or publisher) of the document. For example, a user may read a news article without knowing the author of the article, the background of the author, and/or the intended audience for the document. In some instances, the document includes information that is not necessarily based on objective reasoning but is, instead, based on experiences and/or subjective opinions that are particular to the source. Some instances of source perspectives included in content of a document may be identifiable by the user without additional information. 
However, some content may include instances of a source perspective that are not readily recognizable by the reader. Further, whether content of a document is considered to include a source perspective can be a subjective determination by a user and, as a result, can vary from user to user. For example, one user can deem that certain content includes a source perspective, while another user may not. In some instances, the source of a document may be the source of additional documents, the source may be the subject of other documents, and/or additional information regarding the experiences of the source may be available. A user can attempt to manually identify these additional documents and/or information. However, substantial computational and network resources can be required for the user to search for and identify relevant additional information related to the source in order to decide whether any of the statements of the source are indicative of a source perspective. For example, the user may have to switch to another application on their computing device, issue one or more searches for additional information about the source using the other application, and review such information. This can consume both resources of the computing device (e.g., switching to the other application and reviewing information) and network resources (e.g., in issuing the searches and retrieving the information). Further, such consumption of resources is exacerbated when multiple users that view the document each perform similar searches and reviews of source information. Yet further, different users can identify different additional information when determining whether statement(s) of a source are indicative of a source perspective. This can be due to the different users performing different searches, selecting different search results, viewing different portions of selected search result document(s), etc. As a result of the different additional information, the different users can reach different conclusions with regard to whether the statement(s) are indeed indicative of a source perspective. SUMMARY Implementations described herein determine, for a given document, one or more portions of content (e.g., sentences, phrases, paragraphs, etc.) of the given document that may be influenced by a source perspective of a given source associated with the portion(s) of the content. Further, those implementations determine one or more additional documents that are related to the given source (e.g., also from the given source and/or describing the given source) and that are related to the portion(s) of content of the given document. Yet further, some of those implementations utilize the additional document(s) to determine additional content that provides context for the portions of content of the given document that may be influenced by a source perspective of the given source. A relationship between the additional content and the portions of the given document can be defined. Based on the relationship being defined, the additional content can be caused to be rendered at a client device in response to the client device accessing the given document. For example, rendering of the given document can be modified to incorporate the additional content, the additional content can be presented in a pop-up window, or a selectable indication of the additional content can be provided and, if selected, can cause the additional content to be presented.
As described herein, determining that a portion of content is a source perspective portion can be an objective determination. Further, determining additional document(s) and/or additional content based on the additional document(s) can likewise be an objective determination. Accordingly, implementations present a uniform (e.g., independent of a user's analysis) process for determining whether portion(s) of a document include a source perspective and/or for determining additional document(s) and/or additional content that are related to a source of content that includes a source perspective. As one example, a user can access a document that is related to the travel experiences of an author of the document. The document can include the phrase “Thailand is the best country in Asia.” Based on one or more terms of the phrase (e.g., “best” being a term that implies an opinion), the phrase can be identified as a phrase that may include a source perspective of the author. Additional documents associated with the author can include other articles written by the author, publicly available biographical information for the author, and/or other documents that detail the experiences of the author. The additional documents and the potential source perspective phrase can be provided as input to a trained machine learning model to generate a relatedness score between the phrase and each of the additional documents. For example, one of the documents can include information related to other countries that the author has visited. Based on a generated relatedness score between the additional document and the identified phrase being indicative of relevance of the content of the additional document and the phrase, additional content can be determined based on the additional document. For instance, the additional content can include a link to the additional document, a summary of the additional document, and/or other information regarding the author that is identified from the additional document. For example, the additional content can include a pop-up box associated with the phrase “Thailand is the best country in Asia”, where the pop-up box indicates that Thailand is the only country in Asia that the author has visited, as identified from the related additional document. A “document”, as used herein, is to be broadly interpreted and can include, for example, an article, a news item, a blog entry, a social media posting, a web page, an email, an image, an audio clip, a video clip, a quote, an advertisement, a news group posting, a word processing document, a portable document format document, and/or other documents. Further, implementations described herein can be applied to all or portions of a document. A portion of a document can include, for example, a sentence, a phrase, a title, a footnote, an advertisement, an image, a quote, an audio clip, a video clip, metadata of the document, and/or other portions. As described herein, documents can be stored in one or more corpuses and a relationship between one or more documents, source(s) thereof, and/or other entities can be defined in a knowledge graph. A “source perspective”, as used herein, is to be broadly interpreted and can include, for example, specific perspectives, basis or prior positions, predispositions, experiences, biases, inclinations, preferences, specific assumptions, opinions, and/or other perspectives that alter a representation of content from a purely objective perspective toward a subjective perspective. 
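As a rough illustration of the relatedness scoring in the example above, the following minimal sketch replaces the trained machine learning model with a simple bag-of-words cosine similarity; the phrase, the additional documents, and the threshold value are illustrative assumptions, not the model or data an actual implementation would use.

```python
# A minimal sketch of the relatedness-scoring flow described above. The trained
# machine learning model is replaced here with a simple bag-of-words cosine
# similarity; the phrase, documents, and threshold are illustrative only.
import math
import re
from collections import Counter

def _bow(text: str) -> Counter:
    """Lowercased bag-of-words representation of a text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def relatedness_score(phrase: str, additional_document: str) -> float:
    """Stand-in for the trained model: cosine similarity between term vectors."""
    a, b = _bow(phrase), _bow(additional_document)
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

phrase = "Thailand is the best country in Asia."
additional_documents = {
    "author_bio": "The author has written travel articles about Thailand and has only visited Thailand in Asia.",
    "recipe_post": "A recipe for a simple noodle soup with common pantry ingredients.",
}

# Surface additional content (e.g., a pop-up summary) only for documents whose
# score indicates relevance to the potential source perspective phrase.
for name, doc in additional_documents.items():
    score = relatedness_score(phrase, doc)
    if score >= 0.2:  # illustrative threshold
        print(f"Render pop-up for '{phrase}' based on '{name}' (score={score:.2f})")
```

In this toy example only the biographical document clears the threshold, so only it would drive additional content such as the pop-up box described above.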
Further, a source perspective can be explained by explicit content of additional documents (e.g., portion(s) of additional documents) and/or implicit content of additional documents (e.g., inferred based on portion(s) of additional documents). The above is provided as an overview of some implementations disclosed herein. Further description of these and other implementations is provided below. In some implementations, a method performed by one or more processors is provided and includes identifying a target electronic document and a source that generated the target electronic document. The method further includes processing the target electronic document to determine a source perspective portion of the target electronic document, and searching one or more corpuses to identify a plurality of additional resources that are related to the source. The method further includes, for each of the identified additional resources that are related to the source: processing corresponding additional resource features of a corresponding one of the additional resources and features of the source perspective portion to generate a corresponding relatedness score. The corresponding relatedness score indicates a degree of relatedness between the source perspective portion and the corresponding one of the additional resources. In various implementations, the relatedness score between the source perspective portion(s) and a corresponding one of the additional electronic documents indicates relatedness in the sense that it provides a basis for understanding of source perspective(s) of the source perspective portion(s) as opposed to only providing more detail on the underlying topic(s) of the source perspective portion(s). Thus, the relatedness score can represent an explanatory extent of each of the additional electronic documents for the source perspective portion(s) of the target electronic document. For example, for a source perspective portion of “Thailand is great”, a first additional resource that describes how the source is funded by a tourism commission associated with Thailand can have a higher degree of relatedness than a second additional resource that provides factual information about Thailand. Various techniques can be utilized in determining such relatedness scores, such as machine learning based techniques and/or knowledge graph based techniques disclosed herein. The method further includes, responsive to determining that the relatedness score, of a given additional resource of the additional resources, satisfies a threshold: defining, in one or more databases, a relationship between the target electronic document and additional content generated based on the given additional resource. The method further includes, subsequent to defining the relationship, and responsive to the relationship being defined: causing a computing device that is rendering the target electronic document to render at least a portion of the additional content and/or a link to the additional content, simultaneous with the rendering of the target electronic document at the computing device. These and other implementations of technology disclosed herein can optionally include one or more of the following features. In some implementations, defining the relationship between the target electronic document and the additional content generated based on the given additional resource includes defining a relationship between the source perspective portion of the target electronic document and the additional content. 
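A minimal sketch of the thresholding and relationship definition just described is shown below, using an in-memory SQLite table whose name, columns, document identifier, and threshold value are assumptions for illustration rather than the schema of an actual implementation.

```python
# A minimal sketch, under assumed table and column names, of defining the
# document-to-additional-content relationship once a relatedness score
# satisfies a threshold, and later looking it up when the document is rendered.
import sqlite3

RELATEDNESS_THRESHOLD = 0.5  # illustrative value

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE perspective_content (
           document_id TEXT,
           portion TEXT,
           additional_content TEXT,
           resource_url TEXT
       )"""
)

def maybe_define_relationship(document_id, portion, score, additional_content, resource_url):
    """Persist the relationship only when the score satisfies the threshold."""
    if score >= RELATEDNESS_THRESHOLD:
        conn.execute(
            "INSERT INTO perspective_content VALUES (?, ?, ?, ?)",
            (document_id, portion, additional_content, resource_url),
        )

def content_for_rendering(document_id):
    """Fetch additional content to render alongside the target document."""
    rows = conn.execute(
        "SELECT portion, additional_content, resource_url FROM perspective_content "
        "WHERE document_id = ?",
        (document_id,),
    )
    return rows.fetchall()

maybe_define_relationship(
    "doc-123",
    "Thailand is the best country in Asia.",
    score=0.83,
    additional_content="Thailand is the only country in Asia the author has visited.",
    resource_url="https://example.com/author-travel-history",
)
print(content_for_rendering("doc-123"))
```

Because the relationship is stored once, later renderings of the same target document can reuse it rather than repeating the search and scoring.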
In some of those implementations, causing the computing device that is rendering the target electronic document to render the at least a portion of the additional content simultaneous with the rendering of the target electronic document at the computing device includes: causing the computing device to render the at least a portion of the additional content along with rendering an indication that the at least a portion of the additional content is relevant to the source perspective portion. In some versions of those implementations, for the target electronic document, the at least a portion of the additional content is defined as having a relationship to only the source perspective portion. In some of those versions, the indication that the at least a portion of the additional content is relevant to the source perspective portion is an indication that the additional content is relevant to only the source perspective portion. In some implementations, causing the computing device that is rendering the target electronic document to render the at least a portion of the additional content simultaneous with the rendering of the target electronic document at the computing device includes: causing the computing device to initially render a selectable interface element that indicates additional content relevant to source perspective is available, without initially rendering the at least a portion of the additional content; and causing the computing device to render the at least a portion of the additional content responsive to affirmative user interface input directed to the selectable interface element. In some implementations, the method further includes generating the additional content based on the given additional resource. In some of those implementations, generating the additional content includes: including a link to the given additional resource in the additional content, including a phrase from the given additional resource in the additional content, and/or including a summary of the given additional resource in the additional content. In some implementations, the method further includes generating the additional content based on the given additional resource and based on a further additional resource of the additional resources. Generating the additional content based on the further additional resource can be responsive to the corresponding relatedness score of the further additional resource satisfying the threshold. In some implementations, processing the corresponding additional resource features of the corresponding one of the additional resources and the features of the source perspective portion to generate a corresponding relatedness score includes: applying the corresponding additional resource features and the features of the source perspective portion as input to a trained machine learning model; and generating the corresponding relatedness score based on processing the corresponding additional resource features and the features of the source perspective portion using the trained machine learning model. In some implementations, the additional resources related to the source include: other documents written by the source; documents that include references to the source; and/or one or more entries, in a knowledge graph, that are mapped to a source entry, of the knowledge graph, that corresponds to the source; and/or documents that include references to one or more terms in the target electronic document. 
In some implementations, the method further includes: processing the target electronic document to determine an additional source perspective portion of the target electronic document; and generating an additional relatedness score that indicates a degree of relatedness between the additional source perspective portion and the given additional resource. Generating the additional relatedness score is based on processing of the corresponding additional resource features and additional features of the additional source perspective portion. In some of those implementations, the method further includes determining that the additional relatedness score fails to satisfy the threshold, and defining the relationship between the target electronic document and the additional content generated based on the given additional resource includes: defining the relationship between the source perspective portion of the target electronic document and the additional content, based on the relatedness score satisfying the threshold, and refraining from defining any relationship between the additional source perspective portion of the target electronic document and the additional content, based on the additional relatedness score failing to satisfy the threshold. In some implementations, causing the computing device that is rendering the target electronic document to render the at least a portion of the additional content simultaneous with the rendering of the target electronic document at the computing device includes: causing the computing device to render the at least a portion of the additional content along with rendering an indication that the additional content is relevant to the source perspective portion. In some implementations, the source is an author, a creator, and/or a publisher. In some implementations, a method implemented by one or more processors is provided and includes: identifying a target electronic document and a source that generated the target electronic document; processing the target electronic document to determine a source perspective portion of the target electronic document; searching one or more corpuses to identify a plurality of additional resources that are related to the source; determining a relatedness score between each of the additional resources and the source perspective portion of the target electronic document; and generating a source perspective summary for the source perspective portion of the target electronic document. The source perspective summary is generated based on one or more of the additional resources and the relatedness scores of the corresponding one or more additional resources. The method further includes responsive to a request, from a computing device, for the target electronic document: causing the computing device to render an interface that includes the target electronic document with a selectable portion that, when selected, causes the source perspective summary to be rendered along with the target electronic document. These and other implementations of technology disclosed herein can optionally include one or more of the following features. In some implementations, the selectable portion includes the source perspective portion, and further includes causing the source perspective portion to be graphically demarcated from non-source perspective portions of the target electronic document. 
In some implementations, the source perspective summary, when rendered, is rendered in a separate interface portion when a user selects the selectable portion of the target electronic document. In some implementations, the selectable portion consists of the source perspective portion. In some implementations, the source perspective summary, when rendered, is rendered in a separate section of the interface from the target electronic document, and selecting the source perspective summary, when rendered, causes at least a portion of the one or more additional resources to be rendered. In some implementations, the source perspective summary is generated based on at least a first additional resource and a second additional resource of the one or more of the additional resources. In some implementations, a method implemented by one or more processors is provided and includes identifying a target electronic document, processing the target electronic document to determine a source perspective portion of the target electronic document, identifying at least one source of the target electronic document, and searching, based on identifying the at least one source of the target electronic document one or more corpuses to identify a plurality of additional resources that are explanatory of the source perspective portion of the target electronic document. The method further includes, for each of the identified additional resources that are explanatory of the source perspective portion of the target electronic document, processing corresponding additional resource features of a corresponding one of the additional resources and features of the source perspective portion to generate a corresponding relatedness score. For example, the corresponding relatedness score can represent an explanatory extent of the corresponding one of the additional resources for the source perspective portion of the target electronic document. The method further includes selecting, based on the relatedness scores, and from the identified additional resources, at least a first additional resource and a second additional resource that are explanatory of the source perspective portion of the target electronic document. The method further includes generating, based on at least first content of the first additional resource and second content of the second additional resource, a source perspective summary for the at least one source. The method further includes, subsequent to generating the source perspective summary for the at least one source, causing a computing device that is rendering the target electronic document to render the source perspective summary for the at least one source simultaneous with the rendering of the target electronic document at the computing device. These and other implementations of technology disclosed herein can optionally include one or more of the following features. In some implementations, the at least one source includes an author that penned the target electronic document, and the source perspective summary includes an author perspective summary. 
In some versions of those implementations, generating the bias summary for the at least one source based on at least the first content of the first additional resource and the second content of the second additional resource includes analyzing at least the first content of the first additional resource and the second content of the second additional resource to determine an explanation of the source perspective portion of the target electronic document for the author, and including, in the author perspective summary, the explanation of the source perspective portion of the target electronic document for the author. In some further versions of those implementations, the explanation of the source perspective portion of the target electronic document for the author includes a natural language explanation of the source perspective portion of the target electronic document or source perspective metrics for the source perspective portion of the target electronic document. In yet further versions of those implementations, generating the source perspective summary for the at least one source based on at least the first content of the first additional resource and the second content of the second additional resource includes generating a first portion of the author perspective summary based on the natural language explanation of the source perspective portions of the target electronic document, generating a second portion of the author perspective summary based on the source perspective metrics for the source perspective portions of the target electronic document, and including, in the author perspective summary, both the first portion of the author perspective summary and the second portion of the author perspective summary. In some further versions of those implementations, at least the first additional resource and the second additional resource that are explanatory of the source perspective portion of the target electronic document include other documents penned by the author, social media posts of a social media account associated with the author, or social media interactions of the social media account associated with the author. In yet further versions of those implementations, generating the bias summary for the at least one source based on at least the first content of at least the first additional resource and the second content of the second additional resource includes generating a first portion of the author perspective summary based on additional content of the other documents penned by the author, and generating a second portion of the author perspective summary based on the social media posts and the social media interactions of the social media account associated with the author. In some versions of those implementations, generating the source perspective summary for the at least one source based on at least the first content of the first additional resource and the second content of the second additional resource includes including, in the author perspective summary, a listing of links to at least the first additional resource and the second additional resource that are explanatory of the source perspective portion of the target electronic document. In some versions of those implementations, the at least one source further includes a creator that collated the target electronic document, and the source perspective summary further includes a separate creator perspective summary.
In some further versions of those implementations, the method further includes searching, based on identifying that the at least one source further includes the creator of the target electronic document, one or more of the corpuses to identify a plurality of further additional resources that are explanatory of the source perspective portion of the target electronic document. In some further versions of those implementations, the method further includes, for each of the identified further additional resources that are explanatory of the biased portion of the target electronic document, processing corresponding further additional resource features of a corresponding one of the further additional resources and the features of the biased portion to generate a corresponding additional relatedness score. For example, the corresponding additional relatedness score can represent an explanatory extent of the corresponding one of the further additional resources for the source perspective portion of the target electronic document. In some further versions of those implementations, the method further includes selecting, based on the additional relatedness scores, and from the identified further additional resources, at least a third additional resource and a fourth additional resource that are explanatory of the biased portion of the target electronic document. In some further versions of those implementations, the method further includes generating, based on at least the third content of the third additional resource and the fourth content of the fourth additional resource, the creator perspective summary for the at least one source. In yet further versions of those implementations, generating the source perspective summary for the at least one source based on at least the third content of the third additional resource and the fourth content of the fourth additional resource includes analyzing at least the third content of the third additional resource and the fourth content of the fourth additional resource to determine an explanation of the source perspective portion of the target electronic document for the creator, and including, in the creator perspective summary, the explanation of the source perspective portion of the target electronic document for the creator. In even further versions of those implementations, the explanation of the source perspective portion of the target electronic document for the creator includes a natural language explanation of the source perspective portion of the target electronic document or source perspective metrics for the source perspective portion of the target electronic document. In yet even further versions of those implementations, generating the source perspective summary for the at least one source based on at least the third content of the third additional resource and the fourth content of the fourth additional resource includes generating a first portion of the creator perspective summary based on the natural language explanation of the source perspective portion of the target electronic document, generating a second portion of the creator perspective summary based on the source perspective metrics for the source perspective portion of the target electronic document, and including, in the creator perspective summary, both the first portion of the creator perspective summary and the second portion of the creator perspective summary.
In some implementations, searching one or more of the corpuses to identify the plurality of additional resources that are explanatory of the source perspective portion of the target electronic document includes applying one or more de-duping techniques to the identified plurality of additional resources to determine a subset of the identified plurality of additional resources, generating, based on features of the subset of the identified plurality of additional resources, source perspective metrics for the source perspective portions of the target electronic document, and including, in the source perspective summary, the source perspective metrics for the source perspective portions of the target electronic document. In some implementations, one or more of the corpuses include a knowledge graph having source nodes corresponding to the at least one source connected to at least resource nodes corresponding to the plurality of additional resources. In some versions of those implementations, processing the corresponding additional resource features of the corresponding one of the additional resources and the features of the source perspective portion to generate the corresponding relatedness score for each of the identified additional resources that are explanatory of the source perspective portion of the target electronic document includes applying the knowledge graph as input across a graph neural network to generate embedding nodes corresponding to the source nodes and resource nodes of the knowledge graph, and generating, based on information included in the embedding nodes, the relatedness scores. In some implementations, a method implemented by one or more processors is provided and includes: identifying a target electronic document, processing the target electronic document to determine a source perspective portion of the target electronic document, identifying a publisher that published the target electronic document, and searching, based on identifying the publisher that published the target electronic document, one or more corpuses to identify a plurality of additional resources that are explanatory of the source perspective portion of the target electronic document and that are also published by the publisher. The method further includes, for each of the identified additional resources that are explanatory of the source perspective portion of the target electronic document, processing corresponding additional resource features of a corresponding one of the additional resources and features of the source perspective portion to generate a corresponding relatedness score. For example, the corresponding relatedness score can represent an explanatory extent of the corresponding one of the additional resources for the source perspective portion of the target electronic document. The method further includes selecting, based on the relatedness scores, and from the identified additional resources, at least a first additional resource and a second additional resource that are explanatory of the source perspective portion of the target electronic document, and generating, based on at least first content of the first additional resource and second content of the second additional resource, a publisher perspective summary.
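As a rough illustration of the knowledge-graph idea described above, the toy sketch below propagates node features with one round of neighbor averaging, which is a drastically simplified stand-in for a graph neural network, and reads relatedness off as cosine similarity between the resulting embeddings; the graph, node names, and feature values are invented for illustration.

```python
# A toy sketch of the knowledge-graph approach above: node features are
# propagated with one round of neighbor averaging (a drastically simplified
# stand-in for a graph neural network), and relatedness is read off as cosine
# similarity between the resulting embeddings. Graph contents are invented.
import math

# Adjacency: the source node is connected to resource nodes it authored or appears in.
edges = {
    "author:jim_smith": ["resource:thai_tourism_post", "resource:asia_recipes"],
    "resource:thai_tourism_post": ["author:jim_smith"],
    "resource:asia_recipes": ["author:jim_smith"],
}

# Initial node features (e.g., topic indicators); values here are illustrative.
features = {
    "author:jim_smith": [1.0, 0.2, 0.7],
    "resource:thai_tourism_post": [0.9, 0.1, 0.8],
    "resource:asia_recipes": [0.1, 0.9, 0.2],
}

def propagate(features, edges):
    """One message-passing step: average a node's features with its neighbors'."""
    updated = {}
    for node, vec in features.items():
        neighbors = [features[n] for n in edges.get(node, [])]
        stacked = [vec] + neighbors
        updated[node] = [sum(col) / len(stacked) for col in zip(*stacked)]
    return updated

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

embeddings = propagate(features, edges)
source = embeddings["author:jim_smith"]
for node in ("resource:thai_tourism_post", "resource:asia_recipes"):
    print(node, round(cosine(source, embeddings[node]), 3))
```

An actual implementation would use learned message-passing weights and richer node features; the sketch only shows how relatedness scores can be derived from graph-based embeddings.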
The method further includes, subsequent to generating the publisher perspective summary, causing a computing device that is rendering the target electronic document to render the publisher perspective summary simultaneous with the rendering of the target electronic document at the computing device. These and other implementations of technology disclosed herein can optionally include one or more of the following features. In some implementations, generating the publisher perspective summary based on at least the first content of the first additional resource and the second content of the second additional resource includes analyzing at least the first content of the first additional resource and the second content of the second additional resource to determine an explanation of the source perspective portion of the target electronic document for the publisher, and including, in the publisher perspective summary, the explanation of the source perspective portion of the target electronic document for the publisher. In some versions of those implementations, the explanation of the source perspective portion of the target electronic document for the publisher includes a natural language explanation of the source perspective portion of the target electronic document or source perspective metrics for the source perspective portions of the target electronic document. In some further versions of those implementations, generating the publisher perspective summary based on at least the first content of the first additional resource and the second content of the second additional resource includes generating a first portion of the publisher perspective summary based on the natural language explanation of the source perspective portion of the target electronic document, generating a second portion of the publisher perspective summary based on the source perspective metrics for the source perspective portions of the target electronic document, and including, in the publisher perspective summary, both the first portion of the publisher perspective summary and the second portion of the publisher perspective summary. In some implementations, a system is provided and includes a database, memory storing instructions, and one or more processors executing the instructions, stored in the memory, to cause the one or more processors to: identify a target electronic document, process the target electronic document to determine one or more source perspective portions of the target electronic document, identify at least one source of the target electronic document, and search, based on identifying the at least one source of the target electronic document, one or more corpuses to identify a plurality of additional resources that are explanatory of one or more of the source perspective portions of the target electronic document. The instructions further cause the one or more processors to, for each of the identified additional resources that are explanatory of one or more of the source perspective portions of the target electronic document, process corresponding additional resource features of a corresponding one of the additional resources and features of one or more of the source perspective portions to generate a corresponding relatedness score. For example, the corresponding relatedness score can represent an explanatory extent of the corresponding one of the additional resources for the source perspective portion of the target electronic document.
The instructions further cause the one or more processors to select, based on the relatedness scores, and from the identified additional resources, at least a first additional resource and a second additional resource that are explanatory of one or more of the source perspective portions of the target electronic document, and generate, based on at least first content of the first additional resource and second content of the second additional resource, a source perspective summary for the at least one source. The instructions further cause the one or more processors to, subsequent to generating the source perspective summary for the at least one source, and responsive to receiving an indication to view the source perspective summary from a user consuming the target electronic document, cause a computing device that is rendering the target electronic document to render the source perspective summary for the at least one source along with the rendering of the target electronic document at the computing device. In some implementations, a method implemented by one or more processors is provided and includes identifying a target electronic document, processing the target electronic document to determine a source perspective portion of the target electronic document, identifying a publisher that published the target electronic document, and searching, based on identifying the publisher that published the target electronic document, one or more corpuses to identify a plurality of additional resources that are also published by the publisher. The method further includes, for each of the identified additional resources that are also published by the publisher, processing corresponding additional resource features of a corresponding one of the additional resources and features of the source perspective portion to generate a corresponding relatedness score. For example, the corresponding relatedness score can represent an explanatory extent of the corresponding one of the additional resources for the source perspective portion of the target electronic document. The method further includes, selecting, based on the relatedness scores, at least a first additional resource that is published by the publisher and a second additional resource that is published by the publisher. The method further includes, responsive to selecting at least the first additional resource and the second additional resource, generating a publisher perspective summary based on first content from the first additional resource and based on second content from the second additional resource, and defining, in one or more databases, a relationship between the target electronic document and the publisher perspective summary generated based on at least the first content from the first additional resource and based on the second content from the second additional resource. The method further includes, subsequent to defining the relationship, and responsive to the relationship being defined, causing a computing device that is rendering the target electronic document to render the publisher perspective summary simultaneous with the rendering of the target electronic document at the computing device. These and other implementations of technology disclosed herein can optionally include one or more of the following features. 
In some implementations, generating the publisher perspective summary based on at least the first content from the first additional resource and the second content from the second additional resource includes including, in the publisher perspective summary, both the first content and the second content. In some versions of those implementations, the first content is first text and the second content is second text. In some further versions of those implementations, the publisher perspective summary includes a single sentence that incorporates the first text and the second text. In some versions of those implementations, causing the computing device that is rendering the target electronic document to render the publisher perspective summary simultaneous with the rendering of the target electronic document at the computing device includes causing the computing device to render the publisher perspective summary along with rendering an indication that the publisher perspective summary is relevant to the source perspective portion. In some implementations, causing the computing device that is rendering the target electronic document to render the publisher perspective summary simultaneous with the rendering of the target electronic document at the computing device includes causing the computing device to initially render a selectable interface element that indicates additional content relevant to a source perspective is available, without initially rendering the publisher perspective summary, and causing the computing device to render the publisher perspective summary responsive to affirmative user interface input directed to the selectable interface element. In some implementations, a method implemented by one or more processors is provided and includes identifying a target electronic document, processing the target electronic document to determine a source perspective portion of the target electronic document, identifying an author that authored the target electronic document, searching, based on identifying the author that authored the target electronic document, one or more corpuses to identify a plurality of additional resources that are also authored by the author. The method further includes, for each of the identified additional resources that are also authored by the author, processing corresponding additional resource features of a corresponding one of the additional resources and features of the source perspective portion to generate a corresponding relatedness score. For example, the corresponding relatedness score can represent an explanatory extent of the corresponding one of the additional resources for the source perspective portion of the target electronic document. The method further includes selecting, based on the relatedness scores, at least a first additional resource that is authored by the author and a second additional resource that is authored by the author. The method further includes, responsive to selecting at least the first additional resource and the second additional resource, generating an author perspective summary based on at least first content from the first additional resource and second content from the second additional resource, and defining, in one or more databases, a relationship between the target electronic document and the author perspective summary generated based on the first content from the first additional resource and based on the second content from the second additional resource. 
The method further includes, subsequent to defining the relationship, and responsive to the relationship being defined, causing a computing device that is rendering the target electronic document to render the author perspective summary simultaneous with the rendering of the target electronic document at the computing device. These and other implementations of technology disclosed herein can optionally include one or more of the following features. In some implementations, generating the author perspective summary based on at least the first content from the first additional resource and the second content from the second additional resource includes including, in the author perspective summary, both the first content and the second content. In some versions of those implementations, the first content is first text and the second content is second text. In some further versions of those implementations, the author perspective summary includes a single sentence that incorporates the first text and the second text. In yet further versions of those implementations, generating the author perspective summary further includes including, in the author perspective summary, a first link to the first additional resource and a second link to the second additional resource. The first link and the second link are included in the author perspective summary based on the first additional resource and the second additional resource being utilized in generating the author perspective summary. In yet further versions of those implementations, causing the computing device that is rendering the target electronic document to render the author perspective summary simultaneous with the rendering of the target electronic document at the computing device includes causing the computing device to render the author perspective summary along with rendering an indication that the author perspective summary is relevant to the source perspective portion. In yet further versions of those implementations, causing the computing device that is rendering the target electronic document to render the author perspective summary simultaneous with the rendering of the target electronic document at the computing device includes causing the computing device to initially render a selectable interface element that indicates additional content relevant to an author perspective is available, without initially rendering the author perspective summary, and causing the computing device to render the author perspective summary responsive to affirmative user interface input directed to the selectable interface element. In addition, some implementations include one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s), and/or tensor processing unit(s) (TPU(s)) of one or more computing devices, where the one or more processors are operable to execute instructions stored in associated memory, and where the instructions are configured to cause performance of any of the aforementioned methods. Some implementations also include one or more non-transitory computer readable storage media storing computer instructions executable by one or more processors to perform any of the aforementioned methods. It should be appreciated that all combinations of the foregoing concepts and additional concepts described in greater detail herein are contemplated as being part of the subject matter disclosed herein. 
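As an illustration of the summary composition described above, in which the first and second text are folded into a single sentence and links to the contributing resources are included, here is a minimal sketch; the snippets and URLs are placeholders, not content from any actual resource.

```python
# A minimal sketch of composing a perspective summary that folds the first and
# second content into a single sentence and appends links to the resources the
# summary was generated from. Texts and URLs are illustrative placeholders.
def build_perspective_summary(label, first, second):
    """Combine two extracted snippets into one sentence, followed by source links.

    `first` and `second` are (text, url) pairs taken from the selected resources.
    """
    first_text, first_url = first
    second_text, second_url = second
    sentence = f"{label}: {first_text.rstrip('.')}, and {second_text[0].lower()}{second_text[1:].rstrip('.')}."
    links = [first_url, second_url]
    return {"summary": sentence, "links": links}

summary = build_perspective_summary(
    "Author perspective",
    ("The author has visited only one country in Asia", "https://example.com/bio"),
    ("The author's prior articles were sponsored by a tourism board", "https://example.com/sponsorships"),
)
print(summary["summary"])
print(*summary["links"], sep="\n")
```

The returned links correspond to the resources actually used, mirroring the requirement that the first and second links be included because the first and second additional resources contributed to the summary.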
For example, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the subject matter disclosed herein. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram of an example environment in which implementations disclosed herein can be implemented. FIG. 2 illustrates a target electronic document with biased portions highlighted. FIG. 3 illustrates an example interface that includes a target electronic document rendered with biased portions that are associated with additional content highlighted. FIG. 4A illustrates an example of the additional content rendered along with the target electronic document. FIG. 4B illustrates an example of additional content rendered based on affirmative input by the user. FIG. 5 illustrates a flowchart of an example method for rendering an electronic document with additional content and/or a bias summary associated with a biased portion of the target electronic document. FIG. 6 illustrates an example architecture of a computing device. FIG. 7A illustrates an example interface that includes a bias summary. FIG. 7B illustrates another example interface that includes a bias summary. DETAILED DESCRIPTION Now turning to FIG. 1, an example environment in which techniques disclosed herein can be implemented is illustrated. The example environment includes a client device 105 and a remote computer 110. Although both the client device 105 and the remote computer 110 are each illustrated in FIG. 1 as single components, it is understood that one or more modules and/or aspects of either can be implemented, in whole or in part, by one or more other devices. For example, in some implementations a first set of modules and/or aspects are implemented by one or more processors of a first remote system, and a second set of modules and/or aspects are implemented by one or more processors of one or more separate remote server device(s) that are in network communication with the remote computer 110. The remote server system(s) can be, for example, a cluster of high performance remote server device(s) that handle requests from one or more client devices, as well as requests from additional devices. Client device 105 can be a mobile phone computing device, a tablet computing device, and/or a wearable apparatus of the user that includes a computing device (e.g., a watch of the user having a computing device, glasses of the user having a computing device, a virtual or augmented reality computing device). Additional and/or alternative client devices can be provided. Further, one or more components of client device 105 can be implemented on separate devices. For example, application(s) 107 can be implemented on one or more alternate computing devices that are in communication with client device 105. Components of client device 105 and components of remote computer 110 can communicate via a communication network. The communication network can include, for example, a wide area network (WAN) (e.g., the Internet). Further, components of client device 105 can communicate with one or more other components via a communication network. For example, communication network can include a local area network (LAN) and/or BLUETOOTH and can communicate with one or more other devices via the LAN and/or BLUETOOTH (e.g., an automated assistant device communicating with a handheld computing device of a user). Client device 105 includes one or more applications 107 that can each be utilized to render content to a user of the client device. 
For example, a user can utilize one of the application(s) 107 (e.g., a web browser application or an automated assistant application) to provide a search query to a search engine, and the search engine can provide result(s) responsive to the search query. The user can view results provided by the search engine, and click on (or otherwise select) one of the results to cause the application 107 to render a corresponding document and/or other content corresponding to the search query. The user can interact with the application 107 via one or more input devices of the client device 105, such as a keyboard, a mouse and/or other input device that can select an area of the interface, voice controls, touchscreen controls, and/or other input methods that allow the user to submit input and select content to be rendered. In some implementations, one or more modules of client device 105 and/or remote computer 110 can render a document via one of the application(s) 107. For example, the user can interact with a search engine by providing a search query and the search engine can provide the user with one or more documents (or selectable indications of documents) that can be rendered to the user. The user can then view the rendered content via the application 107 and can, in some instances, interact with the rendered content to be provided with additional content (e.g., selecting links in documents, selecting graphical user interface buttons). As another example, the user can navigate, within one of the application(s) 107, to the document directly. As an example, a user can be provided, via one of the application(s) 107, with a search result that is responsive to a submitted query of “Articles about travel to Asia”. The user can select one of the search results, and one of the application(s) 107 can render the document that is associated with the selected link. As used herein, the document of interest to the user will be referred to as the “target electronic document.” This can be a document that is rendered based on a search query, as previously described, and/or a document that is otherwise rendered via one or more application(s) executing on the client device 105. In many instances, a target electronic document is associated with at least one source. The at least one source can include an author of the document, the publisher of the document, and/or a creator of the document. The publisher of a document can be, for example, a website that hosts the document and/or a corporation that prepares and/or publishes the document. For example, a news agency that prepares and/or publishes a news article can be the publisher of the news article document. The creator of a document can be one or more individuals that collated content of the document, but that didn't necessarily originally author the content of the document. The author of a document can be the individual that penned the textual content of the document and/or generated other portions (e.g., images) of the target electronic document. For example, a target electronic document that is a news story can specify the source in the text of the document, can specify the source in metadata associated with the document, and/or the source can be identified based on content of another related document.
Because the author is human, the creator includes one or more humans, and humans act on behalf of the publisher, and because those human(s) have unique experiences and opinions, one or more portions of the target electronic document may include a source perspective based on those opinions and/or experiences. In some instances, the reader may not be aware of these experiences and/or opinions and may accept the content of the document as objective even if one or more portions may in fact be skewed by the opinion of the source. To determine whether portion(s) of a target electronic document include a source perspective, a user may have to view multiple resources to determine the source perspective, if one even exists. Further, the user may navigate through multiple documents to determine the source perspective and may not find an additional resource that is related to a source perspective (and further, may not know when to stop searching for content that explains a source perspective). Thus, additional computing resources and time may be expended, sometimes uselessly (i.e., if no source perspective can be determined from additional resources), for a user to determine whether a source perspective exists. Thus, by providing the user with indications in a target electronic document of portions that may include a source perspective and further providing the user with additional resources to allow the user to determine whether a particular portion actually includes a source perspective, it is unnecessary for the user to perform additional searching. Further, the user can be provided with a summary of additional resources within the target electronic document, which reduces the need for the user to navigate away from a target electronic document to assess additional resource(s) that can indicate a potential source perspective. Moreover, implementations present an objective and uniform process for determining whether portion(s) of a document include a source perspective and/or for determining additional document(s) and/or additional content that are related to a portion that includes a source perspective. Accordingly, whether portions of a document are considered to include a source perspective and/or additional content that is presented for source perspective portion(s) can be determined independent of subjective considerations of a user to whom the additional content is presented. Source perspective identification engine 115 determines whether one or more portions of a target electronic document include source perspective content and, if so, flags such portion(s) as including a source perspective. Source perspective portions of a target electronic document are portions of the document that indicate that the source may have included such portions based on specific perspectives, basis or prior positions, predispositions, experiences, biases, inclinations, preferences, specific assumptions, opinions, and/or other perspectives that alter a representation of content from a purely objective perspective toward a subjective perspective, and not on objective facts. As described herein, source perspective identification engine 115 can utilize various techniques to determine that a portion of a target electronic document includes a source perspective. It is noted that, in various implementations, a determination that a portion of a target document includes a source perspective does not necessarily conclusively mean that the portion is a source perspective.
Rather, it means that source perspective identification engine 115 has determined, utilizing one or more objective techniques such as those disclosed herein, that a feature of the portion and/or a measure determined based on the portion, indicates that the portion has at least a threshold probability of including a source perspective. Referring to FIG. 2, an example of a target electronic document is provided. The target electronic document 200 includes portions 205 and 210 that may include a source perspective. Source perspective identification engine 115 can determine that portions 205 and 210 include a source perspective based on one or more terms included in the portions, based on similarity between the portions and one or more other documents that have been annotated to indicate that a portion includes a source perspective, and/or based on other methods that determine that a portion includes a source perspective. Portion 205 includes one or more terms that source perspective identification engine 115 can identify as terms that likely indicate a source perspective. For example, the portion 205 states that “Thailand is the best country in Asia to visit.” In some implementations, source perspective identification engine 115 can identify one or more terms, such as “best,” as terms that are often indicative of a source perspective and not based wholly on objective facts. Thus, in some of those implementations, source perspective identification engine 115 can determine that portion 205 includes a source perspective, based at least in part on presence of the term “best”. Other terms that can be indicative of source perspective are “I” and/or “I think,” other superlatives (“greatest,” “worst,” etc.), and/or other terms that indicate that the corresponding portion of the document is influenced by the author's opinions, assumptions, biases, and/or other subjective criteria. In some implementations, source perspective identification engine 115 can additionally or alternatively determine that a portion of the target electronic document is a source perspective portion based on comparison between the portion and one or more annotated documents (e.g., human annotated documents). For example, one or more humans can be provided with a number of documents and the user can annotate each document with an indication of whether the document includes a source perspective, a score indicative of the level of source perspective in a document, and/or other annotations that can be utilized by source perspective identification engine 115 to determine whether a portion is similar to other documents that include source perspective portion(s). For example, source perspective identification engine 115 can compare portion 210, which states that “Famous food travel expert Anthony Example has traveled numerous times to Thailand,” with other annotated documents and, based on the similarities between the portion 210 and documents that are annotated as including source perspective portion(s), determine that portion 210 is a source perspective portion. Portion 210 can be determined to be a source perspective portion based on a declaration by the author that Anthony Example is “a famous food travel expert.” In some implementations, source perspective identification engine 115 additionally or alternatively utilizes a trained machine learning model in determining whether a portion of a document includes a source perspective. 
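Before turning to the learned model described next, the term-based signal can be illustrated with a short sketch; the term list and example document below are illustrative only and are not the engine's actual vocabulary.

```python
# A minimal illustration of the term-based signal described above: sentences
# containing superlatives or first-person opinion markers are flagged as
# candidate source perspective portions. The term list and document are
# illustrative, not the engine's actual vocabulary.
import re

PERSPECTIVE_TERMS = {"best", "worst", "greatest", "i", "famous", "amazing"}

def candidate_perspective_sentences(document: str):
    """Return sentences whose terms suggest a possible source perspective."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    flagged = []
    for sentence in sentences:
        tokens = set(re.findall(r"[a-z']+", sentence.lower()))
        if tokens & PERSPECTIVE_TERMS or "i think" in sentence.lower():
            flagged.append(sentence)
    return flagged

doc = (
    "Thailand is the best country in Asia to visit. "
    "Famous food travel expert Anthony Example has traveled numerous times to Thailand. "
    "The flight from Bangkok to Chiang Mai takes about one hour."
)
for sentence in candidate_perspective_sentences(doc):
    print("Possible source perspective:", sentence)
```

Run on the example text, the first two sentences (corresponding to portions 205 and 210) are flagged while the purely factual third sentence is not.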
For example, the trained machine learning model can be trained based on training instances that each include training instance input of a portion of text (and/or a representation of the portion of text), and training instance output that indicates whether the portion of text is a source perspective portion. As one particular example, the trained machine learning model can be a feed forward neural network and the training instance inputs can each be an embedding (e.g., a Word2Vec embedding) of a corresponding portion of text, and the training instance outputs can each be a human labeled indication of whether the corresponding portion of text is a source perspective portion. For instance, the training instance outputs can be “1” if a corresponding portion of text is deemed “highly likely to include” a source perspective and an explanation thereof, “0.75” if the corresponding portion of text is deemed “highly likely to include” a source perspective but no explanation thereof, “0.5” if the corresponding portion of text is deemed “possibly includes” a source perspective or an explanation thereof, “0.25” if the corresponding portion of text is deemed “likely does not include” a source perspective or an explanation thereof, and “0” if the corresponding portion of text is deemed “highly likely does not include” a source perspective or an explanation thereof. As another example, the trained machine learning model can be a recurrent neural network that accepts portions of text on a term-by-term or token-by-token basis, the training instance inputs can each be a corresponding portion of text, and the training instance outputs can each be a human labeled indication of whether the corresponding portion of text is a source perspective portion. In use, source perspective identification engine 115 can process a portion of text, using the trained machine learning model, to generate a measure that indicates whether the portion is a source perspective portion, and determine whether the portion is actually a source perspective portion based on the measure. For example, if the measure satisfies a threshold (e.g., greater than 0.5), the source perspective identification engine 115 can determine the corresponding portion of text includes a source perspective. The machine learning model can also be updated based on training instances generated by a user (e.g., as described with respect to FIGS. 7A and 7B). Additional resource engine 120 searches to identify additional resources that are related to at least one source of the target electronic document. In some implementations, to conserve network and/or computation resources, additional resource engine 120 searches to identify additional resources for the target electronic document only if source perspective portion(s) of the target electronic document have been identified by source perspective identification engine 115. In some implementations, additional resource engine 120 can identify documents that are associated with the source(s), such as documents that were written by an author of the target document, documents that mention and/or quote the author, documents that are published by a publisher of the target document, documents that mention the publisher, documents that are created by a creator of the target document, and/or other documents that can indicate a source perspective of source(s) of the target document.
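A minimal sketch of the portion-level detection described above might combine a cue-term check with a thresholded model measure. The cue terms, the stand-in embedding, and the tiny scorer below are illustrative assumptions rather than the actual implementation used by source perspective identification engine 115:

```python
import math
import random
import re
from typing import List

# Illustrative cue terms that often signal a source perspective ("best",
# "worst", "I think", etc.); not an exhaustive or authoritative list.
PERSPECTIVE_CUES = [r"\bbest\b", r"\bworst\b", r"\bgreatest\b", r"\bI think\b"]

def has_perspective_cue(portion: str) -> bool:
    """True if the portion contains any cue term."""
    return any(re.search(p, portion, re.IGNORECASE) for p in PERSPECTIVE_CUES)

def stand_in_embedding(text: str, dim: int = 16) -> List[float]:
    """Placeholder for a Word2Vec-style embedding of the portion (hypothetical)."""
    rng = random.Random(hash(text) % (2**32))
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

class TinyPerspectiveScorer:
    """Single-layer stand-in for the trained feed forward model; emits a
    measure in [0, 1] that is compared against a threshold (e.g., 0.5)."""

    def __init__(self, dim: int = 16, seed: int = 0):
        rng = random.Random(seed)
        self.weights = [rng.uniform(-0.5, 0.5) for _ in range(dim)]

    def measure(self, portion: str) -> float:
        z = sum(w * x for w, x in zip(self.weights, stand_in_embedding(portion)))
        return 1.0 / (1.0 + math.exp(-z))  # squash into [0, 1]

def is_source_perspective_portion(portion: str, scorer: TinyPerspectiveScorer,
                                  threshold: float = 0.5) -> bool:
    """Flag the portion if a cue term is present or the measure clears the threshold."""
    return has_perspective_cue(portion) or scorer.measure(portion) > threshold

print(is_source_perspective_portion("Thailand is the best country in Asia to visit.",
                                    TinyPerspectiveScorer()))
```

Because the stand-in weights are untrained, the model measure here carries no meaning; the cue-term branch is what makes the example deterministic, and the point is only the shape of the detection step.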
In some implementations, additional resource engine 120 can utilize a search query that includes one or more terms from source perspective portion(s) of the target electronic document (or based on the source perspective portion(s)) to identify additional resources that may be pertinent to the source perspective portion(s). Such a search query can also include a name of source(s) of the target electronic document, or be restricted to a search of document(s) by and/or related to one or more of the source(s), to identify additional resources that are generated (e.g., authored, published, and/or created) by the source and that may be pertinent to one or more of the source perspective portions of the document. For example, referring to FIG. 2, additional resource engine 120 can submit a search query of “author Jim Smith” to be provided with documents related to the author. Also, for example, additional resource engine 120 can additionally or alternatively submit a search query of “Thailand”, with a restriction of “author: Jim Smith” to a search engine to be provided with documents related to the author that are also related to the subject matter of the document. Also, for example, additional resource engine 120 can additionally or alternatively submit a search query of “Thailand” with a restriction of “author: Jim Smith” to identify document(s) related to portion 205, and submit a search query of “Anthony Example Thailand” to identify document(s) related to portion 210. As yet another example, if target electronic document 200 is published by Hypothetical News Corporation, additional resource engine 120 can additionally or alternatively submit a search query with a restriction of “publisher: Hypothetical News Corporation”, and optionally with a restriction of “author: Jim Smith”. If both restrictions are included, identified additional resources will be restricted to those that are by “Jim Smith” and published by “Hypothetical News Corporation”. As mentioned above, in some implementations, the additional resources can include other documents written by the same author as the target electronic document, published by the same publisher as the target electronic document, and/or created by the same creator as the target electronic document. For example, a search query seeking additional resources for a target electronic document penned by “Jim Smith” can include “Jim Smith”, or a restrict identifier of “Jim Smith”, or the search corpus can be restricted to document(s) penned by “Jim Smith”. For example, additional resource engine 120 can search one or more databases, such as a database that includes author names and authored documents, to identify the documents that were penned by the author. In some implementations, the additional resources can include one or more documents that include a reference to the source of the target electronic document. For example, one or more documents can include a biography of an author and/or otherwise reference the author (but are not necessarily penned by the author). To identify additional resource(s) about a source, a search query seeking the additional resources can include the source's name, or the search corpus can be restricted to document(s) that have a defined relationship to the source (e.g., in a database that maps documents to corresponding entities referenced in the documents). Referring again to FIG. 
2, portion 205 states that “Thailand is the best country in Asia to visit.” Additional resource engine 120 can identify a website and/or other document as an additional resource based on the document including a biography of the author that indicates “Jim Smith is a travel agent that specializes in trips to Thailand,” which can be utilized by a reader to assess whether a statement in the target electronic document is a source perspective. In some implementations, additional resource engine 120 can identify one or more documents that include references to one or more terms included in the target electronic document. For example, portion 210 includes a reference to “Anthony Example” and the author may be basing a statement on the opinion of another person and/or may be making a statement about a person and/or other subject that reflects their own perspective. For example, the statement “Thailand is a huge country” may be an opinion of the author. Thus, additional information related to Thailand's population and/or area may assist a reader in determining whether the country is in fact “huge.” To identify the source perspective of another author and/or person, additional resource engine 120 can search to identify additional resources that can indicate a source perspective of the author regarding another subject. In some implementations, additional resource engine 120 can identify one or more additional resources based on entries in a knowledge graph. For example, additional resource engine 120 can identify an entry for the source in a knowledge graph. Further, the entry for the source can be mapped, in the knowledge graph (directly and/or indirectly), to one or more additional entries that are related to document(s) that have been generated by the source of the target electronic document. The additional resource engine 120 can identify the document(s) for the one or more additional entries based on those entries being mapped, in the knowledge graph, to the entry for the source. As another example, the entry for the source can be further mapped, in the knowledge graph, to one or more additional entries that each define a corresponding curated resource for the source, and one or more of the corresponding curated resources can be utilized as an additional resource. For instance, a curated resource for an author can include information indicating topic(s) for which the author is considered an expert, topic(s) about which the author has written, and/or other information. Also, for instance, a curated resource for a publisher can include information indicating topic(s) for which the publisher is considered an expert, topic(s) about which the publisher has published, verified biases of the publisher, and/or other information. Such a curated resource for a source can be utilized as an additional resource. For each of the identified additional resources, additional resource scorer 125 can optionally determine one or more relatedness scores that are each indicative of relatedness between the additional resource (or a portion of the additional resource) and the electronic document. For example, the additional resource scorer 125 can determine, for a given additional resource, a first relatedness score between the given resource and a first source perspective portion of the target electronic document, a second relatedness score between the given resource and a second source perspective portion of the target electronic document, etc.
For instance, the additional resource scorer 125 can determine the first relatedness score based on comparison of the first source perspective portion to one or more aspects of the given resource, can determine the second relatedness score based on comparison of the second source perspective portion to one or more aspects of the given resource (the same and/or alternative aspect(s)), etc. Further, as described herein, based on the multiple relatedness scores for the given resource, the given resource can be determined to be relevant to (and stored in association with) only some of multiple source perspective portions (e.g., only one source perspective portion). Additional resource scorer 125 can determine a relatedness score based on comparison of features of a given additional resource to identified source perspective portion(s) of the target electronic document. For example, additional resource engine 120 can identify an additional resource that mentions “Anthony Example” and additional resource scorer 125 can determine a relatedness score for the portion 210 and the additional resource that is more indicative of relatedness than is a relatedness score for the portion 205 and the additional resource. This can be based on term(s) in the additional resource matching (soft and/or exact) term(s) in the portion 210, but failing to match term(s) in the portion 205 (e.g., the portion 205 does not mention “Anthony Example”, and the additional resource may not include any content related to “Thailand”). Also, for example, additional resource engine 120 can identify a document that includes the term “Thailand” and additional resource scorer 125 can determine a relatedness score for the additional resource and portion 205 that is more indicative of relatedness than is a relatedness score for the additional resource and portion 210 (e.g., the additional resource may include the terms “Thailand” and “Asia”, which are included in portion 205, but lack the term “Anthony Example”). In some implementations, in determining a relatedness score between portion(s) of a target electronic document and an additional resource, additional resource scorer 125 can process features of the additional resource and features of the source perspective portion(s) using a trained machine learning model, and generate the relatedness score based on such processing. For example, features of a given portion and features of the additional resource can be processed to generate a relatedness score between the given portion and the additional resource. In some implementations, the machine learning model can be trained based on training instances that each include training instance input of: a source perspective portion of text (and/or a representation of the source perspective portion of text), and content from a corresponding additional resource (and/or a representation of the content); and training instance output that indicates whether the content from the corresponding additional resource provides additional context for the source perspective portion of text.
As one particular example, the trained machine learning model can be a feed forward neural network and the training instance inputs can each be: an embedding (e.g., a Word2Vec embedding) of a corresponding source perspective portion of text, and an embedding of corresponding content from a corresponding additional resource (e.g., a Word2Vec or other embedding of a snippet of text identified based on including one or more term(s) in common with the source perspective portion). The training instance outputs can each be a human labeled indication of whether the corresponding content from the corresponding additional resource provides additional context for the source perspective portion of text. For instance, the training instance outputs can be “1” if a corresponding portion of text is deemed “fully explanatory of the source perspective”, “0.5” if the corresponding portion of text is deemed “somewhat explanatory of the source perspective”, and “0” if the corresponding portion of text is deemed “unrelated to the source perspective”. Thus, the training instance outputs can not only be weighted based on whether the additional resources are related to the corresponding source perspective portion of text, but also weighted based on the extent to which the additional resource explains a source perspective portion of a target electronic document. Additional and/or alternative machine learning models can be utilized, such as those having architectures utilized in determining whether two pieces of content are similar, but using “whether one piece of content explains source perspective in the other piece of content” as a supervisory signal instead of similarity. In use, additional resource scorer 125 can process source perspective portion(s) (or features thereof) and content from an additional resource (or features thereof), using the trained machine learning model, to generate a relatedness score that indicates whether the content from the additional resource is explanatory of a source perspective in the source perspective portion, and determine whether the additional resource explains that source perspective based on the relatedness score. In various implementations, the relatedness score between the source perspective portion(s) and a corresponding one of the additional resources (e.g., documents) indicates relatedness in the sense that it provides a basis for understanding of source perspective(s) of the source perspective portion(s) as opposed to only providing more detail on the underlying topic(s) of the source perspective portion(s). Thus, the relatedness score can represent an explanatory extent of each of the additional electronic documents (or a portion thereof) for the source perspective portion(s) of the target electronic document. For example, for a source perspective portion of “Thailand is great”, a first additional resource that describes how the source is funded by a tourism commission associated with Thailand can have a higher degree of relatedness than a second additional resource that provides factual information about Thailand. As described herein, both the source perspective identification engine 115 and the additional resource scorer 125 utilize machine learning models.
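To make the pairwise scoring concrete, the sketch below shows only the interface implied by the description: a source perspective portion and candidate content from an additional resource go in, and a relatedness score in [0, 1] comes out. The embedding stand-in and the randomly initialized weights are assumptions; a real scorer would be trained with the "explains the source perspective" labels described above:

```python
import math
import random
from typing import List

def stand_in_embedding(text: str, dim: int = 8) -> List[float]:
    """Placeholder for an embedding of a piece of text (hypothetical)."""
    rng = random.Random(hash(text) % (2**32))
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

class ExplanatoryRelatednessScorer:
    """Stand-in for the second trained model: scores how well candidate
    content explains the source perspective of a portion, not how similar
    the two texts are."""

    def __init__(self, dim: int = 8, seed: int = 1):
        rng = random.Random(seed)
        self.weights = [rng.uniform(-0.5, 0.5) for _ in range(2 * dim)]

    def score(self, perspective_portion: str, candidate_content: str) -> float:
        # Concatenate the two embeddings and squash the weighted sum into [0, 1].
        pair = stand_in_embedding(perspective_portion) + stand_in_embedding(candidate_content)
        z = sum(w * x for w, x in zip(self.weights, pair))
        return 1.0 / (1.0 + math.exp(-z))

scorer = ExplanatoryRelatednessScorer()
portion = "Thailand is great"
for content in ["The author is funded by a tourism commission associated with Thailand.",
                "Thailand has a population of roughly 70 million."]:
    print(round(scorer.score(portion, content), 3), content)
```

Because the weights are untrained, the printed scores carry no meaning; the point is the pair-in, score-out shape of the component and its separation from the detection model.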
In various implementations, the source perspective identification engine 115 utilizes a first trained machine learning model, and the additional resource scorer 125 utilizes a distinct, second trained machine learning model. The source perspective identification engine 115 can process a portion(s) of text of electronic document(s), using a trained machine learning model, to generate a measure that indicates whether the portion(s) of the document(s) is a source perspective portion, and to determine whether the portion(s) is actually a source perspective portion based on the measure. For example, features of a potential source perspective portion of “University of Blue basketball is the best” from a news article authored by John Smith can be applied as input to the trained machine learning model, and the measure generated based on output of the machine learning model can indicate whether the potential source perspective portion of “University of Blue basketball is the best” is in fact a source perspective of the author John Smith. In some implementations, the additional resource scorer 125 can process a source perspective portion of a target electronic document and candidate segment(s) of an additional document, using a trained machine learning model, to generate a relatedness score for the additional document (or at least for the candidate segment(s) of the additional document). In some versions of those implementations, the relatedness score indicates relatedness, between the source perspective portion of the target electronic document and the additional document, in the sense that it indicates a degree to which the candidate segment(s) of the additional document provide a basis for understanding of source perspective(s) of the source perspective portion(s). Put another way, in those versions it is not relatedness in the sense of including only matching or similar content but, rather, relatedness in the sense of explanatory relatedness. The additional resource scorer 125 can, for each of a plurality of additional documents identified by the additional resource engine 120, process text corresponding to a source perspective portion identified by source perspective identification engine 115, along with portion(s) of the additional document, to generate a corresponding relatedness score for the additional document. Accordingly, N separate relatedness scores can be generated, with each being generated for a corresponding one of N separate additional documents, based on processing feature(s) of the source perspective portion and feature(s) of the corresponding one of the N separate additional documents. Notably, an additional document that provides an explanation of source perspective(s) can include content that is unrelated to content of source perspective portion(s) of a target electronic document, and that content can be included (e.g., in its entirety) in the content that is processed in determining a relatedness score for the additional document. Further, additional resource engine 120 can search to identify additional resources related to a target electronic document. In some implementations, the additional resource engine 120 can search one or more corpuses of electronic resources (e.g., documents) that include electronic document(s) and/or can access node(s), of a knowledge graph, that are associated with source(s) of the target electronic document.
The additional resource engine 120 can identify documents that: originate from the source(s) of the target electronic document (and optionally restrict searching to the source(s)); explain source perspective portion(s) of the target electronic document; and/or are related to content of the source perspective portion(s) of the target electronic document. In some versions of those implementations, candidate segments can be generated based on portions of additional documents that are identified by the additional resource engine 120. The additional resource scorer 125 can apply, as input across the trained machine learning model, each of the candidate segments and the source perspective portion(s) for each of the source(s) to generate the relatedness scores. For example, assume a news article authored by John Smith includes a portion of “the University of Blue basketball team is the best team in the nation”, and that the portion is identified as a source perspective portion by source perspective identification engine 115. Further assume that 100 news articles, blog posts, and social media posts authored by John Smith are identified by the additional resource engine 120. Various candidate segments from the news articles, blog posts, and social media posts authored by John Smith that explain a source perspective for John Smith with respect to the source perspective portion “the University of Blue basketball team is the best team in the nation” can be generated and processed to identify a basis for understanding of source perspective(s) of the source perspective portion(s) as opposed to only providing more detail on the underlying topic(s) of the source perspective portion(s). In this example, further assume that one of the news articles includes a candidate segment of “the University of Blue basketball team has the best recruiting class ever”, that one of the blog posts includes a candidate segment of “the University of Blue basketball team beat the #1 basketball team in the nation”, and that one of the social media posts includes a candidate segment of “I have courtside seats because of my donation to University of Blue” along with a photo of the seats. The candidate segments from each of these documents (or candidate segments generated based on these portions (e.g., “John Smith is a donor of University of Blue” based on the social media post)) can be processed (e.g., iteratively or as a batch), along with the source perspective portion of “the University of Blue basketball team is the best team in the nation”, using the trained machine learning model, to determine a relatedness score. In some implementations, the relatedness score can be for a corresponding one of the additional document(s). For example, a relatedness score can be determined for each of the 100 news articles, blog posts, and social media posts authored by John Smith. In some implementations, the relatedness score can be for portion(s) of a corresponding one of the additional document(s). For example, a relatedness score can be determined for only candidate segments of a corresponding additional document (e.g., a relatedness score for only “the University of Blue basketball team has the best recruiting class ever” from the news article), as opposed to the corresponding additional document as a whole.
In some further versions of those implementations, the relatedness score can be a total relatedness score for a corresponding additional document, where the total relatedness score is based on a combination of relatedness scores for candidate segments in the corresponding additional document. For example, if the news article authored by John Smith includes candidate segments of “the University of Blue basketball team has the best recruiting class ever” and “the University of Blue basketball team will not lose a game this year”, then a relatedness score for each of these candidate segments can be determined and combined to determine a total relatedness score for the news article. Thus, the relatedness score for the candidate segment of “I have courtside seats because of my donation to University of Blue” (or the candidate segment of “John Smith is a donor of University of Blue” generated based on the social media post) may indicate that it best explains John Smith's perspective (e.g., “the University of Blue basketball team is the best team in the nation”) even though it is not directly related to the topic of John Smith's perspective, whereas the relatedness score for the candidate segment of “the University of Blue basketball team beat the #1 basketball team in the nation” may not indicate that it provides an explanation for John Smith's perspective even though it is directly related to the topic of John Smith's perspective. As another example, again assume a news article authored by John Smith includes a portion of “the University of Blue basketball team is the best team in the nation”, and that the portion is identified as a source perspective portion by source perspective identification engine 115. Further assume that a knowledge graph including nodes for “John Smith”, “University of Blue”, “Example Sports Radio Network”, “basketball team”, “basketball rankings”, and “news article” is identified by the additional resource engine 120. In this example, the node for “John Smith” can be connected to the node for “University of Blue” by edges of “alumnus of” and “donor of”, the node for “John Smith” can also be connected to the node for “Example Sports Radio Network” by an edge of “works for”, the node for “John Smith” can also be connected to the node for “news article” by an edge of “authored by”, the node for “University of Blue” can be connected to the node for “basketball team” by an edge of “has a”, the node for “Example Sports Radio Network” can be connected to a node for “University of Blue” by an edge of “writes about”, the node for “University of Blue” and the node for “basketball rankings” can be connected by an edge of “#10”, and so on to define relationships between source(s) (e.g., John Smith), documents (e.g., news article), and/or other entities (e.g., University of Blue, Example Sports Radio Network, basketball team, and so on). Various candidate segments from the knowledge graph that explain a source perspective for John Smith with respect to the source perspective portion “the University of Blue basketball team is the best team in the nation” can be generated and processed to identify a basis for understanding of source perspective(s) of the source perspective portion(s) as opposed to only providing more detail on the underlying topic(s) of the source perspective portion(s).
In this example, candidate segments can be generated from the knowledge graph, and can include, for example, candidate segments of “John Smith is an alumnus of University of Blue”, “John Smith is a donor of University of Blue”, “University of Blue is #10 in basketball rankings”, and so on. The candidate segment of “John Smith is an alumnus of University of Blue” can be generated based on the node for “John Smith” being connected to the node for “University of Blue” by an edge of “alumnus of”, the candidate segment of “John Smith is a donor of University of Blue” can be generated based on the node for “John Smith” being connected to the node for “University of Blue” by an edge of “donor of”, and so on. Each of these candidate segments can be processed (e.g., iteratively or as a batch), along with the source perspective portion of “the University of Blue basketball team is the best team in the nation”, using the trained machine learning model, to determine a relatedness score for each of the candidate segments and the corresponding additional resources. Thus, the relatedness score for the candidate segment of “John Smith is a donor of University of Blue” may indicate that it best explains John Smith's perspective (e.g., “the University of Blue basketball team is the best team in the nation”) even though it is not directly related to the topic of John Smith's perspective, whereas the relatedness score for the candidate segment of “University of Blue is #10 in basketball rankings” may not indicate that it provides an explanation for John Smith's perspective even though it is directly related to the topic of John Smith's perspective. For each of the additional resources with a relatedness score that satisfies a threshold, additional content determination engine 135 defines a relationship between additional content generated from each of those additional resources and the target electronic document. A relationship between additional content from an additional resource and a target electronic document can be stored in a database, such as database 112. For example, referring again to FIG. 2, additional content from a document that indicates that the author (i.e., “Jim Smith”) is a travel agent that specializes in travel to Thailand can be stored with a relationship to the target electronic document and/or to source perspective portion 205. Storing the relationship in the database 112 can occur prior to a subsequent retrieval of the target electronic document by a computing device of a user, and enable quick and efficient retrieval of the additional content for provisioning of the additional content (for rendering along with the target electronic document). Moreover, storing the relationship in the database 112 enables the relationship to be stored once, but utilized for many subsequent retrievals of the target electronic document. This can conserve significant resources compared to, for example, a scenario in which the relationship was not stored, additional content was not rendered, and manual searches for determining whether the target electronic document included source perspective content instead occurred. In various implementations, additional content determination engine 135 only stores a relationship between the additional content of an additional resource and the target electronic document if the relatedness score satisfies a threshold.
For example, additional resource scorer 125 can determine a relatedness score between additional content from an additional resource and the target electronic document that is a binary score (e.g., “1” for related and “0” for unrelated), and store the relationship if the relatedness score is a “1”. Also, for example, a determined relatedness score can include a range of values, with a higher value indicating that the additional content is more related to the target electronic document than additional content with a lower value (e.g., “0.9” indicating additional content that is more related to a target electronic document than additional content with a score of “0.3”). In such an example, the additional resource scorer 125 can store the relationship if the relatedness score is greater than “0.6”, or other value. In some implementations, additional content determination engine 135 defines a relationship between additional content and the target electronic document as a whole. For example, additional content determination engine 135 can associate document 200 with additional content that is identified by additional resource engine 120. In some implementations, additional content determination engine 135 can define a relationship in database 112 that is between a source perspective portion of the target electronic document and additional content. For example, referring again to FIG. 2, additional content determination engine 135 can define a relationship between portion 205 and additional content from a first additional resource. Further or alternatively, additional content determination engine 135 can define a second relationship between portion 210 and additional content from a second additional resource (or a relationship between portion 210 and additional content from the first additional resource). Each of the defined relationships can be stored in database 112 and later accessed to render the additional content with the target electronic document. In some implementations, additional content can be the entire additional resource. For example, the additional content can be the entire resource such that the entire additional resource can be rendered with the target electronic document, as described herein. In some implementations, additional content can include a portion of the related additional resource. For example, rather than associating an entire additional resource with the target electronic document, a relationship between the target electronic document (or a source perspective portion of the target electronic document) and a phrase from the additional resource that is related to it can be stored in database 112. In some implementations, additional content can include a selectable portion, such as a link, to the additional resource. A link can be associated with, for example, a location of the additional resource. For example, the link can be associated with a web address of an additional resource and by selecting the link, at least a portion of the additional resource can be rendered. Alternatively or additionally, the link can be a reference to a database entry, a directory on a computing device, and/or other link that allows a user to access the specific additional resource. In some implementations, additional content can include a summary of the related additional resource. For example, one or more phrases and/or portions of the additional resource can be utilized to generate a summary of the contents of the additional resource.
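Pulling the preceding pieces together, the sketch below scores candidate segments from additional resources against one source perspective portion, keeps those whose scores clear a threshold (0.6 is just the example value used above), and records the relationship in an in-memory dict standing in for database 112. The resource identifiers, segments, and hard-coded scores are all illustrative assumptions in place of a trained relatedness model:

```python
from typing import Dict, List, Tuple

# (additional resource identifier, candidate segment) pairs; illustrative only.
CANDIDATE_SEGMENTS: List[Tuple[str, str]] = [
    ("social-media-post-1", "John Smith is a donor of University of Blue"),
    ("blog-post-7", "the University of Blue basketball team beat the #1 basketball team in the nation"),
    ("news-article-3", "the University of Blue basketball team has the best recruiting class ever"),
]

# Hard-coded scores stand in for the output of a trained relatedness scorer.
ILLUSTRATIVE_SCORES = {"social-media-post-1": 0.9, "blog-post-7": 0.3, "news-article-3": 0.65}

# Stand-in for database 112: target document URL -> list of stored relationships.
relationship_db: Dict[str, List[dict]] = {}

def store_relationships(target_url: str, perspective_portion: str,
                        threshold: float = 0.6) -> None:
    """Store a relationship for each additional resource whose relatedness
    score for the perspective portion satisfies the threshold."""
    for resource_id, segment in CANDIDATE_SEGMENTS:
        score = ILLUSTRATIVE_SCORES[resource_id]
        if score >= threshold:
            relationship_db.setdefault(target_url, []).append({
                "portion": perspective_portion,
                "resource": resource_id,
                "additional_content": segment,  # could instead be a summary or a link
                "relatedness": score,
            })

store_relationships("https://example.com/john-smith-article",
                    "the University of Blue basketball team is the best team in the nation")
print(relationship_db)
```

Storing the relationship ahead of time, as the description notes, means the lookup at rendering time is a simple keyed read rather than a fresh search.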
In some implementations, an additional resource can include a summary, which can then be identified as the additional content of the additional resource. For example, the additional resource can be an article that includes a summary at the start of the article. Also, for example, an additional resource can include a biography of the author at the end of the document (e.g., a short biography of the reporter at the end of a news story), and the biography can be utilized as the summary. In some implementations, additional content can be generated from two or more additional resources. For example, a first additional resource and a second additional resource can both have relatedness scores that satisfy a threshold. Additional content determination engine 135 can generate additional content that is based on first content from the first additional resource and second content from the second additional resource. For example, additional content can include a source perspective summary that is generated based on content from two or more additional resources. As an example, a first additional resource can include the phrase “Jim Smith is a travel agent specializing in trips to Thailand.” Further, a second additional resource can be a biography of the author and include the phrase “He has been to Thailand over 20 times.” Additional resource scorer 125 can determine relatedness scores for both additional resources that satisfy a threshold. Content from the first additional resource and content from the second additional resource can be utilized to generate additional content (e.g., a source perspective summary) that can be associated with the target electronic document and/or source perspective portions of the target electronic document in database 112. As an example, referring again to FIG. 2, for the portion 205 of document 200, additional content determination engine 135 can identify all additional resources (or portions of additional resources) that have relatedness scores, relative to the portion 205, that satisfy a threshold. Further, additional content determination engine 135 can generate a summary that includes content from each of the additional resources, such as a source perspective summary for portion 205 that indicates “Jim Smith may be biased towards Thailand because the only foreign country that he has been known to visit is Thailand. Further, Mr. Smith is a travel agent that specializes in booking trips to Thailand.” The resulting source perspective summary includes content from a first resource (e.g., the author has traveled to only Thailand) and content from a second resource (e.g., the author is a travel agent specializing in Thailand travel). Additional content renderer 130 causes the computing device of the user to render the additional content that is associated with the target electronic document in the database 112. The additional content is rendered simultaneously with the target electronic document so that the user, upon viewing the target electronic document, can view the additional content. Additional content renderer 130 causes a computing device to render, along with the target electronic document, corresponding additional content determined by the additional content determination engine 135. For example, a user can select a document to view, and the document can be associated with additional content. Additional content renderer 130 can render the selected document (i.e., the target electronic document) along with the associated additional content, as described herein.
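As a rough illustration of composing a source perspective summary from more than one additional resource, the following joins content drawn from two resources into a single summary string. The phrases and the simple concatenation strategy are assumptions for the sketch, not the summarization technique the description commits to:

```python
from typing import List

def build_source_perspective_summary(contents: List[str]) -> str:
    """Join content taken from related additional resources into one summary.
    A production system might paraphrase or compress instead of concatenating."""
    return " ".join(phrase.strip() for phrase in contents if phrase.strip())

summary = build_source_perspective_summary([
    "Jim Smith is a travel agent specializing in trips to Thailand.",
    "He has been to Thailand over 20 times.",
])
print(summary)
```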
In some implementations, the additional content can be related to a particular source perspective portion of the target electronic document and the additional content renderer 130 can render the target electronic document with an indication that additional content is available and relevant to the particular source perspective portion of the target electronic document. The source perspective portion of the target electronic document can be rendered such that it is distinguishable from the rest of the document (e.g., underlined, bold-faced, capitalized, rendered in a different color) so that the reader can recognize that additional content is available for that portion of the document. In some implementations, additional content renderer 130 can render a selectable interface element that indicates additional content relevant to a source perspective is available without rendering the additional content. For example, referring to FIG. 3, a target electronic document is provided with indications that additional content is available without rendering the additional content. As illustrated, the document 300 includes a first source perspective portion 305 that is highlighted in bold face font to indicate that additional content related to the source perspective portion 305 and/or to the entire target electronic document is available. The document 300 further includes a second source perspective portion 310 that is also highlighted to indicate that the statement may include a source perspective and that additional content is available. In some implementations, the first portion 305 can be associated with different additional content than the second portion 310 (i.e., each source perspective portion is associated with different additional resources). In some implementations, multiple source perspective portions of a target electronic document can be associated with the same additional content. For example, additional content can be associated with the entire target electronic document (and not with a specific source perspective portion of the document). Thus, multiple source perspective portions in the document can be rendered with the same additional content that is relevant to all of those portions. In some implementations, the source perspective portions can be selectable and the additional content may be rendered upon selection, as described herein with regard to FIG. 4. In some implementations, additional content renderer 130 can render the target electronic document graphically associated with a source perspective summary that is generated based on identified related additional resources. For example, multiple additional resources can be related to a source perspective portion of the target electronic document and a source perspective summary can be generated based on the content of the additional resources, as previously described. Additional content renderer 130 can render the target electronic document with one or more graphical indications that a summary is available for a source perspective portion of the document. In some implementations, the target electronic document can include one or more selectable portions that, when selected, cause at least a portion of one or more of the associated additional resources to be rendered.
For example, a source perspective portion of the target electronic document can be associated with additional content that is generated from Document A, and a portion of Document A, such as the most relevant portion, can be rendered with the target electronic document. Additionally, the target electronic document can include a link in the additional content to allow a reader to select the link and be provided with the entire Document A or an expanded portion of Document A. As another example, a source perspective summary can be generated based on content of Document A and Document B. Additional content renderer 130 can render the source perspective summary or a portion of the source perspective summary with the target electronic document, and the additional content can include the source perspective summary rendered with links to Document A and Document B. Thus, the user can select one of the links and be provided with the corresponding document and/or a relevant portion of the corresponding document. In some implementations, the source perspective summary or additional content can be rendered in a section of the interface separate from the target electronic document. For example, referring to FIG. 4A, the target electronic document 400 is rendered by additional content renderer 130 in a first section of the interface. Additional content interface 405 includes rendered versions of the additional content and/or of a source perspective summary generated from additional resources. In some implementations, as illustrated, the additional content and/or source perspective summary can be provided with an indication of the source perspective portion of the target electronic document that is associated with the content and/or summary. For example, source perspective portion 410 is aligned with additional content 415 to inform the reader that the additional content 415 is relevant to the source perspective portion 410. As illustrated, the source perspective portion 410 is further highlighted to indicate that the statement may include a source perspective, further informing the reader that additional content is included with the target electronic document. In some implementations, additional and/or alternative indications can be utilized to indicate which source perspective portion is associated with additional content and/or a source perspective summary (e.g., an arrow and/or other indicator extending from the additional content and pointing to a source perspective portion, color coding of additional content and source perspective portions). As another example, source perspective portion 420 is associated with source perspective summary 425 based on alignment of the source perspective summary 425 with the source perspective portion 420. Source perspective summary 425 includes a textual summary as well as a listing of links 435 to documents that were utilized to generate the source perspective summary 425. Thus, the reader can select one of the links to be provided with the corresponding document and/or a portion of the corresponding document that is relevant to the source perspective portion 420. In some implementations, the source perspective summary and/or additional content may be rendered in a separate interface from the target electronic document. For example, referring to FIG. 4B, the same target electronic document as illustrated in FIG. 4A (i.e., document 400) is rendered without the additional content and/or the source perspective summary rendered with the document.
As illustrated, a cursor 445 is hovering over source perspective portion 410 and a pop-up window 440 is rendered upon hovering over (or selecting) the source perspective portion. The pop-up window provides additional content that is associated with the source perspective portion 410. In some implementations, a source perspective summary can be rendered in the same manner. For example, the pop-up window can include the source perspective summary alone, or can include both the summary and additional selectable portions that, when selected, render at least a portion of one or more of the additional resources that were utilized to generate the source perspective summary (e.g., a pop-up window that includes the same information and links as additional content 425 of FIG. 4A). In various implementations, additional content renderer 130 can be implemented (in whole or in part) by a corresponding one of the application(s) 107, can be installed as an extension of a corresponding one of the application(s) 107, and/or can interface (e.g., via an API) with a corresponding one of the application(s) 107. In response to accessing a given target electronic document via one of the application(s) 107, the additional content renderer 130 can access database 112 to determine whether the given target electronic document includes an entry in the database 112. For example, the database 112 can include an index of the entries based on URLs and/or other identifiers, and the additional content renderer 130 can search the index to determine whether an entry is present for the given target electronic document. If so, the additional content renderer can modify rendering of the given target electronic document, utilizing one or more techniques described herein. For example, the entry can include an indication of the source perspective portion(s) of the given electronic document, and such an indication can be utilized by the additional content renderer 130 to alter those source perspective portions such that they are highlighted, bolded, or otherwise demarcated as a cue to the user that they may potentially include a source perspective. Also, for example, the entry can include an indication of additional content related to the source perspective portion(s) of the given electronic document, and the additional content renderer can cause the additional content to be automatically rendered, or cause it to be rendered in response to certain user input (e.g., a selection or hovering over a source perspective portion). The additional content renderer 130 can modify the rendering of the target electronic document to cause rendering of the additional content and/or can monitor for certain user input and cause the rendering to occur in response to the certain user input. FIG. 5 illustrates a flowchart of an example method for rendering additional content related to a biased portion of a document. The steps of FIG. 5 can be performed by one or more processors, such as one or more processors of a client device. Other implementations may include additional steps beyond those illustrated in FIG. 5, can perform step(s) of FIG. 5 in a different order and/or in parallel, and/or may omit one or more of the steps of FIG. 5. The steps of FIG. 5 are described with respect to a source that is an author of a target electronic document. However, implementations of FIG. 5 can be performed with respect to other source(s) such as a publisher, a creator, or a combination of a publisher, creator, and/or author. Although FIG.
5 is described herein as rendering additional content related to a biased portion of a document, it should be understood that this is for exemplary purposes and is not meant to be limiting. Further, it should be understood that the steps of FIG. 5 can be performed in rendering additional content related to any source perspective of a document for any number of different sources of the document. At step 505, a target electronic document and an author of the document are identified. The target electronic document can be identified based on a user navigating to the document. For example, the user can utilize one or more components of computing device 105 to select a document to view. The target electronic document can additionally or alternatively be identified as part of a crawling procedure, or based on being previously crawled and identified by the crawling procedure. Based on the content of the document and/or based on metadata associated with the document, one or more components can determine an author that generated the target electronic document. For example, a document can include a header and/or footnote that identifies a person as the author. Also, for example, metadata associated with the document can include author information. At step 510, the target electronic document is processed to determine a biased portion of the document. The biased portion can be determined by a component that shares one or more characteristics with source perspective identification engine 115. For example, source perspective identification engine 115 can identify a portion as biased based on term(s) included in the portion (e.g., statements with “best,” “greatest,” “I think,” etc.). Also, for example, source perspective identification engine 115 can additionally or alternatively determine a portion of the document is biased by processing the portion utilizing a machine learning model, generating a measure based on the processing, and determining the measure satisfies a threshold that indicates likely bias. At step 515, one or more corpuses are searched to identify a plurality of additional resources that are related to the author. The additional resources can be identified by a component that shares one or more characteristics with additional resource engine 120. The additional resources can include, for example, other documents generated by the author, other documents that mention the author, documents related to others that are mentioned by the author, and/or other resources that have a relation to the author. At step 520, features of each of the additional resources and the biased portion of the target electronic document are processed to generate a relatedness score for each of the additional resources. The relatedness score can be generated by a component that shares one or more characteristics with additional resource scorer 125. For example, additional resource scorer 125 can provide the biased portion of the target electronic document and one or more of the resources as input to a trained machine learning model and utilize the output of the trained machine learning model to generate a relatedness score between the biased portion and the additional resource. In some implementations, additional resource scorer 125 can generate a relatedness score that is a binary score (e.g., “1” for related, “0” for unrelated). In some implementations, additional resource scorer 125 may generate a relatedness score that is non-binary and that is representative of a level of relatedness between the additional resource and the biased portion.
At step 525, relationships between additional content generated from the additional resources and the biased portion of the target electronic document are stored in a database for those additional resources with relatedness scores that satisfy a threshold. The relationships can be stored in a database that shares one or more characteristics with database 112. In some implementations, the relationship can be between the entire target electronic document and additional content generated from one or more additional resources. In some implementations, the relationship may be between a particular biased portion of the target electronic document and the additional content. At step 530, one or more components cause a computing device that is rendering the target electronic document to render the additional content simultaneously with the target electronic document. In some implementations, a component that shares one or more characteristics with additional content renderer 130 can cause the computing device to render the additional content with the target electronic document. For example, additional content renderer 130 can cause the client device 105 to render the additional content along with the target electronic document, such as illustrated in FIG. 4A. In some implementations, additional content renderer 130 can cause the client device 105 to render the target electronic document with selectable portions associated with biased portions such that, upon selecting the selectable portion, the corresponding additional content is rendered in a separate interface, as illustrated in FIG. 4B. FIG. 6 is a block diagram of an example computing device 610 that may optionally be utilized to perform one or more aspects of techniques described herein. Computing device 610 typically includes at least one processor 614 which communicates with a number of peripheral devices via bus subsystem 612. These peripheral devices may include a storage subsystem 624, including, for example, a memory subsystem 625 and a file storage subsystem 626, user interface output devices 620, user interface input devices 622, and a network interface subsystem 616. The input and output devices allow user interaction with computing device 610. Network interface subsystem 616 provides an interface to outside networks and is coupled to corresponding interface devices in other computing devices. User interface input devices 622 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device 610 or onto a communication network. User interface output devices 620 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device 610 to the user or to another machine or computing device.
Storage subsystem 624 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 624 may include the logic to perform selected aspects of the methods described herein, as well as to implement various components depicted in FIG. 1. These software modules are generally executed by processor 614 alone or in combination with other processors. Memory 625 used in the storage subsystem 624 can include a number of memories including a main random access memory (RAM) 630 for storage of instructions and data during program execution and a read only memory (ROM) 632 in which fixed instructions are stored. A file storage subsystem 626 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 626 in the storage subsystem 624, or in other machines accessible by the processor(s) 614. Bus subsystem 612 provides a mechanism for letting the various components and subsystems of computing device 610 communicate with each other as intended. Although bus subsystem 612 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses. Computing device 610 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 610 depicted in FIG. 6 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computing device 610 are possible having more or fewer components than the computing device depicted in FIG. 6. As described herein (e.g., with respect to FIGS. 4A and 4B), an additional content renderer (e.g., additional content renderer 130 of FIG. 1) can cause a computing device of a user to render additional content that is associated with a target electronic document. The additional content can be determined by an additional content determination engine (e.g., additional content determination engine 135 of FIG. 1), and can be rendered simultaneously with the target electronic document so that the user, upon viewing the target electronic document, can also view the additional content. In some implementations, the additional content can be related to one or more source perspective portions of the target electronic document, and the additional content renderer can render the target electronic document with an indication that additional content is available and relevant to one or more of the source perspective portions of the target electronic document. The one or more source perspective portions of the target electronic document can be rendered such that they are distinguishable from the rest of the document (e.g., underlined, bold-faced, capitalized, encircled, rendered in a different color, etc.) so that the reader can recognize that additional content is available for one or more of the source perspective portions of the target electronic document. 
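One way to picture the renderer's role described above is a lookup of the document's identifier in the stored relationships followed by lightweight markup of the flagged portions. The dict-based store and the HTML-style highlighting below are assumptions made for the sketch, not the rendering mechanism of additional content renderer 130:

```python
from typing import Dict, List

# Stand-in for the database entries keyed by document URL (illustrative data).
STORED_ENTRIES: Dict[str, List[dict]] = {
    "https://example.com/thailand-article": [
        {"portion": "Thailand is the best country in Asia to visit.",
         "additional_content": "Jim Smith is a travel agent that specializes in trips to Thailand."},
    ],
}

def render_with_indications(url: str, document_text: str) -> str:
    """Wrap each stored source perspective portion so it is visually
    distinguishable, and attach its additional content as a hover note."""
    for entry in STORED_ENTRIES.get(url, []):
        portion = entry["portion"]
        if portion in document_text:
            marked = f"<mark title=\"{entry['additional_content']}\">{portion}</mark>"
            document_text = document_text.replace(portion, marked)
    return document_text

print(render_with_indications(
    "https://example.com/thailand-article",
    "Thailand is the best country in Asia to visit. Flights are cheap in May."))
```

In an actual deployment the lookup would hit database 112 (e.g., via a URL index) rather than an in-memory dict, and the markup would follow whatever the rendering application supports.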
In some implementations, the additional content renderer can render a selectable interface element that indicates additional content relevant to one or more of the source perspective portions is available without rendering the additional content. In some versions of those implementations, one or more of the source perspective portions of the target electronic document can be selectable and the additional content may be rendered upon selection. For example, as described with respect to FIG. 3, a target electronic document can be provided along with indications that additional content is available without rendering the additional content simultaneously with the target electronic document. In some implementations, multiple source perspective portions of a target electronic document can be associated with the same additional content. For example, additional content can be associated with a publisher and/or creator of the entire target electronic document (and not to a specific source perspective portion of the document penned by an author). Thus, the target electronic document can be rendered along with multiple source perspective portions that are relevant to an author, publisher, and/or creator of the target electronic document. In some implementations, the additional content renderer can render the target electronic document graphically along with a source perspective summary for an author, publisher, and/or creator of the target electronic document, where the source perspective summary is generated based on identified additional resources that are related to one or more source perspective portions of the target electronic document. For example, multiple additional resources can be related to a source perspective portion of the target electronic document penned by an author, a source perspective of a publisher that published the target electronic document, and/or a source perspective of a creator that collated the target electronic document, and a corresponding source perspective summary for the author, publisher, and/or creator can be generated based on the content of the additional resources. The additional content renderer can render the target electronic document along with one or more graphical indications that a corresponding source perspective summary is available for one or more biased portions of the document penned by an author, a source perspective of a publisher that published the target electronic document, and/or a source perspective of a creator that collated the target electronic document. In some implementations, a source perspective summary can be generated based on features and/or content of a plurality of additional resources that are related to the target electronic document. The plurality of additional resources can be stored in one or more corpuses that are searchable. The additional content renderer can render the source perspective summary or a portion of the source perspective summary along with the target electronic document, and the source perspective summary can be rendered with links to one or more of the plurality of additional resources used in generating the source perspective summary. Thus, the user can select one of the links and be provided with a corresponding one of the plurality of additional resources. In some implementations, the source perspective summary can be rendered in a separate section of the same interface as the target electronic document (e.g., as described with respect to FIGS. 4A and 7A).
In other implementations, the source perspective summary can be rendered in a separate interface from the target electronic document (e.g., as described with respect to FIGS. 4B and 7B). Moreover, the source perspective summary can provide an explanation of a perspective for at least one source of a target electronic document. The at least one source can include an author of the target electronic document, the publisher of the target electronic document, and/or a creator of the target electronic document. The author of the target electronic document can be the individual that penned content of the target electronic document and/or generated other portions (e.g., images) of the target electronic document. For example, a person who pens a news article or creates a caricature for the news article can be considered an author of the news article. The publisher of the target electronic document can be, for example, a website, magazine, news outlet, corporation, and/or other entities that host, prepare, and/or facilitate dissemination of the target electronic document. For example, a news agency that prepares and/or publishes a news article can be considered the publisher of the news article. The creator of the target electronic document can be one or more individuals that collated content of the document (e.g., re-publishes the target electronic document on a corresponding web site, shares the target electronic document via a social media account associated with the creator, and/or other forms of collating the target electronic document), but that didn't necessarily originally author and/or originally publish the target electronic document. For example, a second news agency that publishes a news article (or a slight variation thereof) that was originally published and/or prepared by a first news agency can be considered the creator of the news article, even though the second news agency did not originally author and/or originally publish the news article. Further, the source perspective summary for the at least one source of the target electronic document can be based on content and/or additional resource features of a plurality of additional resources that are related to the at least one source and/or related to a source perspective portion of the target electronic document. In other words, the source perspective summary for a source of the target electronic document can be based on other related documents penned by the author (when the author is the source), other related documents published by the publisher (when the publisher is the source), and/or other related documents collated by the creator (when the creator is the source). For example, an author perspective summary for an author can provide an explanation of a source perspective portion (e.g., biased portion, opinionated portion, and/or other subjective portion) of a target electronic document with respect to the author, and can be generated based on features of other documents penned by the author that include content related to the biased portion of the target electronic document. As another example, a publisher perspective summary for a publisher can provide an explanation of a source perspective portion (e.g., biased portion, opinionated portion, and/or other subjective portion) of a target electronic document with respect to the publisher, and can be generated based on features of other documents published by the publisher that include content related to the target electronic document.
In this example, the other documents can be penned by the same author or different authors. As yet another example, a creator perspective summary for a creator can provide an explanation of a source perspective portion (e.g., biased portion, opinionated portion, and/or other subjective portion) of a target electronic document with respect to the creator, and can be generated based on features of other documents collated by the creator that include content related to the target electronic document. Accordingly, the source perspective summary can provide an explanation of perspective from the at least one source of the target electronic document based on other documents associated with the at least one source, as opposed to being based on other documents that include content related to the source perspective portions of the target electronic document, but are not associated with the at least one source. In various implementations, the additional resources that are related to the at least one source and/or related to the source perspective portion of the target electronic document can be identified by searching one or more corpuses. The one or more corpuses can include different types of additional resources, such as news articles, blog posts, social media posts, and/or other types of documents. In some implementations, multiple source perspective summaries can be generated for the at least one source for each of the different types of additional resources. In some versions of those implementations, a first portion of the source perspective summary for the at least one source can be generated based on a first type of additional resources and a second portion of the source perspective summary for the at least one source can be generated based on a second type of additional resources. For example, a first portion of the source perspective summary for a publisher can be generated based on features of news articles that are related to a target news article published by the publisher, and a second portion of the source perspective summary for the publisher can be generated based on social media posts and/or interactions therewith for a social media account associated with the publisher. In some implementations, a single source perspective summary can be generated for the at least one source. In some versions of those implementations, a source perspective of the at least one source can be weighted by one or more weighting factors based on a type of the additional resources. For example, if an author is a travel agent that pens an article about international travel destinations and that includes a source perspective portion of “Thailand is the best country in Asia to visit”, then the article can be weighted by a weighting factor of 1.0 to indicate the article is highly indicative of the author's perspective. In contrast, if the author pens a social media post that includes a source perspective portion of “Thailand is the coolest country in Asia to visit”, then the social media post can be weighted by a weighting factor of 0.7 to indicate the social media post is indicative of the author's perspective, but not as indicative of the author's perspective as the news article. In other implementations, each type of the additional resources can be weighted equally.
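As a non-limiting illustration of the weighting described above, the following Python sketch aggregates per-resource perspective scores using type-dependent weighting factors; the weight values mirror the 1.0 and 0.7 factors in the example and, like the field names, are illustrative assumptions only.

```python
# Hypothetical weighting factors per resource type (illustrative values only).
TYPE_WEIGHTS = {"news_article": 1.0, "social_media_post": 0.7, "blog_post": 0.8}


def weighted_perspective_score(resources):
    """Aggregate per-resource perspective scores (e.g., +1 positive, -1 negative)
    into a single weighted score for a source."""
    weighted_sum = 0.0
    total_weight = 0.0
    for resource in resources:
        # Fall back to equal weighting for unknown resource types.
        weight = TYPE_WEIGHTS.get(resource["type"], 1.0)
        weighted_sum += weight * resource["perspective_score"]
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0


# Example: the travel article counts more heavily than the social media post.
print(weighted_perspective_score([
    {"type": "news_article", "perspective_score": 1.0},
    {"type": "social_media_post", "perspective_score": 1.0},
]))
```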
Moreover, the source perspective summary can include an explanation of the source perspective for the at least one source of the target electronic document that informs a user consuming the target electronic document as to a perspective of the at least one source (e.g., an author that penned the target electronic document, a publisher that published the target electronic document, and/or a creator that collated the target electronic document). In some implementations, the source perspective summary includes various portions for the at least one source. Each of these portions of the source perspective summary can include different manners of explaining the perspective for the at least one source of the target electronic document. In some versions of those implementations, a first portion of the source perspective summary can be presented to the user consuming the target electronic document as one or more natural language explanations (e.g., words, phrases, and/or sentences) generated based on features of additional resources related to the target electronic document. In additional and/or alternative versions of those implementations, a second portion of the source perspective summary can be presented to the user consuming the target electronic document as one or more source perspective metrics generated based on features of additional resources related to the target electronic document and from the same source(s). The one or more source perspective metrics can include, for example, one or more source perspective percentages that indicate how often the at least one source portrays content included in source perspective portion(s) of the target electronic document in a particular manner (e.g., positively, negatively, and/or other manner), one or more source perspective statistics (e.g., mean, median, standard deviation, and/or other source perspective statistics) that indicate how content included in source perspective portion(s) of the target electronic document includes a given perspective compared to source perspective portions of other documents from the at least one source, one or more visual representations (e.g., pie graph, bar graph, and/or other visual representations) that indicate how often the at least one source portrays content included in the source perspective portions of the target electronic document in a particular manner, and/or other source perspective metrics. In some additional and/or alternative versions of those implementations, a third portion of the source perspective summary can include a listing of links to one or more of the additional resources that were utilized to generate a natural language explanation for the source perspective summary and/or utilized to generate source perspective metrics for the source perspective summary. Each link included in the listing of links can, when selected, cause a computing device to navigate to a corresponding additional resource (or a particular portion of the corresponding additional resource associated with the explanation for the source perspective) that was utilized to generate the natural language explanation for the source perspective summary and/or utilized to generate the source perspective metrics for the source perspective summary. The links in the listing of links can also be represented as hyperlinked text that, when selected, causes the computing device to navigate to the corresponding additional resource.
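As a non-limiting illustration, the following Python sketch computes source perspective percentages and statistics of the kind described above from per-resource portrayal labels and scores; the input format and field names are assumptions for illustration.

```python
from statistics import mean, median, stdev


def perspective_metrics(portrayals):
    """Compute illustrative source perspective metrics from per-resource
    portrayal labels and sentiment scores,
    e.g. [{"label": "positive", "score": 0.9}, ...]."""
    total = len(portrayals)
    positive = sum(1 for p in portrayals if p["label"] == "positive")
    negative = sum(1 for p in portrayals if p["label"] == "negative")
    scores = [p["score"] for p in portrayals]
    return {
        "positive_pct": 100.0 * positive / total if total else 0.0,
        "negative_pct": 100.0 * negative / total if total else 0.0,
        "mean": mean(scores) if scores else 0.0,
        "median": median(scores) if scores else 0.0,
        "stdev": stdev(scores) if len(scores) > 1 else 0.0,
    }
```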
Thus, the source perspective summary can include various explanations of different perspectives for the at least one source of the target electronic document. In some implementations, the source perspective summary may be rendered in a separate interface from the target electronic document. For example, referring to FIG. 7A, a target electronic document 700 is rendered by an additional content renderer (e.g., additional content renderer 130 of FIG. 1) in a first section of an interface. The target electronic document 700 of FIG. 7A is entitled “Basketball Season Tips Off!” authored by “John Smith”, and published by “Example Sports Radio Network”. Notably, the target electronic document 700 is a news article that includes a bias towards University of Blue's basketball team as indicated by biased portions 710 and 720. In some implementations, as illustrated, the target electronic document 700 can be provided with an indication of the source perspective portions 710 and 720 of the target electronic document 700. For example, source perspective portions 710 and 720 are bolded to inform the user consuming the target electronic document that statements included in the target electronic document 700 may be influenced by a source's perspective. Other techniques for graphically demarcating the source perspective portions 710 and 720 can be utilized, and can include, for example, encircling, underlining, highlighting, and/or other manners of graphically demarcating the source perspective portion(s) of the target electronic document. Additional content interface 705 includes rendered versions of a Publisher Perspective Summary 715 and an Author Perspective Summary 725 generated based on features of additional resources related to content of the source perspective portions 710 and 720. In some implementations, the additional content interface 705 can be rendered simultaneously with the target electronic document 700. In other implementations, the additional content interface 705 can be rendered responsive to receiving an indication to view the additional content interface 705. For example, a user may click, highlight, underline, or otherwise select one or more of the source perspective portions 710 and 720, and the additional content interface 705 can be rendered responsive to the selection. As another example, the target electronic document can be rendered along with one or more selectable elements that, upon selection, render the additional content interface 705 along with the target electronic document 700, and that, upon an additional selection, remove the additional content interface 705. In this manner, the additional content interface 705 can be toggled on and off for consumption by a user. For the Publisher Perspective Summary 715, an additional content determination engine (e.g., additional content determination engine 135 of FIG. 1) can search one or more corpuses to identify additional resources related to content of the source perspective portions 710 and 720. The identified additional resources can include (or be restricted to) other documents prepared and/or published by Example Sports Radio Network, social media posts and/or interactions of a social media account associated with Example Sports Radio Network, and/or other additional resources associated with Example Sports Radio Network. The features of the identified additional resources enable a source perspective identification engine (e.g., source perspective identification engine 115 of FIG.
1) to generate a natural language explanation 733A for the Publisher Perspective Summary 715 that explains Example Sports Radio Network's perspective on the content, such as biases, opinions, assumptions, predispositions, and/or other perspectives (e.g., Example Sports Radio specializes in publishing articles about University of Blue sports). For example, the natural language explanation 733A for the Publisher Perspective Summary 715 can be generated based on content from additional resources that are related to the source perspective portions 710 and 720 and/or explain the biases, opinions, and/or other subjective measures of Example Sports Radio Network. Further, the Publisher Perspective Summary 715 can include a listing of links 735A to the additional resources that were utilized to generate the natural language explanation 733A for the Publisher Perspective Summary 715. The listing of links 735A includes a first link to Document A, a second link to Document B, and a selectable interface element that, when selected, enables a user consuming the target electronic document 700 to view more of the additional resources utilized in generating the natural language explanation 733A. In some implementations, the links included in the listing of links 735A include links that provide the greatest explanatory extent in generating the natural language explanation 733A (e.g., as described with respect to additional resource scorer 125). Thus, the user can select one of the links to be provided with the corresponding document and/or a particular portion of the corresponding document that is relevant to the source perspective portions 710 and 720. Further, the Publisher Perspective Summary 715 can also include publisher perspective metrics 737A (e.g., shown as publisher perspective percentages in FIG. 7A) for the publisher Example Sports Radio Network. The publisher perspective metrics 737A indicate that 90% of documents prepared and/or published by Example Sports Radio Network, social media posts shared/liked by a social media account associated with Example Sports Radio Network, and/or other features of additional resources related to the source perspective portions 710 and 720 portray the University of Blue basketball team in a positive manner, whereas only 10% portray the University of Blue basketball team in a negative manner. Further, the publisher perspective metrics 737A can be rendered along with corresponding hyperlinked text 739A. As shown in FIG. 7A, the hyperlinked text 739A enables a user to navigate to additional resources utilized in generating the publisher perspective metrics 737A for Example Sports Radio Network's positive portrayal of University of Blue (e.g., a news article titled “University of Blue is The Greatest”) and for Example Sports Radio Network's negative portrayal of University of Blue (e.g., a social media post of “University of Blue Loses . . . Again”). Although the corresponding hyperlinked text 739A is depicted as hyperlinked text to a single additional resource for each of the publisher perspective metrics 737A, it should be understood that this is for exemplary purposes and is not meant to be limiting. For example, the corresponding hyperlinked text 739A can also be rendered with a selectable interface element that, when selected, enables a user consuming the target electronic document 700 to view more of the additional resources utilized in generating the publisher perspective metrics 737A.
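As a non-limiting illustration of selecting the links that provide the greatest explanatory extent, the following Python sketch ranks scored additional resources and returns the top links together with a flag for a "view more" element; the scoring field and the value of k are assumptions for illustration.

```python
def top_links(scored_resources, k=2):
    """Return the k links whose resources contributed most to the natural
    language explanation, plus a flag indicating whether more resources exist."""
    ranked = sorted(scored_resources, key=lambda r: r["explanatory_score"], reverse=True)
    shown = [r["url"] for r in ranked[:k]]
    has_more = len(ranked) > k
    return shown, has_more


# Example usage with hypothetical documents and scores.
links, more = top_links([
    {"url": "https://example.com/document-a", "explanatory_score": 0.92},
    {"url": "https://example.com/document-b", "explanatory_score": 0.81},
    {"url": "https://example.com/document-d", "explanatory_score": 0.40},
])
```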
For the Author Perspective Summary 725, the additional content determination engine can search one or more corpuses to identify additional resources related to content of the source perspective portions 710 and 720. The identified additional resources can include (or be restricted to) other documents penned by John Smith, social media posts and/or interactions of a social media account associated with John Smith, and/or other additional resources associated with John Smith. The features of the identified additional resources enable a source perspective identification engine to determine and generate a natural language explanation 733B for the Author Perspective Summary 725 explaining John Smith's perspective on the content, such as biases, opinions, assumptions, predispositions, and/or other perspectives (e.g., John Smith is a distinguished alumnus of University of Blue, a booster for University of Blue sports, and frequently attends University of Blue sporting events). For example, the natural language explanation 733B for the Author Perspective Summary 725 can be generated based on content from additional resources that are related to the source perspective portions 710 and 720 and/or explain the biases, opinions, and/or other subjective measures of John Smith. Further, the Author Perspective Summary 725 can include a listing of links 735B to the additional resources that were utilized to generate the natural language explanation 733B for the Author Perspective Summary 725. The listing of links 735B includes a first link to Document A, a second link to Document C, and a selectable interface element that, when selected, enables a user consuming the target electronic document 700 to view more of the additional resources utilized in generating the natural language explanation 733B. In some implementations, the links included in the listing of links 735B include links that provide the greatest explanatory extent in generating the natural language explanation 733B (e.g., as described with respect to additional resource scorer 125). Thus, the user can select one of the links to be provided with the corresponding document and/or a particular portion of the corresponding document that is relevant to the source perspective portions 710 and 720. Further, the Author Perspective Summary 725 can also include author perspective metrics 737B (e.g., shown as author perspective percentages in FIG. 7A) for the author John Smith. The author perspective metrics 737B indicate that 95% of documents penned by John Smith, social media posts shared/liked by a social media account associated with John Smith, and/or other features of additional resources related to the source perspective portions 710 and 720 portray the University of Blue basketball team in a positive manner, whereas only 5% portray the University of Blue basketball team in a negative manner. Further, the author perspective metrics 737B can be rendered along with corresponding hyperlinked text 739B. As shown in FIG. 7A, the hyperlinked text 739B enables a user to navigate to additional resources utilized in generating the author perspective metrics 737B for John Smith's positive portrayal of University of Blue (e.g., a news article titled “University of Blue is Unstoppable”) and for John Smith's negative portrayal of University of Blue (e.g., a social media post of “University of Blue Has Worst Year in Team History”).
Although the corresponding hyperlinked text 739B is depicted as hyperlinked text to a single additional resource for each of the author perspective metrics 737B, it should be understood that this is for exemplary purposes and is not meant to be limiting. For example, the corresponding hyperlinked text 739B can also be rendered with a selectable interface element that, when selected, enables a user consuming the target electronic document 700 to view more of the additional resources utilized in generating the author perspective metrics 737B. Notably, in the example of FIG. 7A, the natural language explanation 733A for the Publisher Perspective Summary 715 is generated based on features (e.g., content, metadata, and/or other features) included in at least Document A and Document B, and the natural language explanation 733B for the Author Perspective Summary 725 is generated based on features (e.g., content, metadata, and/or other features) included in at least Document A and Document C. Thus, Document A is an additional resource that is published by Example Sports Radio Network and that is also authored by John Smith. However, Document B is an additional resource that is published by Example Sports Radio Network, but not authored by John Smith. Further, Document C is an additional resource that is authored by John Smith, but not published by Example Sports Radio Network. Even though Document B and Document C in FIG. 7A do not include the same author (e.g., John Smith) and the same publisher (e.g., Example Sports Radio Network) as the target electronic document 700 (like Document A in FIG. 7A), an additional content determination engine (e.g., additional content determination engine 135 of FIG. 1) can still identify each of these additional resources because they include content related to the source perspective portions 710 and 720 for the respective sources of the target electronic document. Moreover, implementations that provide natural language explanations in the source perspective summary (e.g., natural language explanation 733A based on features of Document A and Document B, and natural language explanation 733B based on features of Document A and Document C), can result in a reduced quantity of user inputs (or even no user inputs) being needed to identify additional resources that explain source perspective portions of electronic documents. Those implementations additionally or alternatively result in conservation of client and/or network resources by rendering the natural language explanation along with a target electronic document in a single interface, and also allow “one-click” navigation to the additional resources utilized in generating the natural language explanation. Absent these techniques, further user input to conduct additional searches, opening of new tabs based on that search, and/or navigating to additional interfaces would be required. Although FIG. 7A is depicted as including only the Publisher Perspective Summary 715 and the Author Perspective Summary 725, it should be understood that this is for exemplary purposes and not meant to be limiting. For example, if the target electronic document 700 was also collated by a creator (e.g., as described with respect to FIG. 7B), then the additional content interface 705 could additionally and/or alternatively include a creator perspective summary. The creator perspective summary can be generated and rendered in any manner described herein (e.g., with respect to FIGS. 1, 4A, 4B, 7A, and 7B).
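As a non-limiting illustration of how additional resources associated with a particular source could be identified when generating the summaries described above, the following Python sketch filters a searchable corpus by source association and relatedness; the dictionary keys and the relatedness predicate are assumptions for illustration.

```python
def find_source_related_resources(corpus, source, portion_texts, is_related):
    """Identify additional resources that are associated with a given source
    (author, publisher, or creator) and related to the content of the source
    perspective portions.

    `corpus` is an iterable of dicts with "author", "publisher", "creator",
    "social_account", and "text" keys; `is_related` is any relatedness
    predicate (e.g., keyword overlap or embedding similarity)."""
    matches = []
    for resource in corpus:
        associated = source in (
            resource.get("author"),
            resource.get("publisher"),
            resource.get("creator"),
            resource.get("social_account"),
        )
        related = any(is_related(resource.get("text", ""), text) for text in portion_texts)
        if associated and related:
            matches.append(resource)
    return matches
```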
Moreover, although the Publisher Perspective Summary 715 and the Author Perspective Summary 725 of FIG. 7A are depicted as including various natural language explanations, source perspective metrics, and listings of links, it should be understood that the Publisher Perspective Summary 715 and/or the Author Perspective Summary 725 can include one or more of natural language explanations, source perspective metrics, listings of links, and/or any combination thereof to inform the user consuming the target electronic document of potential biases and explanations for the potential perspectives included in the target electronic document 700 and/or potential perspectives of a publisher and/or creator of the target electronic document 700. Further, in some implementations, the target electronic document 700 can include metadata indicative of source perspective(s) included in the target electronic document 700. In some implementations, the source perspective summary and/or additional content can be rendered in a separate interface from the target electronic document. For example, referring to FIG. 7B, the same target electronic document as illustrated in FIG. 7A (i.e., target electronic document 700) is rendered without the additional content interface 705. Further, the target electronic document 700 is also rendered as a collated version of the target electronic document 700 having a creator of “The Example-Journal”. As illustrated, a cursor 745A is hovering over the creator “The Example-Journal”, and a Creator Perspective Summary 750 is rendered as a pop-up window upon hovering over (or selecting) the creator The Example-Journal. In comparison to the source perspective summaries of FIG. 7A, the Creator Perspective Summary 750 of FIG. 7B only includes creator perspective metrics 737C (e.g., shown as creator perspective percentages in FIG. 7B) that indicate 45% of documents collated by The Example-Journal, social media posts shared/liked by a social media account associated with The Example-Journal, and/or other features of additional resources related to the source perspective portions 710 and 720 portray the University of Blue basketball team in a positive manner, whereas 55% portray the University of Blue basketball team in a negative manner. Further, the creator perspective metrics 737C can be rendered along with corresponding hyperlinked text 739C. As shown in FIG. 7B, the hyperlinked text 739C enables a user to navigate to additional resources utilized in generating the creator perspective metrics 737C for The Example-Journal's positive portrayal of University of Blue (e.g., a news article titled “University of Blue Lands the Nation's Best Recruit”) and for The Example-Journal's negative portrayal of University of Blue (e.g., another news article titled “University of Blue Finishes as the Worst Team in the State”). Although the corresponding hyperlinked text 739C is depicted as hyperlinked text to a single additional resource for each of the creator perspective metrics 737C, it should be understood that this is for exemplary purposes and is not meant to be limiting. For example, the corresponding hyperlinked text 739C can also be rendered with a selectable interface element that, when selected, enables a user consuming the target electronic document 700 to view more of the additional resources utilized in generating the creator perspective metrics 737C.
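As a non-limiting illustration of the hover-triggered pop-up behavior described above, the following Python sketch returns the content that could be rendered when the cursor hovers over a demarcated element; the identifiers and the structure of the pre-computed summaries are assumptions for illustration.

```python
def on_hover(target, summaries):
    """Return pop-up content for a hovered element, or None if no source
    perspective summary is available for that element.

    `target` identifies what is being hovered (e.g., "creator" or a portion id)
    and `summaries` maps such identifiers to pre-computed summary data."""
    summary = summaries.get(target)
    if summary is None:
        return None  # nothing to render for this element
    return {
        "metrics": summary["metrics"],             # e.g., {"positive_pct": 45, "negative_pct": 55}
        "links": summary["links"][:1],             # a single linked resource, as depicted
        "has_view_more": len(summary["links"]) > 1,
    }
```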
Similarly, a cursor 745B is hovering over the first source perspective portion 710, and the Author Perspective Summary 725 is rendered as a pop-up window upon hovering over (or selecting) the first source perspective portion 710. In comparison to the Author Perspective Summary 725 of FIG. 7A, the Author Perspective Summary 725 of FIG. 7B only includes the author perspective metrics 737B (e.g., shown as author perspective percentages in FIG. 7B) and the corresponding hyperlinked text 739B. Although FIG. 7B is depicted as only including the Author Perspective Summary 725 and the Creator Perspective Summary 750, it should be understood that this is for exemplary purposes and not meant to be limiting. For example, upon hovering over (or selecting) the publisher Example Sports Radio Network, the Publisher Perspective Summary 715 can be rendered as a pop-up window. Moreover, although the Author Perspective Summary 725 and the Creator Perspective Summary 750 of FIG. 7B are depicted as including only source perspective percentages and related links, it should be understood that the Author Perspective Summary 725 and/or the Creator Perspective Summary 750 can also include one or more of natural language explanations, other source perspective metrics, other links and/or listings of links, and/or any combination thereof to inform the user consuming the target electronic document of potential perspectives and explanations for the potential perspectives included in the target electronic document 700 and/or potential perspectives of a publisher and/or creator of the target electronic document 700. In some additional and/or alternative implementations, a user can select other content of the target electronic document 700 that is in addition to the source perspective portions 710 and 720 (e.g., via clicking, highlighting, underlining, or otherwise selecting). In some versions of those implementations, a user interface element can be rendered along with the target electronic document 700 in response to the user selecting the other content, and the user interface element, when selected, can cause a source perspective identification engine to analyze the selected other content. In some further versions of those implementations, the selected other content can be analyzed to determine whether the selected other content potentially includes a perspective that is associated with an author, publisher, and/or creator of the target electronic document 700 and that is related to the selected other content (e.g., using source perspective identification engine 115) and/or to determine whether there is additional document(s) that explain any source perspective included in the selected other content (e.g., using additional resource scorer 125). If the selected other content includes a source perspective and/or the source perspective can be explained, another source perspective summary can be rendered via additional content interface 705 and/or via a new interface (e.g., the pop-up window of FIG. 7B), and/or one or more of the rendered source perspective summaries can be updated. Additionally or alternatively, the user selection of the other content of the target electronic document 700 can be used as a training instance for updating one or more machine learning models (e.g., the machine learning model utilized by perspective identification engine 115 and/or the machine learning model utilized by additional resource scorer 125 described with respect to FIG. 1).
For example, if a user highlights other content of “When the University of Blue team plays the University of Red team next week, the University of Blue team will win” and if this other content is determined to potentially include a subjective perspective of the author John Smith, then the Author Perspective Summary 725 can be rendered via the additional content interface 705 and/or updated based on the selected additional content. Further, the Author Perspective Summary 725 can include links to additional documents that explain any source perspective included in the selected other content. In this manner, the user can flag portion(s) of the target electronic document 700 that potentially include source perspective(s) that were not previously identified as including source perspective(s). In various implementations, if a target electronic document includes quote(s) and/or content that is collated by a creator, then the target electronic document can be analyzed (e.g., using source perspective identification engine 115) to determine whether the quote(s) and/or the content that is collated by the creator misrepresent original content from an original source. Further, the original content from the original source can also be identified. In some versions of those implementations, a source perspective summary can also include an indication of any content that is misrepresented in the target electronic document. For example, if The Example-Journal, the creator of the target electronic document 700 in FIG. 7B, included a quote from the coach of the University of Blue team in the target electronic document 700 that stated, “I think we have a chance to win the game if we rebound well”, but the University of Blue coach actually stated, “We will win the game if we rebound well”, then this misrepresentation can be included in the Creator Perspective Summary 750 of FIG. 7B. Although the misrepresentation in this example may seem negligible, it may portray the coach of the University of Blue team in a more likeable and/or respectable manner than that conveyed by the actual statement of the coach, and it may be illustrative of the creator's subjective perspective on the University of Blue team. In some further versions of those implementations, the additional content interface 705 can also include a source perspective summary for an original source of the quote. For example, the additional content interface 705 may also include a source perspective summary for the University of Blue coach as the original author of the quote. In various implementations, an additional content renderer (e.g., additional content renderer 130 of FIG. 1) can be implemented (in whole or in part) by a corresponding one of application(s) (e.g., application(s) 107 of FIG. 1), can be installed as an extension of a corresponding one of the application(s), and/or can interface (e.g., via an API) with a corresponding one of the application(s). In response to accessing a given target electronic document via one of the application(s), the additional content renderer can access one or more databases (e.g., database 112 of FIG. 1) to determine whether the given target electronic document includes an entry in one or more of the databases. For example, one or more of the databases can include an index of the entries based on URLs and/or other identifiers, and the additional content renderer can search the index to determine whether an entry is present for the given target electronic document.
If so, the additional content renderer can modify rendering of the given target electronic document, utilizing one or more techniques described herein. For example, the entry can include an indication of the source perspective portion(s) of the given electronic document, and such an indication can be utilized by the additional content renderer to alter those source perspective portions such that they are highlighted, bolded, or otherwise demarcated as a cue to the user that they may potentially include a source perspective. Also, for example, the entry can include an indication of additional content related to the source perspective portion(s) of the given electronic document, and the additional content renderer can cause the additional content to be automatically rendered, or cause it to be rendered in response to certain user input (e.g., a selection or hovering over a source perspective portion). The additional content renderer can modify the rendering of the target electronic document to cause rendering of the additional content and/or can monitor for certain user input and cause the rendering to occur in response to the certain user input. In various implementations, an additional content determination engine (e.g., additional content determination engine 135 of FIG. 1) can utilize one or more de-duping techniques to ensure the source perspective summary more accurately reflects actual source perspective of at least one source of a target electronic document. The additional content determination engine can compare features of the additional resources, and can refrain from including, in determining the source perspective metric(s), certain additional resources that are duplicative of other additional resources, thereby resulting in a subset of the identified additional resources. For example, if a publisher publishes a news article on a web site associated with the publisher and then shares a link on a social media account associated with the publisher along with a quote from the news article, then only content of the original publication of the news article will be included in determining the source perspective metric(s) for the publisher. In contrast, if the publisher shares the link on the social media account associated with the publisher along with additional content that is not included in the news article (e.g., “the University of Blue team is also the best defensive team in the nation”), then both the content of the original publication of the news article and the additional content of the social media post will be included in determining the source perspective metric(s) for the publisher. Further, if an author that penned the news article shares the link from the social media account associated with the publisher along with additional content that is not included in the news article (e.g., “the University of Blue team is also the best offensive team in the nation”), then both the content of the news article and the additional content of the social media post will be included in determining the source perspective metric(s) for the author, but not in determining source perspective metric(s) for the publisher. By using these de-duping techniques, the source perspective summary can more accurately reflect actual perspectives of the at least one source of the target electronic document since corresponding source perspective metric(s) are not skewed by duplicative resources.
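As a non-limiting illustration of the de-duping described above, the following Python sketch excludes re-shares that add no new content before the source perspective metric(s) are computed; the dictionary keys are assumptions for illustration.

```python
def dedupe_resources(resources):
    """Keep one copy of each underlying piece of content; a re-share is kept
    only if it adds content that is not in the item being shared."""
    seen_urls = set()
    kept = []
    for resource in resources:
        # If this resource merely shares another item, use that item's URL as
        # its canonical identity.
        canonical = resource.get("duplicates") or resource["url"]
        adds_content = bool(resource.get("added_text", "").strip())
        if canonical in seen_urls and not adds_content:
            continue  # duplicative re-share; exclude from metric computation
        seen_urls.add(canonical)
        kept.append(resource)
    return kept
```

Under these assumptions, a social media post that only quotes an already-counted article is dropped, while a post that adds new commentary is retained alongside the original article.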
In various implementations, the additional content determination engine can utilize a graph neural network to identify additional resources related to source perspective portions of a target electronic document. Further, an additional resource scorer (e.g., additional resource scorer 125 of FIG. 1) can also utilize the graph neural network to determine relatedness scores that are indicative of relatedness between a given one of the identified additional resources (or a portion of the additional resources) and a target electronic document. A knowledge graph can include various nodes, such as author nodes, publisher nodes, creator nodes, and/or resource nodes, and edges connecting each of the nodes can define relationships between these various nodes. For example, an author node of “John Smith” can be connected to a resource node of “Basketball Season Tips Off!” by an “authored” edge; a publisher node of “Example Sports Radio Network” can be connected to the resource node of “Basketball Season Tips Off!” by a “published” edge; a creator node of “The Example-Journal” can be connected to the resource node of “Basketball Season Tips Off!” by a “created” edge; and so on. The knowledge graph can also include various edges related to social media interactions. For example, if an author (e.g., John Smith) shares a news article (e.g., “University of Blue team favorite to win national championship”), then the author node associated with the author (e.g., author node “John Smith”) can be connected to a resource node associated with the news article (e.g., resource node “University of Blue team favorite to win national championship”) by a “shared” edge. Further, in some of those implementations, the knowledge graph can be iteratively applied as input across a graph neural network to generate one or more vectors that represent the nodes and/or edges of the knowledge graph. At each iteration, the vector that represents the nodes and/or edges can then be compared to the knowledge graph. Based on this comparison, the graph neural network embeds, in each of the nodes of the knowledge graph, information about neighboring nodes in the knowledge graph. Further, upon each iteration, each of the nodes is embedded with information about the neighboring nodes' neighboring nodes such that information about each node is propagated across the knowledge graph. Accordingly, each of the nodes of the knowledge graph is embedded with information about each of the other nodes of the knowledge graph by iteratively applying the knowledge graph with the embedded nodes as input across the graph neural network. For example, assume a knowledge graph includes an author node that is connected to both a publisher node and a creator node, but that the publisher node and the creator node are not connected. Further assume that the knowledge graph is applied as input across a graph neural network to generate a vector that represents the author node, the publisher node, the creator node, and/or corresponding edges between these nodes in the knowledge graph. In this example, the author node would be embedded with information from both the publisher node and the creator node, but both the publisher node and the creator node would only be embedded with information from the author node.
However, by subsequently applying the knowledge graph with the embedded nodes as input across the graph neural network, the publisher node can be embedded with information from the creator node via the embedded author node, and the creator node can be embedded with information from the publisher node via the embedded author node. In this manner, additional resources related to source perspective portions of a target electronic document can be identified for use in generating source perspective summaries for at least one source of the target electronic document. Moreover, in some of those implementations, the additional resource scorer can determine relatedness scores for each of the identified additional resources based on the information embedded in each of the nodes of the knowledge graph. For example, the information embedded in a node can include an index of content included in each of the other nodes. This allows the additional content determination engine to quickly identify additional resources that are related to source perspective portions of a target electronic document without having to traverse edges of the knowledge graph to identify the additional resources. Further, this allows the additional resource scorer to determine the relatedness scores for some additional resources for a given source perspective portion of a target electronic document prior to receiving any indication to view one or more source perspective summaries from a user consuming the target electronic document. In situations in which certain implementations discussed herein may collect or use personal information about users (e.g., user data extracted from other electronic communications, information about a user's social network, a user's location, a user's time, a user's biometric information, and a user's activities and demographic information, relationships between users, etc.), users are provided with one or more opportunities to control whether information is collected, whether the personal information is stored, whether the personal information is used, and how the information is collected about the user, stored and used. That is, the systems and methods discussed herein collect, store and/or use user personal information only upon receiving explicit authorization from the relevant users to do so. For example, a user is provided with control over whether programs or features collect user information about that particular user or other users relevant to the program or feature. Each user for which personal information is to be collected is presented with one or more options to allow control over the information collection relevant to that user, to provide permission or authorization as to whether the information is collected and as to which portions of the information are to be collected. For example, users can be provided with one or more such control options over a communication network. In addition, certain data may be treated in one or more ways before it is stored or used so that personally identifiable information is removed. As one example, a user's identity may be treated so that no personally identifiable information can be determined. As another example, a user's geographic location may be generalized to a larger region so that the user's particular location cannot be determined.
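Returning to the knowledge graph discussion above, the following Python sketch illustrates, under simplifying assumptions and without a trained graph neural network, how iteratively mixing each node's representation with its neighbors' representations propagates information to indirectly connected nodes (e.g., from the publisher node to the creator node via the author node); the adjacency structure, vectors, and mixing weights are illustrative assumptions only.

```python
def propagate(adjacency, embeddings, iterations=2):
    """Iteratively blend each node's vector with the mean of its neighbors'
    vectors so that information spreads across the graph."""
    for _ in range(iterations):
        updated = {}
        for node, vector in embeddings.items():
            neighbors = adjacency.get(node, [])
            if not neighbors:
                updated[node] = vector
                continue
            updated[node] = [
                0.5 * value + 0.5 * sum(embeddings[n][i] for n in neighbors) / len(neighbors)
                for i, value in enumerate(vector)
            ]
        embeddings = updated
    return embeddings


# Author connected to both publisher and creator; publisher and creator not connected.
adjacency = {"author": ["publisher", "creator"], "publisher": ["author"], "creator": ["author"]}
embeddings = {"author": [1.0, 0.0], "publisher": [0.0, 1.0], "creator": [0.0, 0.5]}
print(propagate(adjacency, embeddings))
```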
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure. 16730377 google llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Jul 26th, 2018 12:00AM https://www.uspto.gov?id=US11317417-20220426 Switching transmission technologies within a spectrum based on network load Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for switching between transmission technologies within a spectrum based on network load are described. In one aspect, a method includes obtaining first network load information that indicates network load for a first access point operating using listen before talk (LBT) and second network load information for a second access point using LBT, determining if at least one of the first network load information or the second network load information satisfies a network load threshold, and in response to determining that that the network load information satisfies the network load threshold, providing an instruction to the first access point to operate using frequency domain multiplexing. 11317417 1. A computer-implemented method, the method comprising: obtaining first network load information of a first access point operating using listen before talk (LBT) in a third spectrum that includes at least a portion of a first spectrum and at least a portion of a second spectrum, the second spectrum not overlapping any portion of the first spectrum; obtaining second network load information of a second access point operating using LBT in the second spectrum; determining if at least one of the first network load information or the second network load information satisfies a network load threshold; and in response to determining that the network load threshold is satisfied, providing an instruction to the first access point to operate using frequency domain multiplexing in the first spectrum. 2. The method of claim 1, wherein determining if at least one of the first network load information or the second network load information satisfies a network load threshold comprises: determining a number of devices connected to the first access point; and determining that the number of devices connected to the first access point satisfies a predetermined number. 3. The method of claim 1, wherein determining if the first network load information or the second network load information satisfies the network load threshold comprises: determining that a utilization of physical resource blocks satisfies a predetermined percentage. 4. The method of claim 1, wherein determining if at least one of the first network load information or the second network load information satisfies the network load threshold comprises: determining that a utilization of a physical downlink control channel satisfies a predetermined percentage. 5. The method of claim 1, wherein the third spectrum comprises an entirety of the first spectrum and the second spectrum. 6. The method of claim 1, wherein the third spectrum comprises frequencies between 3550 and 3700 GHz. 7. The method of claim 1, wherein providing the instruction to the first access point to operate using frequency domain multiplexing in the first spectrum comprises using Time Division Long Term Evolution (TD-LTE). 8. The method of claim 1, wherein the first access point is operated by a first carrier and the second access point is operated by a second carrier. 9. The method of claim 1, further comprising: providing an instruction to the second access point to operate using frequency domain multiplexing in the second spectrum. 10. 
The method of claim 9, further comprising: obtaining third network load information of the first access point using frequency domain multiplexing in the first spectrum; obtaining fourth network load information of the second access point operating using frequency domain multiplexing in the first spectrum; determining if at least one of the third network load information or the fourth network load information satisfies a second network load threshold; and in response to determining that the second network load threshold is satisfied, providing a second instruction to the first access point to operate using LBT in the third spectrum. 11. The method of claim 10, wherein determining if at least one of the third network load information or the fourth network load information satisfies the second network load threshold comprises: determining a number of devices connected to the first access point; and determining that the number of devices connected to the first access point satisfies a predetermined number. 12. The method of claim 10, wherein determining if at least one of the third network load information or the fourth network load information satisfies the second network load threshold comprises: determining that a utilization of physical resource blocks satisfies a predetermined percentage. 13. The method of claim 10, wherein determining if at least one of the third network load information or the fourth network load information satisfies the second network load threshold comprises: determining that a packet collision probability satisfies a predetermined probability. 14. The method of claim 10 wherein determining if at least one of the third network load information or the fourth network load information satisfies the second network load threshold comprises: determining that a packet random backoff satisfies a predetermined length of time. 15. The method of claim 1, wherein operating using LBT comprises using Carrier Sense Multiple Access (CSMA). 16. A system comprising: a data processing apparatus; and a non-transitory computer readable storage medium in data communication with the data processing apparatus and storing instructions executable by the data processing apparatus and upon such execution direct the data processing apparatus perform operations that: obtain first network load information of a first access point operating using listen before talk (LBT) in a third spectrum that includes at least a portion of a first spectrum and at least a portion of a second spectrum, the second spectrum not overlapping any portion of the first spectrum; obtain second network load information of a second access point operating using LBT in the second spectrum; determine if at least one of the first network load information or the second network load information satisfies a network load threshold; and in response to the determination that the network load threshold is satisfied, provide an instruction to the first access point to operate using frequency domain multiplexing in the first spectrum. 17. The system of claim 16, wherein the operation of determining if at least one of the first network load information or the second network load information satisfies the network load threshold comprises the data processing apparatus performing operations that: determine a number of devices connected to the first access point; and determine that the number of devices connected to the first access point satisfies a predetermined number. 18. 
The system of claim 16, wherein the operation of determining if at least one of the first network load information or the second network load information satisfies the network load threshold comprises the data processing apparatus performing operations that: determines that a utilization of physical resource blocks satisfies a predetermined percentage. 19. The system of claim 16, wherein the operation of determining if at least one of the first network load information or the second network load information satisfies the network load threshold comprises the data processing apparatus performing operations that: determine that a utilization of a physical downlink control channel satisfies a predetermined percentage. 20. A non-transitory computer readable storage medium storing instructions executable by a data processing apparatus and upon such execution cause the data processing apparatus to perform operations that: obtain first network load information of a first access point operating using listen before talk (LBT) in a third spectrum that includes at least a portion of a first spectrum and at least a portion of a second spectrum, the second spectrum not overlapping any portion of the first spectrum; obtain second network load information of a second access point operating using LBT in the second spectrum; determine if at least one of the first network load information or the second network load information satisfies a network load threshold; and in response to the determination that the network load threshold is satisfied, provide an instruction to the first access point to operate using frequency domain multiplexing in the first spectrum. 20 FIELD This specification relates to data transmission. BACKGROUND Networks can communicate using a variety of different technologies. For example, Global System for Mobile Communication (GSM), IEEE 802.11, and 3G are all different technologies that can be used for wireless communication within a network. SUMMARY In general, an aspect of the subject matter described in this specification may involve a process for switching between transmission technologies within a spectrum based on network load. In a shared spectrum, for example the Citizens Broadband Radio Service (CBRS), multiple different techniques for wireless transmission may be used by different operators. For example, Time Division Long-Term Evolution (TD-LTE) may be used by one carrier in a first portion of the spectrum and Listen Before Talk (LBT) may be used by another carrier in a second different portion of the spectrum. As an example using particular technologies, between TD-LTE and LBT technologies, there may be no technology that is universally better than the other one in all scenarios. LBT may perform better than TD-LTE under low network load and TD-LTE may perform better than LBT under high network load. This may be due to an increase in overhead from packet random backoff in LBT corresponding to an increase in network load, making LBT less efficient than TD-LTE once network load gets high enough. Accordingly, there may be spectrum efficiency loss when using either technology in a certain non-desired scenario. For example, there may be spectrum efficiency loss in using LBT under high network load instead of using TD-LTE and using TD-LTE under low network load instead of using LBT. Dynamically switching between the two technologies based on network load may reduce spectrum efficiency loss by limiting the use of each of the technologies in non-desired scenarios.
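As a non-limiting illustration of such dynamic switching, the following Python sketch selects a transmission technology based on monitored load factors; the threshold values, field names, and returned structure are assumptions for illustration and are not values from this disclosure.

```python
# Hypothetical thresholds for the network load factors discussed in this summary.
THRESHOLDS = {
    "connected_devices": 50,    # number of devices connected to the access point
    "prb_utilization": 0.70,    # physical resource block utilization
    "pdcch_utilization": 0.70,  # physical downlink control channel utilization
}


def load_satisfies_threshold(load):
    """Return True if any monitored load factor meets or exceeds its threshold."""
    return any(load.get(name, 0) >= limit for name, limit in THRESHOLDS.items())


def select_technology(first_ap_load, second_ap_load):
    """Instruct the first access point to use frequency domain multiplexing
    (e.g., TD-LTE) in its own spectrum under high load; otherwise keep listen
    before talk (LBT) across the shared spectrum."""
    if load_satisfies_threshold(first_ap_load) or load_satisfies_threshold(second_ap_load):
        return {"technology": "TD-LTE", "spectrum": "first"}
    return {"technology": "LBT", "spectrum": "third"}


print(select_technology({"connected_devices": 12}, {"prb_utilization": 0.85}))
# -> {'technology': 'TD-LTE', 'spectrum': 'first'}
```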
Accordingly, a system may decrease spectrum efficiency loss by monitoring network load and dynamically switching between different transmission technologies based on the network load. The system may consider network load in terms of a number of factors including one or more of number of connected devices, physical resource block utilization, physical downlink control channel utilization, packet collision probability, packet random backoff, and other factors. In general, one innovative aspect of the subject matter described in this specification is embodied in methods that include the actions of obtaining network load information that indicates network load for a first access point operating using frequency domain multiplexing in a first spectrum and network load for a second operator using frequency domain multiplexing in a second spectrum that does not overlap any portion of the first spectrum, determining that the network load information satisfies a network load threshold, and in response to determining that the network load information satisfies the network load threshold, providing an instruction to the first access point to operate using listen before talk (LBT) in a third spectrum that includes at least a portion of the first spectrum and at least a portion of the second spectrum. Other implementations of these aspects include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions. The foregoing and other embodiments can each optionally include one or more of the following features, alone or in combination. For instance, in certain aspects, determining that the network load information satisfies a network load threshold includes determining that a number of devices connected to the first access point satisfies a predetermined number. In some aspects, determining that the network load information satisfies a network load threshold includes determining that a utilization of physical resource blocks satisfies a predetermined percentage. In some implementations, determining that the network load information satisfies a network load threshold includes determining that utilization of a physical downlink control channel satisfies a predetermined percentage. In certain aspects, using LBT in a third spectrum includes using LBT across an entirety of the first spectrum and the second spectrum. In some aspects, the third spectrum is between 3550 MHz and 3700 MHz. In some implementations, using frequency domain multiplexing includes using TD-LTE and using LBT comprises using Carrier Sense Multiple Access (CSMA). In certain aspects, the first access point is operated by a first carrier and the second access point is operated by a second carrier. In some aspects, actions include providing an instruction to the second access point to operate using LBT in the third spectrum.
In some implementations, actions include obtaining network load information that indicates network load for the first access point operating using LBT in the third spectrum and network load for the second operator using LBT in the third spectrum, determining that the network load information satisfies a second network load threshold, and in response to determining that the network load information satisfies a second network load threshold, providing a second instruction to the first access point to operate using frequency domain multiplexing in the first spectrum. In certain aspects, determining that the network load information satisfies a second network load threshold includes determining that a number of devices connected to the first access point satisfies a predetermined number. In some aspects, determining that the network load information satisfies a second network load threshold includes determining that a utilization of physical resource blocks satisfies a predetermined percentage. In some implementations, determining that the network load information satisfies a second network load threshold includes determining that utilization of a physical downlink control channel satisfies a predetermined percentage. In certain aspects, determining that the network load information satisfies a second network load threshold includes determining that packet collision probability satisfies a predetermined probability. In some aspects, determining that the network load information satisfies a second network load threshold includes determining that packet random backoff satisfies a predetermined length of time. Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. By switching technologies within a spectrum based on network load, the system may decrease spectrum efficiency loss and increase network spectrum efficiency. The decrease in spectrum efficiency loss and increase in network spectrum efficiency may result in both a reduction in latency and an increase in throughput in communications between devices within the network, all without using additional spectrum. The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 illustrates a block diagram of an example system that switches between transmission technologies within a spectrum based on network load. FIG. 2 is a flow diagram of an example process for switching between transmission technologies within a spectrum based on network load. FIG. 3 is a block diagram of a computing system that can be used in connection with computer-implemented methods described in this specification. Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION FIG. 1 illustrates a diagram of an example system 100 that switches between transmission technologies within a spectrum based on network load. Briefly, and as described in further detail below, the system 100 includes a first access point 120, a first user equipment 122, a second access point 130, a second user equipment 132, and a spectrum controller 110. The first access point 120 may include one or more transceivers to send and receive wireless transmissions to and from user equipment.
For example, the first access point 120 may be a base station of a first carrier that sends and receives wireless transmissions to and from the first user equipment 122. The first user equipment 122 may be a device that sends and receives wireless transmissions using one or more transceivers. For example, the first user equipment 122 may include a mobile computing device, a laptop, a tablet, a smart watch, or a mobile hotspot device. The first access point 120 may receive instructions from the spectrum controller 110 that indicate whether the first access point 120 should use frequency domain multiplexing or LBT for communications with user equipment and what frequency range the first access point 120 should use. For example, the first access point 120 may obtain a mode switch instruction from the spectrum controller 110 that indicates the first access point 120 should use LBT in a frequency range of 3550-3700 MHz and, in response, then switch from using TD-LTE in a frequency range of 3550-3649 MHz to using LBT in the frequency range of 3550-3700 MHz. Frequency domain multiplexing may involve dividing bandwidth available in a communication medium into a series of non-overlapping frequency sub-bands, each of which is used to carry a separate signal. For example, the first access point 120 may communicate with the first user equipment 122 in a frequency range of 3550-3555 MHz and communicate with another user equipment in a non-overlapping frequency range of 3556-3560 MHz. TD-LTE may be a form of frequency domain multiplexing. LBT may involve transmissions across an entire frequency range. For example, the first access point 120 may communicate with the first user equipment 122 in a frequency range of 3550-3700 MHz and with another user equipment also in the frequency range of 3550-3700 MHz. Use of the same frequency range is possible in LBT as a device may determine whether another device is already transmitting prior to transmitting. If no other device is transmitting, the device transmits. If another device is transmitting, the device determines there is a collision and waits a random amount of time up to a maximum length of time, the length of time waited referred to as a collision backoff time and the maximum length of time referred to as a packet random backoff, before attempting to transmit again. The more times the device needs to wait to transmit, i.e., the more collisions, the more the device increases the packet random backoff. Carrier Sense Multiple Access (CSMA) may be a form of LBT. Similarly, the second access point 130 may include one or more transceivers to send and receive wireless transmissions to and from user equipment. For example, the second access point 130 may be a base station of a second carrier that sends and receives wireless transmissions to and from the second user equipment 132. The second user equipment 132 may be a device that sends and receives wireless transmissions using one or more transceivers. For example, the second user equipment 132 may include a mobile computing device, a laptop, a tablet, a smart watch, or a mobile hotspot device. The second access point 130 may receive instructions from the spectrum controller 110 that indicate whether the second access point 130 should use frequency domain multiplexing or LBT for communications with user equipment and what frequency range the second access point 130 should use.
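The collision backoff behavior described in the preceding paragraph can be sketched as follows. This is a minimal, hypothetical illustration assuming a CSMA-style binary exponential backoff; the slot duration, the cap on the backoff window, and the channel_busy stand-in are assumptions made for the sketch, not details from the patent.

```python
import random

# Minimal sketch of LBT collision backoff assuming a CSMA-style binary
# exponential backoff. SLOT_MS, MAX_EXPONENT, and channel_busy() are
# illustrative assumptions, not taken from the patent.

SLOT_MS = 9          # assumed sensing slot duration in milliseconds
MAX_EXPONENT = 6     # assumed cap on how far the backoff window may grow

def channel_busy() -> bool:
    """Stand-in for carrier sensing; randomly reports a busy channel."""
    return random.random() < 0.3

def transmit_with_lbt(max_attempts: int = 10) -> bool:
    exponent = 0
    for _ in range(max_attempts):
        if not channel_busy():
            return True                      # channel idle: transmit now
        # Busy channel: grow the packet random backoff (the maximum wait) and
        # draw a random collision backoff time from it before listening again.
        exponent = min(exponent + 1, MAX_EXPONENT)
        packet_random_backoff_ms = (2 ** exponent) * SLOT_MS
        collision_backoff_ms = random.uniform(0, packet_random_backoff_ms)
        print(f"busy: waiting {collision_backoff_ms:.1f} ms "
              f"(max {packet_random_backoff_ms} ms)")
    return False                             # gave up after max_attempts

transmit_with_lbt()
```

The growth of packet_random_backoff_ms with repeated collisions is the overhead that makes LBT less attractive as network load rises.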
For example, the second access point 130 may obtain a mode switch instruction from the spectrum controller 110 that indicates the second access point 130 should use LBT in a frequency range of 3550-3700 MHz and, in response, then use LBT in that frequency range. The spectrum controller 110 may control allocation of spectrum used by the first access point 120 and the second access point 130. For example, the spectrum controller 110 may instruct the first access point 120 to communicate with the first user equipment 122 using the frequency range of 3550-3649 megahertz (MHz) and instruct the second access point 130 to communicate with the second user equipment 132 using the frequency range of 3650-3700 MHz. Additionally or alternatively, the spectrum controller 110 may control the communication technology used by the first access point 120 and the second access point 130. For example, the spectrum controller 110 may instruct the first access point 120 to switch from using frequency domain multiplexing to using LBT. The spectrum controller 110 may control the spectrum and the communication technology that the first access point 120 and the second access point 130 use to increase network transmission efficiency. For example, the spectrum controller 110 may instruct the first access point 120 and the second access point 130 to use TD-LTE within different respective frequency ranges under network load conditions where TD-LTE will perform better than LBT. In another example, the spectrum controller 110 may instruct the first access point 120 and the second access point 130 to use LBT across the same frequency range under network load conditions where LBT will perform better than TD-LTE. The spectrum controller 110 may control the spectrum and the communication technology that the first access point 120 and the second access point 130 use based at least on determining that network load information satisfies a network load threshold. The network load information may indicate network load for the first access point 120 and the second access point 130. For example, as shown in FIG. 1, initially the first access point 120 and first user equipment 122 are communicating using TD-LTE in a frequency range of 3550-3649 MHz and the second access point 130 and second user equipment 132 are communicating using TD-LTE in a frequency range of 3650-3700 MHz. The spectrum controller 110 obtains network load information, determines that the network load information satisfies a network load threshold, and, in response, then transmits mode switch instructions to both the first access point 120 and the second access point 130 that cause the first access point 120 and the second access point 130 to both switch to using LBT in a frequency range of 3550-3700 MHz. Satisfaction of the network load threshold may indicate that LBT provides more network spectrum efficiency than frequency domain multiplexing and non-satisfaction of the network load threshold may indicate that frequency domain multiplexing provides more network spectrum efficiency than LBT. For example, the spectrum controller 110 may determine, based at least on obtained network load information, that TD-LTE will perform better. In response, the spectrum controller 110 may determine non-overlapping frequency ranges for each access point and instruct each of the access points to use TD-LTE within the determined corresponding frequency ranges. In another example, the spectrum controller 110 may determine, based at least on obtained network load information, that LBT will perform better.
In response, the spectrum controller 110 may combine the non-overlapping frequency ranges for each access point into a single frequency range, which may or may not be contiguous, and instruct each of the access points to use LBT within the single frequency range. The network load information may include one or more of a number of devices connected to each access point, a utilization of physical resource blocks, a utilization of a physical downlink control channel, a packet collision probability, and a packet random backoff. A physical resource block may refer to a smallest unit of allocation for a communication in a particular communication technology. A physical downlink control channel may be a channel with a fixed capacity that includes control information for LTE. A packet collision probability may refer to a probability that, when a device using LBT wants to transmit, another device will be transmitting. The spectrum controller 110 may determine that the network load information satisfies a network load threshold based on determining that a number of devices connected to the first access point satisfies a predetermined number. For example, the spectrum controller 110 may determine that two user equipment are in communication with the first access point 120, which is less than a predetermined number of five, ten, fifteen, or some other number, and, in response, determine the network load threshold is satisfied. In another example, the spectrum controller 110 may determine that one hundred user equipment are in communication with the first access point 120, which is not less than a predetermined number of five, ten, fifteen, or some other number, and, in response, determine the network load threshold is not satisfied. The spectrum controller 110 may determine that the network load information satisfies a network load threshold based on determining that a utilization of physical resource blocks satisfies a predetermined percentage. For example, the spectrum controller 110 may determine that less than ten, fifteen, twenty, or some other predetermined percentage of physical resource blocks are being used and, in response, determine that the network load information satisfies a network load threshold. In another example, the spectrum controller 110 may determine that equal to or more than ten, fifteen, twenty, or some other predetermined percentage of physical resource blocks are being used and, in response, determine that the network load information does not satisfy a network load threshold. The spectrum controller 110 may determine that the network load information satisfies a network load threshold based on determining that utilization of a physical downlink control channel satisfies a predetermined percentage. For example, the spectrum controller 110 may determine that less than ten, fifteen, twenty, or some other predetermined percentage of a physical downlink control channel is being utilized and, in response, determine that the network load information satisfies a network load threshold. In another example, the spectrum controller 110 may determine that equal to or more than ten, fifteen, twenty, or some other predetermined percentage of a physical downlink control channel is being used and, in response, determine that the network load information does not satisfy a network load threshold.
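A short Python sketch of the per-metric checks that the spectrum controller 110 is described as performing above follows. The data structure, field names, and threshold values are illustrative assumptions rather than values from the patent; satisfying the threshold here means load is low enough to favor LBT across the combined spectrum.

```python
from dataclasses import dataclass

# Sketch of the network load threshold checks described above. Field names
# and threshold values are illustrative assumptions, not from the patent.

@dataclass
class NetworkLoad:
    connected_devices: int     # number of user equipment attached
    prb_utilization: float     # physical resource block utilization, 0..1
    pdcch_utilization: float   # physical downlink control channel utilization, 0..1

MAX_DEVICES = 10               # assumed predetermined number
MAX_PRB_UTILIZATION = 0.10     # assumed predetermined percentage
MAX_PDCCH_UTILIZATION = 0.10   # assumed predetermined percentage

def satisfies_low_load_threshold(load: NetworkLoad) -> bool:
    """True when every monitored metric is below its threshold, indicating
    that LBT across the combined spectrum may be the more efficient choice."""
    return (load.connected_devices < MAX_DEVICES
            and load.prb_utilization < MAX_PRB_UTILIZATION
            and load.pdcch_utilization < MAX_PDCCH_UTILIZATION)

# Two user equipment and light utilization: threshold satisfied.
print(satisfies_low_load_threshold(NetworkLoad(2, 0.05, 0.08)))    # True
# One hundred user equipment and heavy utilization: not satisfied.
print(satisfies_low_load_threshold(NetworkLoad(100, 0.40, 0.30)))  # False
```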
While the above may describe determining that network load information satisfies a network load threshold for switching from using frequency domain multiplexing to LBT, similar thresholds may be used for switching from LBT to frequency domain multiplexing. For example, instead of determining whether less than a predetermined number of five, ten, fifteen, or some other number of user equipment is in communication with the first access point 120, the spectrum controller may determine more than a predetermined number of five, ten, fifteen, or some other number of user equipment is in communication with the first access point 120 and, in response, determine a network load threshold for switching from LBT to TD-LTE is satisfied. The spectrum controller 110 may determine that the network load information satisfies a network load threshold based on determining that packet collision probability satisfies a predetermined probability. For example, the spectrum controller 110 may determine that a packet collision probability is greater than ten, fifteen, twenty, or some other predetermined percentage and, in response, determine that the network load information satisfies a network load threshold. In another example, the spectrum controller 110 may determine that a packet collision probability is not greater than ten, fifteen, twenty, or some other predetermined percentage and, in response, determine that the network load information does not satisfy a network load threshold. The spectrum controller 110 may determine that the network load information satisfies a network load threshold based on determining that packet random backoff satisfies a predetermined length of time. For example, the spectrum controller 110 may determine that a packet random backoff is greater than four, five, six hundred milliseconds, or some other predetermined length of time and, in response, determine that the network load information satisfies a network load threshold. In another example, the spectrum controller 110 may determine that a packet random backoff is not greater than four, five, six hundred milliseconds, or some other predetermined length of time and, in response, determine that the network load information does not satisfy a network load threshold. In some implementations, the spectrum controller 110 may separately determine whether the network load information satisfies a network load threshold for switching transmission technology to both the first access point 120 and second access point 130 and only determine that there is a satisfaction when the network load threshold is satisfied for each access point separately. For example, the spectrum controller 110 may determine that while the first access point 120 has fewer than fifty user equipment connected, the second access point 130 has more than fifty user equipment connected, so determine that the network threshold is not satisfied. In another example, the spectrum controller 110 may determine that the first access point 120 has fewer than fifty user equipment connected and the second access point 130 has fewer than fifty user equipment connected, so determine that the network threshold is satisfied. In some implementations, the spectrum controller 110 determines that the network load information satisfies a network load threshold based on one or more of number of devices connected to each access point, utilization of physical resource blocks, utilization of physical downlink control channel, a packet collision probability, and a packet random backoff. 
For example, the spectrum controller 110 may assign a weight to each of these factors, determine a score based on the weights, and determine that the network load information satisfies a network load threshold if the score satisfies a score threshold. Different configurations of the system 100 may be used where functionality of the first access point 120, the first user equipment 122, the second access point 130, the second user equipment 132, and the spectrum controller 110 may be combined, further separated, distributed, or interchanged. For example, the spectrum controller 110 may be incorporated into one of the access points. FIG. 2 is a flow diagram of an example process 200 for switching between transmission technologies within a spectrum based on network load. For example, the process 200 may be performed by the spectrum controller 110. The process 200 includes obtaining network load information for a first spectrum and a second spectrum (210). For example, the spectrum controller 110 may obtain network load information that indicates five percent of physical resource blocks for a frequency range of 3550-3649 MHz are used by the first access point 120 and eight percent of physical resource blocks for a frequency range of 3650-3700 MHz are used by the second access point 130. The process 200 includes determining that the network load information satisfies a network load threshold (220). For example, the spectrum controller 110 may determine that five percent of physical resource blocks being used by the first access point 120 and eight percent of physical resource blocks being used by the second access point 130 are both individually less than a predetermined threshold of ten percent of physical resource blocks being used and, in response, determine that the network load information satisfies a network load threshold. The process 200 includes providing an instruction to operate using LBT in a third spectrum that includes at least a portion of the first spectrum and a portion of the second spectrum (230). For example, the spectrum controller 110 may provide a mode switch instruction to the first access point 120 to switch to using LBT in a frequency range of 3550-3700 MHz that includes the entirety of the frequency range of 3550-3649 MHz used by the first access point 120 in TD-LTE and the frequency range of 3650-3700 MHz used by the second access point 130 in TD-LTE. FIG. 3 shows an example of a computing device 300 and a mobile computing device 350 that can be used to implement the techniques described here. The computing device 300 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The mobile computing device 350 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to be limiting. The computing device 300 includes a processor 302, a memory 304, a storage device 306, a high-speed interface 308 connecting to the memory 304 and multiple high-speed expansion ports 310, and a low-speed interface 312 connecting to a low-speed expansion port 314 and the storage device 306.
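The weighted scoring mentioned above, together with the process 200 flow of obtaining load information (210), checking the threshold (220), and providing a mode switch instruction (230), can be sketched as follows. The weights, the score threshold, and the send_mode_switch_instruction helper are hypothetical and included only for illustration; none of them are taken from the patent.

```python
# Sketch of a weighted network load score and of process 200. WEIGHTS,
# SCORE_THRESHOLD, and send_mode_switch_instruction are illustrative
# assumptions, not taken from the patent.

WEIGHTS = {                      # assumed relative importance of each factor
    "connected_devices": 0.3,
    "prb_utilization": 0.3,
    "pdcch_utilization": 0.2,
    "collision_probability": 0.1,
    "random_backoff": 0.1,
}
SCORE_THRESHOLD = 0.5            # assumed: below this, load is low, prefer LBT

def load_score(metrics: dict) -> float:
    """Weighted sum of load metrics, each already normalized to 0..1."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

def send_mode_switch_instruction(access_point: str, technology: str, band: str) -> None:
    # Hypothetical transport; a real controller would signal the access point.
    print(f"instructing {access_point}: use {technology} in {band}")

def run_process_200(ap1_metrics: dict, ap2_metrics: dict) -> None:
    # (210) obtain load information, (220) check the threshold, (230) instruct.
    if load_score(ap1_metrics) < SCORE_THRESHOLD and load_score(ap2_metrics) < SCORE_THRESHOLD:
        send_mode_switch_instruction("first access point 120", "LBT", "3550-3700 MHz")
        send_mode_switch_instruction("second access point 130", "LBT", "3550-3700 MHz")

run_process_200(
    {"connected_devices": 0.10, "prb_utilization": 0.05, "pdcch_utilization": 0.08,
     "collision_probability": 0.02, "random_backoff": 0.10},
    {"connected_devices": 0.20, "prb_utilization": 0.08, "pdcch_utilization": 0.05,
     "collision_probability": 0.03, "random_backoff": 0.10},
)
```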
Each of the processor 302, the memory 304, the storage device 306, the high-speed interface 308, the high-speed expansion ports 310, and the low-speed interface 312, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 302 can process instructions for execution within the computing device 300, including instructions stored in the memory 304 or on the storage device 306 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display 316 coupled to the high-speed interface 308. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory 304 stores information within the computing device 300. In some implementations, the memory 304 is a volatile memory unit or units. In some implementations, the memory 304 is a non-volatile memory unit or units. The memory 304 may also be another form of computer-readable medium, such as a magnetic or optical disk. The storage device 306 is capable of providing mass storage for the computing device 300. In some implementations, the storage device 306 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. Instructions can be stored in an information carrier. The instructions, when executed by one or more processing devices (for example, processor 302), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices such as computer- or machine-readable mediums (for example, the memory 304, the storage device 306, or memory on the processor 302). The high-speed interface 308 manages bandwidth-intensive operations for the computing device 300, while the low-speed interface 312 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In some implementations, the high-speed interface 308 is coupled to the memory 304, the display 316 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 310, which may accept various expansion cards (not shown). In the implementation, the low-speed interface 312 is coupled to the storage device 306 and the low-speed expansion port 314. The low-speed expansion port 314, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device 300 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 320, or multiple times in a group of such servers. In addition, it may be implemented in a personal computer such as a laptop computer 322. It may also be implemented as part of a rack server system 324. 
Alternatively, components from the computing device 300 may be combined with other components in a mobile device (not shown), such as a mobile computing device 350. Each of such devices may contain one or more of the computing device 300 and the mobile computing device 350, and an entire system may be made up of multiple computing devices communicating with each other. The mobile computing device 350 includes a processor 352, a memory 364, an input/output device such as a display 354, a communication interface 366, and a transceiver 368, among other components. The mobile computing device 350 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor 352, the memory 364, the display 354, the communication interface 366, and the transceiver 368, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. The processor 352 can execute instructions within the mobile computing device 350, including instructions stored in the memory 364. The processor 352 may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 352 may provide, for example, for coordination of the other components of the mobile computing device 350, such as control of user interfaces, applications run by the mobile computing device 350, and wireless communication by the mobile computing device 350. The processor 352 may communicate with a user through a control interface 358 and a display interface 356 coupled to the display 354. The display 354 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 356 may comprise appropriate circuitry for driving the display 354 to present graphical and other information to a user. The control interface 358 may receive commands from a user and convert them for submission to the processor 352. In addition, an external interface 362 may provide communication with the processor 352, so as to enable near area communication of the mobile computing device 350 with other devices. The external interface 362 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used. The memory 364 stores information within the mobile computing device 350. The memory 364 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory 374 may also be provided and connected to the mobile computing device 350 through an expansion interface 372, which may include, for example, a SIMM (Single In Line Memory Module) card interface. The expansion memory 374 may provide extra storage space for the mobile computing device 350, or may also store applications or other information for the mobile computing device 350. Specifically, the expansion memory 374 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, the expansion memory 374 may be provided as a security module for the mobile computing device 350, and may be programmed with instructions that permit secure use of the mobile computing device 350. 
In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. The memory may include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, instructions are stored in an information carrier that the instructions, when executed by one or more processing devices (for example, processor 352), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices, such as one or more computer- or machine-readable mediums (for example, the memory 364, the expansion memory 374, or memory on the processor 352). In some implementations, the instructions can be received in a propagated signal, for example, over the transceiver 368 or the external interface 362. The mobile computing device 350 may communicate wirelessly through the communication interface 366, which may include digital signal processing circuitry where necessary. The communication interface 366 may provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication may occur, for example, through the transceiver 368 using a radio-frequency. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 370 may provide additional navigation- and location-related wireless data to the mobile computing device 350, which may be used as appropriate by applications running on the mobile computing device 350. The mobile computing device 350 may also communicate audibly using an audio codec 360, which may receive spoken information from a user and convert it to usable digital information. The audio codec 360 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 350. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on the mobile computing device 350. The mobile computing device 350 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 380. It may also be implemented as part of a smart-phone 382, personal digital assistant, or other similar mobile device. Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs, computer hardware, firmware, software, and/or combinations thereof. 
These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs, also known as programs, software, software applications or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. As used herein, the terms “machine-readable medium” “computer-readable medium” refers to any computer program product, apparatus and/or device, e.g., magnetic discs, optical disks, memory, Programmable Logic devices (PLDs) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. The systems and techniques described here can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component such as an application server, or that includes a front-end component such as a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication such as, a communication network. Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. 
Further to the descriptions above, a user may be provided with controls allowing the user to make an election as to both if and when systems, programs or features described herein may enable collection of user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), and if the user is sent content or communications from a server. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, in some embodiments, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over what information is collected about the user, how that information is used, and what information is provided to the user. A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the scope of the invention. For example, various forms of the flows shown above may be used, with steps re-ordered, added, or removed. Also, although several applications of the systems and methods have been described, it should be recognized that numerous other applications are contemplated. Accordingly, other embodiments are within the scope of the following claims. Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. In the following some examples are described. Example 1 A computer-implemented method, the method comprising: obtaining network load information that indicates network load for a first access point operating using frequency domain multiplexing in a first spectrum and network load for a second operator using frequency domain multiplexing in a second spectrum that does not overlap any portion of the first spectrum; determining that the network load information satisfies a network load threshold; and in response to determining that the network load information satisfies the network load threshold, providing an instruction to the first access point to operate using listen before talk (LBT) in a third spectrum the includes at least a portion of the first spectrum and at least a portion of the second spectrum. Example 2 The method of example 1, wherein determining that the network load information satisfies a network load threshold comprises: determining that a number of devices connected to the first access point satisfies a predetermined number. Example 3 The method of example 1 or 2, wherein determining that the network load information satisfies a network load threshold comprises: determining that a utilization of physical resource blocks satisfies a predetermined percentage. 
Example 4 The method of at least one of the preceding examples, wherein determining that the network load information satisfies a network load threshold comprises: determining that utilization of a physical downlink control channel satisfies a predetermined percentage. Example 5 The method of at least one of the preceding examples, wherein using LBT in a third spectrum comprises: using LBT across an entirety of the first spectrum and the second spectrum. Example 6 The method of at least one of the preceding examples, wherein the third spectrum is between 3550 MHz and 3700 MHz. Example 7 The method of at least one of the preceding examples, wherein using frequency domain multiplexing comprises using TD-LTE and using LBT comprises using Carrier Sense Multiple Access (CSMA). Example 8 The method of at least one of the preceding examples, wherein the first access point is operated by a first carrier and the second access point is operated by a second carrier. Example 9 The method of at least one of the preceding examples, comprising: providing an instruction to the second access point to operate using LBT in the third spectrum. Example 10 The method of at least one of the preceding examples, comprising: obtaining network load information that indicates network load for the first access point operating using LBT in the third spectrum and network load for the second operator using LBT in the third spectrum; determining that the network load information satisfies a second network load threshold; and in response to determining that the network load information satisfies a second network load threshold, providing a second instruction to the first access point to operate using frequency domain multiplexing in the first spectrum. Example 11 The method of example 10, wherein determining that the network load information satisfies a second network load threshold comprises: determining that a number of devices connected to the first access point satisfies a predetermined number. Example 12 The method of example 10 or 11, wherein determining that the network load information satisfies a second network load threshold comprises: determining that a utilization of physical resource blocks satisfies a predetermined percentage. Example 13 The method of at least one of examples 10 to 12, wherein determining that the network load information satisfies a second network load threshold comprises: determining that utilization of a physical downlink control channel satisfies a predetermined percentage. Example 14 The method of at least one of examples 10 to 13, wherein determining that the network load information satisfies a second network load threshold comprises: determining that packet collision probability satisfies a predetermined probability. Example 15 The method of at least one of examples 10 to 14, wherein determining that the network load information satisfies a second network load threshold comprises: determining that packet random backoff satisfies a predetermined length of time.
Example 16 A system comprising: a data processing apparatus; and a non-transitory computer readable storage medium in data communication with the data processing apparatus and storing instructions executable by the data processing apparatus and upon such execution cause the data processing apparatus to perform operations comprising: obtaining network load information that indicates network load for a first access point operating using frequency domain multiplexing in a first spectrum and network load for a second operator using frequency domain multiplexing in a second spectrum that does not overlap any portion of the first spectrum; determining that the network load information satisfies a network load threshold; and in response to determining that the network load information satisfies the network load threshold, providing an instruction to the first access point to operate using listen before talk (LBT) in a third spectrum the includes at least a portion of the first spectrum and at least a portion of the second spectrum. Example 17 The system of example 16, wherein determining that the network load information satisfies a network load threshold comprises: determining that a number of devices connected to the first access point satisfies a predetermined number. Example 18 The system of example 16 or 17, wherein determining that the network load information satisfies a network load threshold comprises: determining that a utilization of physical resource blocks satisfies a predetermined percentage. Example 19 The system of at least one of examples 16 to 18, wherein determining that the network load information satisfies a network load threshold comprises: determining that utilization of a physical downlink control channel satisfies a predetermined percentage. Example 20 A non-transitory computer readable storage medium storing instructions executable by a data processing apparatus and upon such execution cause the data processing apparatus to perform operations comprising: obtaining network load information that indicates network load for a first access point operating using frequency domain multiplexing in a first spectrum and network load for a second operator using frequency domain multiplexing in a second spectrum that does not overlap any portion of the first spectrum; determining that the network load information satisfies a network load threshold; and in response to determining that the network load information satisfies the network load threshold, providing an instruction to the first access point to operate using listen before talk (LBT) in a third spectrum the includes at least a portion of the first spectrum and at least a portion of the second spectrum. 16756361 google llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Jun 15th, 2018 12:00AM https://www.uspto.gov?id=US11317289-20220426 Audio communication tokens A first device listens for a communication token across an audio bandwidth covering a transmit frequency one or more audio frequency broadcasting device. The first device receives at least one token broadcast from the one or more broadcasting devices. The first device demodulates and decodes each received token. One or more computing devices validates each decoded token. The first device determines, based on the validating, a broadcasting device of the broadcasting devices with which to establish a communications channel. The one or more computing devices generates a response token based on the token received from the determined device. The first device broadcasts the generated response token in a response band of the determined device. The determined device demodulates decodes, and validates the token broadcast from the first device. Upon determining the token broadcast from the first device valid, the first device and the determined device establish a wireless communication channel. 11317289 1. A computer-implemented method to establish a wireless communication channel between two computing devices, comprising: listening, by a first device, for communication tokens across an audio bandwidth covering a transmit frequency of two or more broadcasting devices; receiving, by the first device, communication tokens broadcast from the two or more broadcasting devices; demodulating and decoding, by the first device, each received communication token; validating, by one or more computing devices, each decoded communication token, the validating comprising determining whether a respective decoded communication token is found to have a valid signature from a trusted source; determining, by the first device, a broadcasting device from the two or more broadcasting devices with which to establish a communications channel based at least in part on the validating of a respective communication token received from the broadcasting device, wherein the determining of the broadcasting device comprises: presenting, by the first device, a validation result for each communication token from the two or more broadcasting devices to a user of the first device for selection of a particular broadcasting device corresponding to each communication token to pair with; and receiving, by the first device, from the user, a selection of the broadcasting device from the two or more broadcasting devices in response to the presenting of the validation results; generating, by one or more computing devices, a response token as a function of the communication token received from the determined broadcasting device; broadcasting, by the first device, the generated response token in an audio frequency response band of the determined broadcasting device; demodulating, decoding, and validating, by the determined broadcasting device, the generated response token broadcast from the first device; and upon determining that the generated response token broadcast from the first device is valid, establishing, by the first device and the determined broadcasting device, a wireless communication channel between the first device and the determined broadcasting device. 2. 
The computer-implemented method of claim 1, wherein: the listening comprises listening across the audio bandwidth of a plurality of the broadcasting devices, and each of at least two of the broadcasting devices respectively broadcasts in different bands. 3. The computer-implemented method of claim 1, wherein the one or more computing devices generating the response token is a token verification server, other than the first device, and the one or more computing devices are reachable from the first device via a radio frequency communications network. 4. The computer-implemented method of claim 1, wherein the one or more computing devices validating each decoded token is a token verification server, other than the first device, and the one or more computing devices are reachable from the first device via a radio frequency communications network. 5. The computer-implemented method of claim 1, wherein the wireless communication channel is a radio frequency communication channel. 6. The computer-implemented method of claim 1, wherein the wireless communication channel is an audio-based communication channel. 7. The computer-implemented method of claim 1, wherein the first device is a mobile user device and at least one of the two or more broadcasting devices is an ATM. 8. A non-transitory computer-readable storage device having instructions that, when executed by a computer, cause the computer to: listen, by a first device, for communication tokens across an audio bandwidth covering a transmit frequency of two or more broadcasting devices; receive, by the first device, at least two communication tokens broadcast from the two or more broadcasting devices; demodulate and decode, by the first device, each received communication token; validate, by one or more computing devices, each decoded communication token based at least in part on determining whether a respective decoded communication token is found to have a valid signature from a trusted source; determine, by the first device, a broadcasting device from the two or more broadcasting devices with which to establish a communications channel based at least in part on the validating of a respective communication token received from the broadcasting device, wherein the determining of the broadcasting device comprises: presenting, by the first device, a validation result for each communication token from the two or more broadcasting devices to a user of the first device for selection of a particular broadcasting device corresponding to each communication token to pair with; and receiving, by the first device, from the user, a selection of the broadcasting device from the two or more broadcasting devices in response to the presenting of the validation results; generate, by one or more computing devices, a response token as a function of the communication token received from the determined device; broadcast, by the first device, the generated response token in an audio frequency response band of the determined device; demodulate, decode, and validate, by the determined device, the generated response token broadcast from the first device; and upon determining that the generated response token broadcast from the first device valid, establish, by the first device and the determined device, a wireless communication channel between the first device and the determined device. 9. 
The non-transitory computer-readable storage device of claim 8, wherein: The listening comprises listening across the audio bandwidth of a plurality of broadcasting devices, and each of at least two of the broadcasting devices respectively broadcasts in different bands. 10. The non-transitory computer-readable storage device of claim 8, wherein the one or more computing devices generating the response token is a token verification server, other than the first device, and the one or more computing devices are reachable from the first device via a radio frequency communications network. 11. The non-transitory computer-readable storage device of claim 8, wherein the one or more computing devices validating each decoded token is a token verification server, other than the first device, and the one or more computing devices are reachable from the first device via a radio frequency communications network. 12. The non-transitory computer-readable storage device of claim 8, wherein the wireless communication channel is a radio frequency communication channel or an audio-based communication channel. 13. The non-transitory computer-readable storage device of claim 8, wherein the first device is a mobile user device and at least one of the two or more broadcasting devices is an ATM. 14. A system to establish a wireless communication channel between two computing devices, the system comprising: a storage device; and at least one processor communicatively coupled to the storage device, wherein the at least one processor executes instructions from the storage device that cause the system to: listen, by a first device, for communication tokens across an audio bandwidth covering a transmit frequency of two or more broadcasting devices; receive, by the first device, communication tokens broadcast from the two or more broadcasting devices; demodulate and decode, by the first device, each received communication token; validate, by one or more computing devices, each decoded communication token based at least in part on determining whether a respective decoded communication token is found to have a valid signature from a trusted source; determine, by the first device, a broadcasting device from the two or more broadcasting devices with which to establish a communications channel based at least in part on the validating of a respective communication token received from the broadcasting device, wherein the determining of the broadcasting device comprises: presenting, by the first device, a validation result for each communication token from the two or more broadcasting devices to a user of the first device for selection of a particular broadcasting device corresponding to each communication token to pair with; and receiving, by the first device, from the user, a selection of the broadcasting device from the two or more broadcasting devices in response to the presenting of the validation results; generate, by one or more computing devices, a response token as a function of the communication token received from the determined device; broadcast, by the first device, the generated response token in an audio frequency response band of the determined device; demodulate, decode, and validate, by the determined device, the generated response token broadcast from the first device; and upon determining that the generated response token broadcast from the first device valid, establish, by the first device and the determined device, a wireless communication channel between the first device and the determined device. 15. 
The system of claim 14, wherein: the listening comprises listening across the audio bandwidth of a plurality of broadcasting devices, and each of at least two of the broadcasting devices respectively broadcasts in different bands. 16. The system of claim 14, wherein the one or more computing devices generating a response token is a token verification server, other than the first device, and the one or more computing devices are reachable from the first device via a radio frequency communications network. 17. The system of claim 14, wherein the one or more computing devices validating each decoded token is a token verification server, other than the first device, and the one or more computing devices are reachable from the first device via a radio frequency communications network. 18. The system of claim 14, wherein the wireless communication channel is a radio frequency communication channel or an audio-based communication channel. 18 CROSS REFERENCE TO RELATED APPLICATIONS This application is based upon and claims the right of priority under 35 U.S.C. § 371 to International Application No. PCT/US2018/037776, filed on Jun. 15, 2018, which claims the benefit of U.S. Provisional Patent Application No. 62/555,434, filed Sep. 7, 2017 and entitled “Audio Communication Tokens,”. Applicant claims priority to and the benefit of each of such applications and incorporates all such applications herein by reference in their entirety. TECHNICAL FIELD The technology disclosed herein is related to using audio communication tokens for establishing a wireless communication channel between at least a first device and a second device. Examples relate to using audio communication tokens in conjunctions with mobile banking and mobile peer-to-peer payments. BACKGROUND Identity and access management (IAM) involves controlling access to resources (including e.g., computing resources and physical spaces), including access to functions of those resources. IAM addresses the need to ensure appropriate access to resources across increasingly heterogeneous technology environments and to meet increasingly rigorous compliance requirements. As of 2016, only thirteen percent of the U.S. adult population does not have a mobile phone. In most places outside the U.S., especially jurisdictions where a landline telecommunications infrastructure was not ubiquitous, more people have mobile phones than have access to landline phones. For example, India has vast non-banking population, many of whom reside in the rural areas and are cut off from access to basic financial services from a trusted source. However, as of 2012, India had nearly a billion mobile phone customers, many of whom access financial services via their mobile phones. Throughout the world, mobile computing devices are being used to access functions and services, such as financial services. IAM directed to such access via mobile computing devices is an important aspect of offering such services. SUMMARY The technology described herein includes computer-implemented methods, computer program products, and systems to control wireless access to target devices. In some examples, a first device listens for a communication token across an audio bandwidth covering a transmit frequency one or more audio frequency broadcasting device. The first device receives at least one token broadcast from the one or more broadcasting devices. The first device demodulates and decodes each received token. One or more computing devices validates each decoded token. 
The first device determines, based on the validating, a broadcasting device of the broadcasting devices with which to establish a communications channel. The one or more computing devices generates a response token based on the token received from the determined device. The first device broadcasts the generated response token in a response band of the determined device (for example, after having determined in which broadcasting band the token received from the determined device has been transmitted). The determined device demodulates, decodes, and validates the token broadcast from the first device. Upon determining the token broadcast from the first device is valid, the first device and the determined device establish a wireless communication channel. In some examples, the response band may be just a part/sub-channel of the audio bandwidth across which the first device listens for a communication token. In some examples, listening includes listening across the audio bandwidth of a plurality of broadcasting devices, and at least two broadcasting devices broadcast in different bands. In some examples, the one or more computing devices generating a response token is a token verification server other than the receiving device and the listening device. The token server is reachable from the first device via a radio frequency communications network. In some examples, the one or more computing devices validating each decoded token is a token verification server. In some examples, the wireless communication channel is a radio frequency communication channel, while in others, it is an audio-based communication channel. In some embodiments, determining includes presenting, by the first device, the validation results for each token to a user of the first device for selection of a broadcasting device corresponding to each token to pair with. In such examples, the first device receives, from the user, a selection of a broadcasting device in response to the presenting of validation results. In some embodiments, use of audio communication tokens is proposed to establish a wireless communication channel between a mobile user device and an ATM. These and other aspects, objects, features, and advantages of the example embodiments will become apparent to those having ordinary skill in the art upon consideration of the following detailed description of illustrated example embodiments. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram depicting an operating environment to establish wireless communication channels between two devices, in accordance with certain examples. FIG. 2 is a block diagram illustrating methods to establish wireless communication channels between two devices, in accordance with certain examples. FIG. 3 is a block diagram depicting a computing machine and a module, in accordance with certain examples. DETAILED DESCRIPTION OF THE EXAMPLE EMBODIMENTS Tokens can be used in IAM for authentication and validation of devices to be paired for communication over a sound/audio frequency communications channel. When an audio token is transmitted, it can be appended with a signature to indicate that it is a verified token. The signature can be a hash of the content of the token plus a secret that only the server possesses. A receiving device can be programmed to first check the signature to validate that the audio token is from a verified source before trying to take action or redirect the user.
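The signature scheme just described (a hash over the token content plus a server-held secret) can be sketched as a keyed hash. The following is a minimal, illustrative sketch only; the secret value, the 32-byte signature layout, and the function names are assumptions, not details taken from the patent.

```python
# Hedged sketch of the described signature check: sign a token payload with an
# HMAC over its content using a secret only the verification server holds, and
# validate the signature on the receiving side. All names/constants are assumed.
import hmac
import hashlib

SERVER_SECRET = b"secret-held-only-by-the-token-verification-server"  # hypothetical

def sign_token(payload: bytes, secret: bytes = SERVER_SECRET) -> bytes:
    """Append an HMAC-SHA256 signature (32 bytes) to the token payload."""
    signature = hmac.new(secret, payload, hashlib.sha256).digest()
    return payload + signature

def verify_token(token: bytes, secret: bytes = SERVER_SECRET) -> bool:
    """Recompute the signature over the payload and compare in constant time."""
    payload, signature = token[:-32], token[-32:]
    expected = hmac.new(secret, payload, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

# Example: a token whose payload is the word "apple"
token = sign_token(b"apple")
print(verify_token(token))                              # True: valid signature from a trusted source
print(verify_token(token[:-1] + bytes([token[-1] ^ 1])))  # False: tampered token fails validation
```

In the validation variations described below, a check of this kind can run either on the receiving device itself or on the token verification server 140.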
In some embodiments of the technology disclosed herein, the audio token and audio communication channel are used to initiate a radio frequency channel between devices, and a server is used to verify the signature of the audio token. By using and relying on the methods and systems described herein, the technology disclosed herein uses audio communication channels, and tokens thereof, to establish another communication channel (either audio or radio frequency) between two devices based on the audio tokens. As such, the systems and methods described herein may be employed to control the establishment of communication channels between devices and reduce the risk of unauthorized channels. Hence, the technology described herein can be used to establish secure communications between devices such as a user's mobile phone and an automated teller machine (ATM). Embodiments of the present technology include methods, systems, and computer program products to use audio communication tokens for establishing a communication channel. Example System Architectures FIG. 1 is a block diagram depicting a communications and processing operating environment 100 to establish wireless communication channels between two devices, in accordance with certain examples. While each server, system, and device shown in the architecture is represented by one instance of the server, system, or device, multiple instances of each can be used. Further, while certain aspects of operation of the present technology are presented in examples related to FIG. 1 to facilitate enablement of the claimed invention, additional features of the present technology, also facilitating enablement of the claimed invention, are disclosed elsewhere herein. As depicted in FIG. 1, the example operating environment 100 includes network devices 110, 120, 130, and 140; each of which may be configured to communicate with one another via communications network 99. In some embodiments, a user associated with a device must install an application and/or make a feature selection to obtain the benefits of the technology described herein. In some embodiments, devices 110, 120, and 130 include both a speaker 110a, 120a, 130a and a microphone 110b, 120b, 130b. Network 99 includes one or more wired or wireless telecommunications means by which network devices may exchange data. For example, the network 99 may include one or more of a local area network (LAN), a wide area network (WAN), an intranet, an Internet, a storage area network (SAN), a personal area network (PAN), a metropolitan area network (MAN), a wireless local area network (WLAN), a virtual private network (VPN), a cellular or other mobile communication network, a BLUETOOTH® wireless technology connection, a near field communication (NFC) connection, any combination thereof, and any other appropriate architecture or system that facilitates the communication of signals, data, and/or messages. Throughout the discussion of example embodiments, it should be understood that the terms “data” and “information” are used interchangeably herein to refer to text, images, audio, video, or any other form of information that can exist in a computer-based environment. Each network device 110, 120, 130, and 140 can, where so configured, include a communication module capable of transmitting and receiving data over the network 99.
For example, each network device can include a server, a desktop computer, a laptop computer, a tablet computer, a television with one or more processors embedded therein and/or coupled thereto, a smart phone, a handheld computer, a personal digital assistant (PDA), or any other wired or wireless processor-driven device. In some embodiments, network device 110 is not configured to communicate over network 99. The network connections illustrated are examples and other means of establishing a communications link between the computers and devices can be used. Moreover, those having ordinary skill in the art having the benefit of the present disclosure will appreciate that the network devices illustrated in FIG. 1 may have any of several other suitable computer system configurations. For example, computing devices 110 and 120 each may be embodied as a mobile phone or handheld computer and may not include all the components described above. In examples described herein, computing device 110 is a mobile phone with a speaker 110a and a microphone 110b; computing devices 120 and 130 are point of sale devices (which can be mobile phones also) with at least a speaker 120b, 130b; and token verification server 140 is a server (whether physical or virtual) with network 99 access. In some such example embodiments, computing devices 110, 120, and 130 have at least intermittent network 99 connectivity. In the examples described herein, sound-based peer-to-peer communications 150 are operative when at least one broadcasting device, such as device 120, is in the near vicinity, for example 2-10 feet, of at least one receiving device, such as device 110. In example embodiments, the network computing devices, and any other computing machines associated with the technology presented herein, may be any type of computing machine such as, but not limited to, those discussed in more detail with respect to FIG. 3. Furthermore, any modules associated with any of these computing machines, such as modules described herein or any other modules (scripts, web content, software, firmware, or hardware) associated with the technology presented herein may be any of the modules discussed in more detail with respect to FIG. 3. The computing machines discussed herein may communicate with one another as well as other computer machines or communication systems over one or more networks, such as network 99. The network 99 may include any type of data or communications network, including any of the network technology discussed with respect to FIG. 3. Example Processes The example methods illustrated in the figures are described hereinafter with respect to the components of the example operating environment 100. The example methods also can be performed with other systems and in other environments. The operations described with respect to any of the figures can be implemented as executable code stored on a computer or machine readable non-transitory tangible storage medium (e.g., floppy disk, hard disk, ROM, EEPROM, nonvolatile RAM, CD-ROM, etc.) that are completed based on execution of the code by a processor circuit implemented using one or more integrated circuits; the operations described herein also can be implemented as executable logic that is encoded in one or more non-transitory tangible media for execution (e.g., programmable logic arrays or devices, field programmable gate arrays, programmable array logic, application specific integrated circuits, etc.). Referring to FIG. 2, and continuing to refer to FIG. 
1 for context, methods 200 to use audio communication tokens to establish wireless communication channels between two devices are illustrated, in accordance with certain examples. In such methods 200, a first device 110 listens for a communication token across an audio bandwidth covering a transmit frequency of at least one audio frequency broadcasting device, for example device 120 and device 130—Block 210. As a continuing example, consider: Device B 120 broadcasting within a frequency band of 5.0-10.0 kHz; Device C 130 broadcasting within a frequency band of 10.0-15.0 kHz; and Device A 110 listening in a frequency band of 1.0 to 20.0 kHz. Device B 120 and Device C 130 are side-by-side ATMs and Device A 110 is a user mobile telephone. The first device 110 receives token broadcasts from the one or more broadcasting devices, for example device 120 and device 130—Block 220. In the continuing example, the token being broadcast by ATM B 120 represents the word “apple”, converted into binary format and encoded using an audio encoding scheme, such as Direct Sequence Spread Spectrum (DSSS) or Binary Phase Shift Keying (BPSK). The token broadcast by ATM C 130 represents the word “banana,” and as with the token broadcast by ATM B, is converted into binary format and encoded using an audio encoding scheme, such as DSSS or BPSK. In the continuing example, user mobile telephone A 110 receives the “apple” token from ATM B 120 in the frequency band of 5.0-10.0 kHz, and the “banana” token from ATM C 130 in the frequency band of 10.0-15.0 kHz (and thus in both cases over different frequency bands which in each case are parts/sub-channels of the bandwidth across which the user mobile telephone A 110 listens for tokens). The first device 110 demodulates and decodes each received token—Block 230. In the continuing example, user mobile telephone A 110 demodulates and decodes the audio frequency tokens received from ATM B 120 and from ATM C 130. In some examples, one or both of demodulation and decoding of the token is performed by a separate device, for example, token validation server 140 if a communications channel is available between device 110 and device 140 via network 99. One or more computing devices validates each decoded token—Block 240. In a first variation, the decoding device, user mobile telephone A 110, determines the validity of the decoded token. In a second variation, the decoding device, user mobile telephone A 110, communicates the decoded token to a server, such as token validation server 140 (either in the clear or in a secure fashion, for example, encrypted or over an otherwise secure channel) over communication network 99 when such a communication network is available. In the continuing example, using the first variation, user mobile telephone A 110 determines the validity of the decoded tokens on user mobile telephone A 110. The token from ATM B 120 is found to have an invalid signature (for example, not matching the signature originally added by the token creator), and the token from ATM C 130 is found to have a valid signature. In the continuing example, using the second variation, user mobile telephone A 110 makes a network call (encrypted) to token validation server 140 over network 99, sharing the decoded tokens, and allows the token validation server 140 to respond with the validity of the tokens.
As with the first variation, the token from ATM B 120 is found (this time by token validation server 140) to have an invalid signature, and the token from ATM C 130 is found by the token validation server 140 to have a valid signature. In the second variation, the token verification server 140 transmits (via network 99), and the user mobile telephone A 110 receives (via network 99), the validation results. The receiving device 110 determines a broadcasting device, from among the audio frequency broadcasting devices 120 and 130, with which to establish a communications channel—Block 250. In a first variation of the Block 250 process, user mobile telephone A 110 has determined that ATM B 120 may not be a trusted source based on an invalid token, but that ATM C 130 is a trusted source based on a valid token. Based on predefined logic, user mobile telephone A 110 rejects a pairing with ATM B 120 and selects pairing with ATM C 130. In a second variation of the determining process of Block 250, user mobile telephone A 110 presents both “apple” (a human readable label for the token of ATM B 120) and “banana” (a human readable label for the token of ATM C 130) to the user via the user interface of user mobile telephone A 110. User mobile telephone A 110 also indicates, via the user interface of user mobile telephone A 110, that there is a discrepancy in the trustworthiness of the “apple” channel of ATM B 120. User mobile telephone A 110 receives input from the user to pair with ATM C 130. Note that in this second variation, the user could have chosen ATM B 120. The receiving device generates a response token, which is a function of the token received from the determined device of Block 250—Block 260. In the continuing example, user mobile telephone A 110 generates a response token based on the token from ATM C 130. Where the response token is generated as a function of the received audio token, the received audio token can be an input parameter to an algorithm for determining the response token. In a simple example, the received audio token is an input to an adder adding (bitwise) a value to the received audio token or increasing signal amplitudes of the received audio token in order to generate a response token. The receiving device broadcasts (encodes, modulates, and transmits) the generated response token in the broadcasting band of the determined device—Block 270. In the continuing example, user mobile telephone A 110 transmits the generated response token in the 10.0-15.0 kHz broadcasting band of ATM C 130, for example, after having determined in Block 250 in which broadcasting band the token received from the determined device (ATM C) has been transmitted. The device receiving the broadcast response token demodulates and decodes the token, and upon determining that the token is valid, establishes the communication channel between the user device and the determined device—Block 280. In the continuing example, ATM C 130 receives the response token transmission from user mobile telephone A 110 and demodulates the transmission, decodes the token, determines that the decoded token is valid, and establishes an audio frequency sound communications channel between user mobile telephone A 110 and ATM C 130, for example, user mobile telephone A 110 is “tuned” to the ATM C 130 channel, and the devices can communicate.
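The Block 250 through Block 270 steps above (selecting the validated device, deriving a response token as a function of the received token, and replying in that device's broadcast band) can be sketched as follows. This is a hedged illustration only: the bytewise-adder response function mirrors the simple adder example in the text, but the offset constant, the band table, and the function names are assumptions.

```python
# Illustrative sketch of Blocks 250-270: keep only devices whose tokens validated,
# derive a response token as a function of the received token (here a simple
# bytewise adder, echoing the adder example above), and note the broadcast band
# in which to reply. Band edges and the offset constant are assumed values.
RESPONSE_OFFSET = 7  # hypothetical constant added to each byte of the received token

BROADCAST_BANDS_KHZ = {
    "ATM B": (5.0, 10.0),   # from the continuing example
    "ATM C": (10.0, 15.0),
}

def choose_device(validation_results):
    """First variation of Block 250: reject any device whose token failed validation."""
    valid = [device for device, ok in validation_results.items() if ok]
    return valid[0] if valid else None

def generate_response_token(received_token, offset=RESPONSE_OFFSET):
    """Block 260: response token as a function of the received token (bytewise adder)."""
    return bytes((b + offset) % 256 for b in received_token)

results = {"ATM B": False, "ATM C": True}        # validation outcome in the example
device = choose_device(results)                  # -> "ATM C"
response = generate_response_token(b"banana")    # function of ATM C's token
reply_band = BROADCAST_BANDS_KHZ[device]         # reply in the 10.0-15.0 kHz band
print(device, reply_band, response)
```

In the second variation of Block 250, a routine in place of choose_device would instead present the per-device validation results to the user and return the user's selection.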
While in the example above, the receiving device 110 chose the channel and generated the response token, in some embodiments, either or both of channel choice and response token generation can be performed off the receiving device 110, for example, by the token verification server 140. In some examples, a radio frequency communications channel is established between the respective devices in Block 280. Other Example Embodiments FIG. 3 depicts a computing machine 2000 and a module 2050 in accordance with certain example embodiments. The computing machine 2000 may correspond to any of the various computers, servers, mobile devices, embedded systems, or computing systems presented herein. The module 2050 may comprise one or more hardware or software elements configured to facilitate the computing machine 2000 in performing the various methods and processing functions presented herein. The computing machine 2000 may include various internal or attached components such as a processor 2010, system bus 2020, system memory 2030, storage media 2040, input/output interface 2060, and a network interface 2070 for communicating with a network 2080. The computing machine 2000 may be implemented as a conventional computer system, an embedded controller, a laptop, a server, a mobile device, a smartphone, a set-top box, a kiosk, a router or other network node, a vehicular information system, one or more processors associated with a television, a customized machine, any other hardware platform, or any combination or multiplicity thereof. The computing machine 2000 may be a distributed system configured to function using multiple computing machines interconnected via a data network or bus system. The processor 2010 may be configured to execute code or instructions to perform the operations and functionality described herein, manage request flow and address mappings, and to perform calculations and generate commands. The processor 2010 may be configured to monitor and control the operation of the components in the computing machine 2000. The processor 2010 may be a general purpose processor, a processor core, a multiprocessor, a reconfigurable processor, a microcontroller, a digital signal processor (“DSP”), an application specific integrated circuit (“ASIC”), a graphics processing unit (“GPU”), a field programmable gate array (“FPGA”), a programmable logic device (“PLD”), a controller, a state machine, gated logic, discrete hardware components, any other processing unit, or any combination or multiplicity thereof. The processor 2010 may be a single processing unit, multiple processing units, a single processing core, multiple processing cores, special purpose processing cores, co-processors, or any combination thereof. According to certain embodiments, the processor 2010 along with other components of the computing machine 2000 may be a virtualized computing machine executing within one or more other computing machines. The system memory 2030 may include non-volatile memories such as read-only memory (“ROM”), programmable read-only memory (“PROM”), erasable programmable read-only memory (“EPROM”), flash memory, or any other device capable of storing program instructions or data with or without applied power. The system memory 2030 may also include volatile memories such as random access memory (“RAM”), static random access memory (“SRAM”), dynamic random access memory (“DRAM”), and synchronous dynamic random access memory (“SDRAM”). Other types of RAM also may be used to implement the system memory 2030. 
The system memory 2030 may be implemented using a single memory module or multiple memory modules. While the system memory 2030 is depicted as being part of the computing machine 2000, one skilled in the art will recognize that the system memory 2030 may be separate from the computing machine 2000 without departing from the scope of the subject technology. It should also be appreciated that the system memory 2030 may include, or operate in conjunction with, a non-volatile storage device such as the storage media 2040. The storage media 2040 may include a hard disk, a floppy disk, a compact disc read only memory (“CD-ROM”), a digital versatile disc (“DVD”), a Blu-ray disc, a magnetic tape, a flash memory, other non-volatile memory device, a solid state drive (“SSD”), any magnetic storage device, any optical storage device, any electrical storage device, any semiconductor storage device, any physical-based storage device, any other data storage device, or any combination or multiplicity thereof. The storage media 2040 may store one or more operating systems, application programs and program modules such as module 2050, data, or any other information. The storage media 2040 may be part of, or connected to, the computing machine 2000. The storage media 2040 may also be part of one or more other computing machines that are in communication with the computing machine 2000 such as servers, database servers, cloud storage, network attached storage, and so forth. The module 2050 may comprise one or more hardware or software elements configured to facilitate the computing machine 2000 with performing the various methods and processing functions presented herein. The module 2050 may include one or more sequences of instructions stored as software or firmware in association with the system memory 2030, the storage media 2040, or both. The storage media 2040 may therefore represent examples of machine or computer readable media on which instructions or code may be stored for execution by the processor 2010. Machine or computer readable media may generally refer to any medium or media used to provide instructions to the processor 2010. Such machine or computer readable media associated with the module 2050 may comprise a computer software product. It should be appreciated that a computer software product comprising the module 2050 may also be associated with one or more processes or methods for delivering the module 2050 to the computing machine 2000 via the network 2080, any signal-bearing medium, or any other communication or delivery technology. The module 2050 may also comprise hardware circuits or information for configuring hardware circuits such as microcode or configuration information for an FPGA or other PLD. The input/output (“I/O”) interface 2060 may be configured to couple to one or more external devices, to receive data from the one or more external devices, and to send data to the one or more external devices. Such external devices along with the various internal devices may also be known as peripheral devices. The I/O interface 2060 may include both electrical and physical connections for operably coupling the various peripheral devices to the computing machine 2000 or the processor 2010. The I/O interface 2060 may be configured to communicate data, addresses, and control signals between the peripheral devices, the computing machine 2000, or the processor 2010. 
The I/O interface 2060 may be configured to implement any standard interface, such as small computer system interface (“SCSI”), serial-attached SCSI (“SAS”), fiber channel, peripheral component interconnect (“PCP”), PCI express (PCIe), serial bus, parallel bus, advanced technology attached (“ATA”), serial ATA (“SATA”), universal serial bus (“USB”), Thunderbolt, FireWire, various video buses, and the like. The I/O interface 2060 may be configured to implement only one interface or bus technology. Alternatively, the I/O interface 2060 may be configured to implement multiple interfaces or bus technologies. The I/O interface 2060 may be configured as part of, all of, or to operate in conjunction with, the system bus 2020. The I/O interface 2060 may include one or more buffers for buffering transmissions between one or more external devices, internal devices, the computing machine 2000, or the processor 2010. The I/O interface 2060 may couple the computing machine 2000 to various input devices including mice, touch-screens, scanners, electronic digitizers, sensors, receivers, touchpads, trackballs, cameras, microphones, keyboards, any other pointing devices, or any combinations thereof. The I/O interface 2060 may couple the computing machine 2000 to various output devices including video displays, speakers, printers, projectors, tactile feedback devices, automation control, robotic components, actuators, motors, fans, solenoids, valves, pumps, transmitters, signal emitters, lights, and so forth. The computing machine 2000 may operate in a networked environment using logical connections through the network interface 2070 to one or more other systems or computing machines across the network 2080. The network 2080 may include wide area networks (WAN), local area networks (LAN), intranets, the Internet, wireless access networks, wired networks, mobile networks, telephone networks, optical networks, or combinations thereof. The network 2080 may be packet switched, circuit switched, of any topology, and may use any communication protocol. Communication links within the network 2080 may involve various digital or an analog communication media such as fiber optic cables, free-space optics, waveguides, electrical conductors, wireless links, antennas, radio-frequency communications, and so forth. The processor 2010 may be connected to the other elements of the computing machine 2000 or the various peripherals discussed herein through the system bus 2020. It should be appreciated that the system bus 2020 may be within the processor 2010, outside the processor 2010, or both. According to certain example embodiments, any of the processor 2010, the other elements of the computing machine 2000, or the various peripherals discussed herein may be integrated into a single device such as a system on chip (“SOC”), system on package (“SOP”), or ASIC device. Embodiments may comprise a computer program that embodies the functions described and illustrated herein, wherein the computer program is implemented in a computer system that comprises instructions stored in a machine-readable medium and a processor that executes the instructions. However, it should be apparent that there could be many different ways of implementing embodiments in computer programming, and the embodiments should not be construed as limited to any one set of computer program instructions. 
Further, a skilled programmer would be able to write such a computer program to implement an embodiment of the disclosed embodiments based on the appended flow charts and associated description in the application text. Therefore, disclosure of a particular set of program code instructions is not considered necessary for an adequate understanding of how to make and use embodiments. Further, those skilled in the art will appreciate that one or more aspects of embodiments described herein may be performed by hardware, software, or a combination thereof, as may be embodied in one or more computing systems. Moreover, any reference to an act being performed by a computer should not be construed as being performed by a single computer as more than one computer may perform the act. The example embodiments described herein can be used with computer hardware and software that perform the methods and processing functions described herein. The systems, methods, and procedures described herein can be embodied in a programmable computer, computer-executable software, or digital circuitry. The software can be stored on computer-readable media. For example, computer-readable media can include a floppy disk, RAM, ROM, hard disk, removable media, flash memory, memory stick, optical media, magneto-optical media, CD-ROM, etc. Digital circuitry can include integrated circuits, gate arrays, building block logic, field programmable gate arrays (FPGA), etc. The example systems, methods, and acts described in the embodiments presented previously are illustrative, and, in alternative embodiments, certain acts can be performed in a different order, in parallel with one another, omitted entirely, and/or combined between different example embodiments, and/or certain additional acts can be performed, without departing from the scope and spirit of various embodiments. Accordingly, such alternative embodiments are included in the scope of the following claims, which are to be accorded the broadest interpretation to encompass such alternate embodiments. Although specific embodiments have been described above in detail, the description is merely for purposes of illustration. It should be appreciated, therefore, that many aspects described above are not intended as required or essential elements unless explicitly stated otherwise. Modifications of, and equivalent components or acts corresponding to, the disclosed aspects of the example embodiments, in addition to those described above, can be made by a person of ordinary skill in the art, having the benefit of the present disclosure, without departing from the spirit and scope of embodiments defined in the following claims, the scope of which is to be accorded the broadest interpretation so as to encompass such modifications and equivalent structures. 16645319 google llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Mar 15th, 2013 12:00AM https://www.uspto.gov?id=US11315134-20220426 Redemption code auto-complete for online offers and tracking Auto-detecting an electronic shopping basket and auto-completing offer redemption codes on the shopping basket webpage. When the user selects an item to add to the shopping basket, the shopping basket webpage loads. A plug-in detects a load event and communicates that information to an offer system. The offer system reviews the information, identifies the merchant, and determines offer codes applicable to a purchase. The offer system communicates the offer code to the plug-in, which auto-completes the code on the electronic shopping basket. The user completes the online transaction and the merchant provides a notification of completed transaction webpage. The plug-in detects a load event for the completed transaction webpage and communicates information regarding the load event to the offer system. The offer system reviews the information, identifies the offer code previously transmitted for auto-completion, marks the offer code as redeemed, and calculates the redemption rate of the transmitted offer code. 11315134 1. A computer-implemented method to auto-complete form fields in electronic shopping baskets for online marketplaces with offer redemption codes, comprising, by an offer computing system: receiving, from a user computing device, information regarding a first load event detected by the user computing device in association with an electronic shopping basket and an online marketplace; identifying a merchant system associated with the online marketplace based at least in part on the information regarding the first load event in association with the electronic shopping basket and the online marketplace; determining that a user has placed an item in the electronic shopping basket for the online marketplace for a purchase transaction between the user and the online marketplace by periodically detecting key words in a first URL identified by the information regarding the first load event for the electronic shopping basket, the electronic shopping basket comprising a form field for one or more offer redemption codes; in response to determining that the user has placed the item in the electronic shopping basket for the online marketplace, identifying one or more offer redemption codes that correspond to the first URL and the merchant system, the one or more offer redemption codes comprising redemption conditions that are satisfied by the purchase transaction; selecting one of the one or more offer redemption codes determined to provide the user with greatest savings to apply to the purchase transaction; providing the selected offer redemption code from the offer computing system to the user computing system for auto-completing the form field with the selected offer redemption code and applying the offer redemption code from the autocompleted form field to the purchase transaction to complete the purchase transaction with the online marketplace; receiving, by the offer computing system from the user computing device, information regarding a second load event detected by the user computing device in association with the electronic shopping basket and the online marketplace; identifying, from the information regarding the second load event, the selected offer redemption code and a second URL indicating that the user has completed the purchase transaction; and marking the applied offer redemption code as redeemed in 
response to detecting that the user has completed the purchase transaction with the online marketplace. 2. The method of claim 1, further comprising saving one or more redemption codes to an account of the user managed by the offer computing system. 3. The method of claim 2, wherein identifying the one or more offer redemption codes that correspond to the first URL comprises reviewing the redemption codes saved to the account of the user managed by the offer computing system to identify one or more offer redemption codes that correspond to an identity of the online marketplace or the item placed in the electronic shopping basket. 4. The method of claim 1, wherein the redemption conditions comprise a number of times the one or more redemption codes can be redeemed. 5. The method of claim 1, further comprising reviewing a browser history for the user, wherein the one or more offer redemption codes comprise offer redemption codes corresponding to an item from the browser history of the user and an identity of the online marketplace. 6. The method of claim 1, further comprising calculating, by the offer computing system, a redemption rate of the applied offer redemption code based at least in part on marking the applied offer redemption code auto-completed in the form field in the electronic shopping basket for the online marketplace as redeemed. 7. The method of claim 6, wherein calculating the redemption rate of the applied offer redemption code comprises comparing a number of times the applied offer redemption code was auto-completed to a number of times the offer redemption code was redeemed. 8. A non-transitory computer-readable medium having computer-executable program instructions embodied therein that when executed by an offer computing system cause the offer computing system to auto-complete form fields in an electronic shopping basket for an online marketplace with an offer redemption code that provides a user with the greatest savings, the computer-executable program instructions comprising instructions to: receive, from a user computing device, information regarding a first load event detected by the user computing device in association with an electronic shopping basket and an online marketplace; identify a merchant system associated with the online marketplace based at least in part on the information regarding the first load event in association with the electronic shopping basket and the online marketplace; determine that a user has placed an item in the electronic shopping basket for the online marketplace for a purchase transaction between the user and the online marketplace by periodically detecting key words in a first URL for the electronic shopping basket, the electronic shopping basket comprising a form field for one or more offer redemption codes; identify one or more offer redemption codes that correspond to the first URL and the merchant system; select one of the one or more offer redemption codes that provides the user with greatest savings to apply to the purchase transaction; provide the selected offer redemption code to the user computing device for auto-completing the form field with the selected offer redemption code and applying the offer redemption code from the autocompleted form field to complete the purchase transaction with the online marketplace; receive, from the user computing device, information regarding a second load event detected by the user computing device in association with the electronic shopping basket and the online marketplace; identify, 
from the information regarding the second load event, the selected offer redemption code and that the user has completed the purchase transaction with the online marketplace; and mark the applied offer redemption code field as redeemed in response to detecting that the user has completed the purchase transaction. 9. The non-transitory computer-readable medium of claim 8, wherein determining that the user has completed the purchase transaction with the online marketplace comprises detecting a second URL for a transaction completed document. 10. The non-transitory computer-readable medium of claim 8, further comprising computer-executable program instructions to review a browser history for the user, wherein the one or more offer redemption codes comprise offer redemption codes corresponding to an item from the browser history of the user and an identity of the online marketplace. 11. A system to auto-complete form fields in electronic shopping baskets for online marketplaces with offer redemption codes that provides users with the greatest savings, comprising: a storage medium; and a processor communicatively coupled to the storage medium, wherein the processor executes application code instructions that are stored in the storage medium to cause the system to: receive, from a user computing device, information regarding a first load event detected by a user computing device in association with an electronic shopping basket and an online marketplace; identify a merchant system associated with the online marketplace based at least in part on the information regarding the first load event in association with the electronic shopping basket and the online marketplace; determine that a user has placed an item in the electronic shopping basket for the online marketplace for a purchase transaction between the user and the online marketplace by periodically detecting key words in a first URL for the electronic shopping basket, the electronic shopping basket comprising a form field for one or more offer redemption codes; in response to determining that the user has placed the item in the electronic shopping basket for the online marketplace, identify one or more offer redemption codes that correspond to the first URL and the merchant system; select one of the one or more offer redemption codes that provides the user with greatest savings to apply to the purchase transaction; provide the selected offer redemption code to the user computing device for auto-completing the form field with the selected offer redemption code and applying the offer redemption code from the autocompleted form field to complete the purchase transaction with the online marketplace; receive, from the user computing device, information regarding a second load event detected by the user computing device in association with the electronic shopping basket and the online marketplace; identify, from the information regarding the second load event, the selected offer redemption code and that the user has completed the purchase transaction with the online marketplace; and mark the applied offer redemption code as redeemed in response to detecting that the user has completed the purchase transaction with the online marketplace. 12. 
The system of claim 11, wherein the processor is further configured to execute application code instructions stored in the storage medium to cause the system to review a browser history for the user, wherein the one or more offer redemption codes comprise offer redemption codes corresponding to an item from the browser history of the user and an identity of the online marketplace. 12 TECHNICAL FIELD The present disclosure relates generally to an offer redemption code system, and more particularly to methods and systems that allow for auto-detecting an electronic shopping basket and auto-completing offer redemption codes on the shopping basket webpage. BACKGROUND Merchants offer coupons or rebates as incentives for purchasing particular products. Traditionally, coupons are distributed in a paper format. A user redeems the coupon by taking the physical coupon to a merchant and purchasing a product that satisfies the terms of the coupon. Other forms of traditional coupons include rebates for purchasing particular products, wherein after purchasing a product that satisfies the terms of the rebate offer, the user fills out and returns required forms to request the rebate. More recently, merchants have offered electronic offers. Such offers may be linked to merchant loyalty cards, wherein a user enrolls in a merchant's loyalty program and receives a loyalty card. A user then associates certain discounts with the loyalty card and redeems these discounts by presenting the loyalty card (or some form of identifying information, such as a telephone number) and the method of payment to the merchant when purchasing the discounted products. With the advent of online marketplaces, users can copy/paste, click a link, or otherwise manually enter offer redemption codes when completing a transaction with an online merchant to receive a discount associated with the code. The user is required to search for codes that apply to the online merchant and/or the items in the user's electronic shopping basket. SUMMARY In certain example aspects described herein, a method for auto-completing offer redemption codes on a merchant system shopping basket webpage comprises an offer system that detects the electronic shopping basket, determines whether an offer code is applicable, and auto-completes the offer code in the field. When the user selects an item to add to the electronic shopping basket, the merchant system shopping basket webpage loads. The shopping cart module detects a load event and communicates information regarding the load event to the offer system. The offer system reviews the load event information, identifies the merchant system, and determines offer codes applicable to a purchase with the merchant system and/or the items in the electronic shopping basket. The offer system communicates the offer code(s) to the shopping cart module, and the shopping cart module auto-completes the code(s) on the electronic shopping basket. The user completes the online transaction with the merchant system and the merchant system provides a notification of completed transaction webpage. The shopping cart module detects a load event for the completed transaction webpage and communicates information regarding the load event to the offer system. The offer system reviews the load event, identifies the offer code(s) previously transmitted for auto-completion by the shopping cart module, marks the offer code(s) as redeemed, and calculates the redemption rate of the transmitted offer code(s).
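The bookkeeping at the end of this flow (marking a code as redeemed and calculating its redemption rate) can be sketched as a simple counter, in the spirit of claim 7, which compares the number of times a code was auto-completed with the number of times it was redeemed. The class, method, and code names below are illustrative assumptions.

```python
# Minimal sketch of the offer system's redemption bookkeeping: count each
# auto-completion of a code and each redemption detected from a completed-
# transaction load event, then report redemptions divided by auto-completions.
from collections import defaultdict

class OfferRedemptionTracker:
    def __init__(self):
        self.auto_completed = defaultdict(int)  # code -> times sent for auto-completion
        self.redeemed = defaultdict(int)        # code -> times seen on a completed transaction

    def record_auto_completion(self, code):
        self.auto_completed[code] += 1

    def mark_redeemed(self, code):
        self.redeemed[code] += 1

    def redemption_rate(self, code):
        """Redeemed count divided by auto-completed count (0.0 if never sent)."""
        sent = self.auto_completed[code]
        return self.redeemed[code] / sent if sent else 0.0

tracker = OfferRedemptionTracker()
tracker.record_auto_completion("SAVE10")   # hypothetical code
tracker.record_auto_completion("SAVE10")
tracker.mark_redeemed("SAVE10")
print(tracker.redemption_rate("SAVE10"))   # 0.5
```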
These and other aspects, objects, features, and advantages of the example embodiments will become apparent to those having ordinary skill in the art upon consideration of the following detailed description of illustrated example embodiments. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram depicting an offer redemption code system, in accordance with certain example embodiments. FIG. 2 is a block flow diagram depicting a method for auto-completing offer redemption codes on a shopping basket webpage, in accordance with certain example embodiments. FIG. 3 is a block flow diagram depicting a method determining applicable offer redemption codes, in accordance with certain example embodiments. FIG. 4 is a block flow diagram depicting a method for determining a redemption rate for the offer redemption codes, in accordance with certain example embodiments. FIG. 5 is a block diagram depicting a user interface displaying an offer redemption code system, in accordance with certain example embodiments. FIG. 6 is a block diagram depicting a computer machine and module, in accordance with certain example embodiments. DETAILED DESCRIPTION OF THE EXAMPLE EMBODIMENTS Overview The example embodiments described herein provide computer-implemented techniques for auto-completing offer redemption codes on a merchant system shopping basket webpage. In an example embodiment, a user has browsed, via a user device, items on a merchant system online marketplace and indicated a desire to place an item in an electronic “shopping basket.” The electronic shopping basket comprises a field where the user may enter a code to redeem an electronic offer for the item and/or for the overall purchase with the merchant system. A browser plug-in installed on the user device detects the electronic shopping basket, determines whether an offer code is applicable, and auto-completes the offer code in the field. In some embodiments, the user must install the browser plug in and/or otherwise indicate that they would like to take advantage of the techniques described herein before the techniques will be provided to the user. The merchant system registers with the offer system and provides data that allows an offer system shopping cart module to detect when a user has placed an item in a merchant system shopping basket, for example, a uniform resource locator (“URL”) of the merchant system's shopping basket webpage. The merchant system also provides data that allows the offer system shopping cart module to detect when the user completes a transaction with the merchant system, for example, a URL indicating that the user is viewing the merchant system's notification of completed transaction webpage. In some embodiments, the user can place multiple items in an electronic shopping cart (or otherwise select multiple items for purchase) and only later view the shopping cart. The offer system may detect the placement of each item into the shopping cart and determine associated offers upon the detection of each. The offer system may also, or in the alternative, detect that the items are in the shopping cart (or otherwise selected for purchase) when the shopping cart is viewed (perhaps by detecting a URL that indicates that the user is viewing the shopping cart). 
In an alternative example embodiment, the offer system shopping cart module is capable of determining when the user has selected an item for purchase and/or completed the transaction without requiring the merchant system to register, for example, by detecting key words in the URL or key words on the merchant system webpage. The merchant system may provide offer codes to the offer system and redemption rules for the offer codes. In an alternative example embodiment, the offer system scans and gathers offer codes or obtains offer codes for products from the product manufacturer. The user browses the merchant shopping website and selects an item to add to the electronic shopping basket. The merchant system shopping basket webpage loads. The shopping cart module detects a load event and communicates information regarding the load event to an offer code module. The offer code module reviews the load event information, identifies the merchant system, and determines offer codes applicable to a purchase with the merchant system. The offer code module may identify items in the electronic shopping basket and determine offer codes applicable to the items in the electronic shopping basket. In an alternative example embodiment, the offer code module reviews the redemption terms of the offer codes and provides recommendations for offer codes that may be applicable if the user changes the items in the electronic shopping basket or includes additional items in the electronic shopping basket. If the merchant system does not permit redemption of more than one offer code, the offer code module determines the offer code that provides the greatest savings. The offer code module communicates the offer code(s) to the shopping cart module, and the shopping cart module auto-completes the code(s) on the merchant system shopping basket. The user completes the online transaction with the merchant system. Upon completing the checkout process, the merchant system provides a notification of completed transaction webpage. The shopping cart module detects a load event for the completed transaction webpage and communicates information regarding the load event to an offer redemption module. The offer redemption module reviews the load event, identifies the offer code(s) previously transmitted for auto-completion by the shopping cart module, marks the offer code(s) as redeemed, and calculates the redemption rate of the transmitted offer code(s). The inventive functionality of the invention will be explained in more detail in the following description, read in conjunction with the figures illustrating the program flow. Example System Architectures Turning now to the drawings, in which like numerals indicate like (but not necessarily identical) elements throughout the figures, example embodiments are described in detail. FIG. 1 is a block diagram depicting an offer redemption code system, in accordance with certain example embodiments. As depicted in FIG. 1, the exemplary operating environment 100 includes a merchant system 110, a user device 120, and an offer system 130 that are configured to communicate with one another via one or more networks 140. In an alternative example embodiment, two or more of these systems, or the components thereof (including systems 110, 120, and 130), are integrated into the same system. Each network 140 includes a wired or wireless telecommunication means by which network systems (including systems 110, 120, and 130) can communicate and exchange data.
For example, each network 140 can be implemented as, or may be a part of, a storage area network (SAN), personal area network (PAN), a metropolitan area network (MAN), a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a virtual private network (VPN), an intranet, an Internet, a mobile telephone network, a card network, Bluetooth, near field communication network (NFC), or any combination thereof, or any other appropriate architecture or system that facilitates the communication of signals, data, and/or messages (generally referred to as data). Throughout this specification, it should be understood that the terms “data” and “information” are used interchangeably herein to refer to text, images, audio, video, or any other form of information that can exist in a computer-based environment. In an example embodiment, each network system (including systems 110, 120, and 130) includes a device having a communication module capable of transmitting and receiving data over the network 140. For example, each network system (including systems 110, 120, and 130) may comprise a server, personal computer, mobile device (for example, notebook computer, tablet computer, netbook computer, personal digital assistant (PDA), video game device, GPS locator device, cellular telephone, Smartphone, or other mobile device), a television with one or more processors embedded therein and/or coupled thereto, or other appropriate technology that includes or is coupled to a web browser or other application for communicating via the network 140. In the example embodiment depicted in FIG. 1, the network systems (including systems 110, 120, and 130) are operated by merchants (not shown), users 101 or consumers, and an offer system operator (not shown), respectively. The merchant system 110 comprises at least one point of sale (“POS”) terminal 113 that is capable of processing a purchase transaction initiated by a user 101. In an example embodiment, the merchant operates an online store and the user 101 indicates a desire to make a purchase by clicking a link or “checkout” button on a website. In an alternative example embodiment, the user device 120 is configured to perform the functions of the POS terminal 113. In this example, the user 101 scans and/or pays for the transaction via the user device 120 without interacting with the POS terminal 113. In an example embodiment, the user presents a form of payment and a loyalty program identification (for example, a loyalty program card, phone number, loyalty program number, biometric identification, or some other form of identifying information) when the transaction is processed. In an alternative example embodiment, the user presents a merchant system 110 account identifier when the transaction is processed. In an example embodiment, the merchant system 110 is capable of communicating with the user device 120 via an application 115. The application 115 may be an integrated part of the POS terminal 113 or a standalone hardware device (not shown), in accordance with alternative example embodiments. In an example embodiment, the user device 120 may be a personal computer, mobile device (for example, notebook, computer, tablet computer, netbook computer, personal digital assistant (“PDA”), video game device, GPS locator device, cellular telephone, Smartphone or other mobile device), television, or other appropriate technology that includes or is coupled to a web server, or other suitable application for interacting with web page files. 
The user 101 can use the user device 120 to access a merchant system online marketplace or webpage, browse items on the marketplace, and indicate a desire to place an item in an electronic “shopping basket” via a user interface 121 and an application 125. The application 125 is a program, function, routine, applet or similar entity that exists on and performs its operations on the user device 120. For example, the application 125 may be one or more of a shopping application, merchant system 110 application, an Internet browser, a digital wallet application, a loyalty card application, another value-added application, a user interface 121 application, or other suitable application operating on the user device 120. In an example embodiment, the data storage unit 127 and application 125 may be implemented in a secure element or other secure memory (not shown) on the user device 120. In an alternative example embodiment, the data storage unit 127 may be a separate memory unit resident on the user device 120. An example data storage unit 127 enables storage of user contact details for retrieval of a user offer system 130 account. In an example embodiment, the data storage unit 127 can include any local or remote data storage structure accessible to the user device 120 suitable for storing information. In an example embodiment, the data storage unit 127 stores encrypted information, such as HTML5 local storage. An example user device 120 comprises a shopping basket module 123. An example shopping basket module 123 is a browser plug-in corresponding to the offer system 130. The shopping basket module 123 may be an integrated part of the application 125, an integrated part of the offer system 130, or a standalone hardware device (not shown), in accordance with alternative example embodiments. The user 101 installs the shopping basket module 123 on the user device 120 to facilitate the auto-detection of the merchant system's 110 shopping basket webpage, communicate load event information to the offer system, auto-complete offer redemption codes communicated by the offer system 130, and auto-detect the merchant system's 110 notification of completed transaction webpage. In an example embodiment, the application 125 communicates with the shopping basket module 123. For example, the application 125 provides data to the shopping basket module 123 to allow for the detection of specific load events, such as the loading of the merchant system's 110 shopping basket webpage and notification of completed transaction webpage. In an example embodiment, the shopping basket module 123 comprises application programming interfaces (“APIs”) that allow the module 123 to interact and communicate with the application 125. An example shopping basket module 123 communicates with the offer system 130. For example, the shopping basket module 123 transmits load event information to the offer system 130 when the merchant system 110 shopping basket is detected. An example offer system 130 comprises an offer code module 133, an offer redemption module 135, and a data storage unit 137. The offer code module 133 receives the load event information for the merchant system 110 shopping basket from the shopping basket module 123 and retrieves applicable offer redemption codes. An example offer code module 133 receives offer redemption codes from merchant systems 110. In an alternative example embodiment, the offer code module 133 scans the Internet and retrieves offer redemption codes. 
In another alternative example embodiment, the offer code module 133 obtains offer redemption codes from users 101 and applications 125. The offer redemption codes are saved in the data storage unit 137. In an example embodiment, the data storage unit 137 can include any local or remote data storage structure accessible to the offer system 130 suitable for storing information. In an example embodiment, the data storage unit 137 stores encrypted information, such as HTML5 local storage. An example shopping basket module 123 receives the offer redemption codes transmitted by the offer code module 133 and auto-completes the codes on the merchant system 110 shopping basket webpage. An example shopping basket module 123 also communicates with the offer redemption module 135. For example, the shopping basket module 123 transmits load event information to the offer system 130 when the merchant system's 110 notification of completed transaction is detected. The offer redemption module 135 receives the load event information, determines which offer redemption codes were redeemed during the transaction, and determines a rate of redemption for the offer redemption codes. FIG. 5 is a block diagram depicting an example user interface 121 displaying an offer redemption code system, in accordance with certain example embodiments. As depicted in FIG. 5, the exemplary operating environment 500 includes a user device 120 and a user interface 121. Once the user 101 selects an item to be placed in the electronic shopping basket, the shopping basket module 123 detects the electronic shopping basket and auto-completes the offer redemption code. As depicted in FIG. 5, the user interface 121 displays the auto-completed offer redemption code in the electronic shopping basket. The components of the example operating environment 100 and 500 are described hereinafter with reference to the example methods illustrated in FIGS. 2-4. Example System Processes FIG. 2 is a block flow diagram depicting a method for auto-completing offer redemption codes on a shopping basket webpage, in accordance with certain example embodiments. The method 200 is described with reference to the components illustrated in FIG. 1. In block 210, the merchant system 110 registers with the offer system 130. In an example embodiment, the merchant system 110 provides the URL of the system's shopping basket webpage and the system's notification of completed transaction webpage. In an alternative example embodiment, the merchant system 110 also provides offer redemption codes when registering or at any time thereafter. In another alternative example embodiment, the merchant system provides a field name for an offer redemption code field on the system's shopping basket webpage. In an example embodiment, the offer system 130 communicates the merchant system's 110 registration information, or a portion thereof, to the shopping basket module 123. In an alternative example embodiment, the merchant system 110 is not required to register with the offer system 130. In this embodiment, the shopping basket module 123 reviews load events to determine when the merchant system's 110 shopping basket webpage loads. For example, the shopping basket module 123 scans for key words in the URL when a webpage is loaded, such as “checkout,” “basket,” “cart,” or other words or phrases indicating an item has been added to an electronic shopping basket. 
In an alternative example, the shopping basket module 123 scans for form fields when the webpage is loaded, such as those where redemption codes may be entered. In block 220, the user 101 installs the shopping cart module 123 on the user device 120. In an example embodiment, the user 101 may install the shopping cart module 123 at any time prior to the selection of an item to be placed in the user's 101 electronic shopping basket or the user 101 indicating a desire to check out. In an alternative example embodiment, the shopping cart module 123 is a component of an application 125 installed on the user device 120. In an example embodiment, the user 101 is prompted to provide offer system 130 account information when the shopping cart module 123 is installed. If the user 101 does not have an offer system 130 account, the user 101 is prompted to create an offer system 130 account. In an example embodiment, the user 101 may create the offer system 130 account at any time prior to accessing the shopping cart module 123. In an example embodiment, the user 101 accesses the offer system 130 via a website and a network 140. In an example embodiment, the user 101 submits registration information to the offer system 130, including, but not limited to, name, address, phone number, e-mail address, and information for one or more registered financial accounts, including bank account debit cards, credit cards, a loyalty rewards account card, or other type of account that can be used to make a purchase (for example, card type, card number, expiration date, security code, and billing address). In an example embodiment, the user's 101 offer system 130 account information is saved in the data storage unit 137 and is accessible to the offer code module 133 and offer redemption module 135. In an example embodiment, the offer system 130 account is a digital wallet account maintained by the offer system 130 or a third party system. In an alternative example embodiment, the user 101 may use a smart phone application to register with the offer system 130. In yet another alternative example embodiment, the user 101 accesses the offer system 130 via a smart phone application 125. In block 230, the user 101 browses the merchant system 110 online marketplace. In an example embodiment, the merchant system 110 online marketplace is an online shopping website wherein the user 101 can select and purchase items from the merchant system 110. In block 240, the user 101 selects an item from the merchant system 110. In an example embodiment, the user 101 indicates a desire to place the item in an electronic shopping basket. In an alternative example embodiment, the user 101 has previously selected one or more items to be placed in the electronic shopping basket and has selected an additional item to be placed in the electronic shopping basket. In another alternative example embodiment, the user 101 has previously selected one or more items to be placed in the electronic shopping basket and has indicated a desire to complete the purchase by clicking a “checkout” button in the electronic shopping basket. In block 245, the shopping cart module 123 determines whether the user 101 is logged into the offer system 130. In an example embodiment, the user 101 has previously logged into, or is otherwise automatically logged into, the offer system 130. 
In an alternative example embodiment, the user's 101 login credentials are shared across other accounts (for example, social networking websites and user device 120 accounts) and the user 101 is automatically logged into the offer system 130 account using the shared login credentials. If the user 101 is not logged into the offer system 130, the method 200 proceeds to block 247. In block 247, the user 101 is prompted to log into the offer system 130. Returning to block 245, if the user 101 is logged into the offer system 130, the method 200 proceeds to block 250. In block 250, the offer system 130 determines which offer redemption codes are applicable to the electronic shopping basket. In an example embodiment, applicability of the offer redemption codes is determined by the items in the electronic shopping basket or by the identity of the merchant system 110. In an alternative example embodiment, additional rules and conditions apply, such as a total amount of the electronic shopping basket, a subtotal of particular items within the electronic shopping basket, an identity of the user 101, and a financial account used to pay for the items in the electronic shopping basket. The method for determining applicable offer codes is described in more detail hereinafter with reference to the methods described in FIG. 3. FIG. 3 is a block flow diagram depicting a method for determining applicable offer redemption codes, in accordance with certain example embodiments, as referenced in block 250. The method 250 is described with reference to the components illustrated in FIG. 1. In block 310, the merchant system 110 shopping basket webpage loads. In an example embodiment, the merchant system 110 shopping basket webpage comprises one or more items the user 101 has selected to purchase. In an alternative example embodiment, the merchant system 110 shopping basket webpage is displayed after the user 101 has indicated a desire to checkout or otherwise complete the purchase of the selected items. In an example embodiment, the merchant system 110 shopping basket webpage comprises a form field that allows entry of one or more offer redemption codes. An example offer redemption code comprises a text code, numeric code, or some combination thereof, that when entered provides for the redemption of an offer. For example, the offer redemption code “25OFF” may provide a 25% discount on the total price of the shopping basket or $25 off the total price, and “FREESHIP” may provide free shipping for the items in the shopping basket when purchased. In block 320, the shopping basket module 123 monitors for a shopping basket load event. In an example embodiment, the shopping basket module 123 receives an indication whenever a webpage is loaded in a browser on the user device 120. In an alternative example embodiment, the shopping basket module 123 continuously or periodically monitors the browser for key words in the URL to determine when a shopping basket load event occurs. In block 330, the shopping basket module 123 communicates the load event information to the offer system 130. In an example embodiment, load event information is communicated to the offer code module 133 resident on the offer system 130. 
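The keyword-based load event detection and reporting just described (blocks 310 through 330) can be illustrated with a short sketch. The TypeScript below is a minimal, hypothetical content script for a browser plug-in; the keyword list, the LoadEventInfo shape, and the offer-system endpoint URL are assumptions made for illustration and are not taken from the patent.

// Minimal sketch of the shopping basket module's load event detection and reporting.
// All names and the endpoint URL are illustrative assumptions.

interface LoadEventInfo {
  pageUrl: string;                                      // URL of the page that triggered the event
  eventType: "shopping_basket" | "completed_transaction";
  basketItems?: { sku: string; description: string; unitPrice: number; quantity: number }[];
  basketTotal?: number;                                 // total price of the basket, if detectable
}

const BASKET_KEYWORDS = ["checkout", "basket", "cart"]; // key words scanned for in the URL
const OFFER_SYSTEM_ENDPOINT = "https://offer-system.example.com/load-events"; // hypothetical

function isBasketUrl(url: string): boolean {
  const lowered = url.toLowerCase();
  return BASKET_KEYWORDS.some((word) => lowered.includes(word));
}

async function reportLoadEvent(info: LoadEventInfo): Promise<void> {
  // Communicate the load event information to the offer code module (block 330).
  await fetch(OFFER_SYSTEM_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(info),
  });
}

// Run on every page load, as a content script in a browser plug-in might.
window.addEventListener("load", () => {
  const url = window.location.href;
  if (isBasketUrl(url)) {
    void reportLoadEvent({ pageUrl: url, eventType: "shopping_basket" });
  }
});

The same listener could test for completed-transaction keywords and report that event type, which is how the module would feed the redemption tracking described later.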
In an example embodiment, the load event information comprises one or more of an identity of the merchant system 110, an identity of the user 101, a description of the items in the electronic shopping basket, a total price of the items in the electronic shopping basket, and a price associated with the items in the electronic shopping basket. In an alternative example embodiment, the shopping cart module 123 communicates these details in response to a request by the offer system 130 or in multiple communications to the offer system 130. In block 335, the offer code module 133 receives the load event information. In block 340, the offer code module 133 identifies the merchant system 110 based on the load event information received from the shopping basket module 123. In an example embodiment, the merchant system 110 has previously registered with the offer system 130 in block 210 and the offer system 130 can identify the merchant system 110 from the URL of the shopping basket webpage. In an alternative example embodiment, the merchant system 110 is not registered with the offer system 130 and the merchant system 110 is identified by keywords in the URL of the shopping basket webpage. For example, “merchantsystemA/shoppingbasket/” would identify Merchant System A. In an alternative example embodiment, the merchant system 110 is identified from prior load events captured by the shopping basket module 123. For example, the shopping basket module 123 may capture the load event for the merchant system's 110 main shopping page and then a load event for the electronic shopping basket. The shopping basket module 123 may transmit the merchant system 110 identification information captured from the main shopping page with the load event information for the electronic shopping basket. In an alternative example embodiment, the shopping basket module 123 may use smart logic to otherwise determine the identity of the merchant system 110 and transmit the identity with the load event information. In block 350, the offer code module 133 determines if offer redemption codes are applicable to the merchant system 110, as identified in block 340. In an example embodiment, the offer code module 133 cross-references the identity of the merchant system 110 determined in block 340 with the offer redemption codes saved in the data storage unit 137. In an example embodiment, one or more of the offer redemption codes is applicable to a transaction with a specific merchant system 110, as defined by the terms and conditions of the offer redemption code. The offer code module 133 reviews the terms and conditions of the offer redemption codes and determines whether the codes are applicable to a transaction with the merchant system 110. If the offer code module 133 identifies offer redemption codes applicable to a transaction with the merchant system 110, the method 250 proceeds to block 355 in FIG. 3. In block 355, the offer code module 133 determines which of the offer redemption codes applicable to a transaction with the merchant system 110 can be applied. In an example embodiment, each offer redemption code will have one or more structured rules or conditions that the offer system 130 can understand without human intervention. 
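As a rough sketch of how the offer code module might resolve the merchant from the shopping basket URL and then narrow the stored codes to that merchant (blocks 340 through 350), the TypeScript below uses an in-memory registry as a stand-in for data kept in the data storage unit 137; the shapes, entries, and function names are assumptions of this sketch.

// URL keyword -> merchant identifier (illustrative entries only).
const MERCHANT_REGISTRY: Record<string, string> = {
  merchantsystema: "merchant-system-a",
};

function identifyMerchant(pageUrl: string): string | undefined {
  // Identify the merchant by keywords in the URL, e.g. "merchantsystemA/shoppingbasket/".
  const lowered = pageUrl.toLowerCase();
  const key = Object.keys(MERCHANT_REGISTRY).find((keyword) => lowered.includes(keyword));
  return key ? MERCHANT_REGISTRY[key] : undefined;
}

The structured rules referred to at the end of the paragraph above are enumerated next, and the sketch following that list shows one way they could be evaluated.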
These rules include, but are not limited to, a purchase threshold (for example, receive $10 back on a single purchase of more than $50 from the merchant system 110), a minimum number of purchases from the merchant (for example, receive $10 back on your tenth purchase from the merchant system 110), a time restriction (for example, receive $10 back for a purchase on Wednesday), a product or category restriction (for example, receive $10 back when you purchase a specific product or a product from a specific department), an expiration date, a product limitation, a user 101 limitation, and a limited number of redemptions. In an example embodiment, these rules are set by the merchant system 110 at the time the redemption offer is created and reviewed by the offer system 130 before the offer redemption is applied. In an example embodiment, the offer code module 133 reviews the terms of the offer redemption code and the load event information to determine which of the offer redemption codes are applicable to the electronic shopping basket. In an alternative example embodiment, the offer code module 133 reviews the redemption terms of the offer codes and provides recommendations for offer redemption codes that may be applicable if the user changes the items in the electronic shopping basket or includes additional items in the electronic shopping basket. From block 355 in FIG. 3, the method 250 proceeds to block 360 in FIG. 3. Returning to block 350 in FIG. 3, if the offer code module 133 does not identify offer redemption codes applicable to a transaction with the merchant system 110, the method 250 proceeds to block 360 in FIG. 3. In block 360, the offer code module 133 identifies items in the electronic shopping basket. In an example embodiment, the offer code module 133 identifies items in the electronic shopping basket from the load event information transmitted by the shopping basket module 123. In an alternative example embodiment, the shopping basket module 123 may capture the load event for a particular item on the merchant system's 110 online marketplace and then a load event for the electronic shopping basket. The shopping basket module 123 may transmit the product identification information captured from the item page on the online marketplace page with the load event information for the electronic shopping basket. In an alternative example embodiment, the shopping basket module 123 reviews the user's 101 browser history or the load events for items viewed by the user 101 and transmits information regarding the items browsed with the load event information. The offer code module 133 may then determine the identity of the items based on the browser history. In an alternative example embodiment, the shopping basket module 123 may use smart logic to otherwise determine the identity of the items and transmit the identity with the load event information. In block 370, the offer code module 133 determines if offer redemption codes are applicable to the items in the electronic shopping basket, as identified in block 360. In an example embodiment, the offer code module 133 cross-references the identity of the items determined in block 360 with the offer redemption codes saved in the data storage unit 137. In an example embodiment, one or more of the offer redemption codes is applicable to a specific item, as defined by the terms and conditions of the offer redemption code. 
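The structured rules enumerated above lend themselves to a simple machine-checkable representation. The sketch below models a few of them (purchase threshold, product restriction, expiration date, and a redemption cap) and tests whether a code applies to the current basket; the OfferCode and RedemptionRules shapes, the field names, and the redemptionCount argument are assumptions of this sketch, not terms defined by the patent.

// Illustrative shapes for an offer redemption code and its structured terms.
interface BasketItem {
  sku: string;
  description: string;
  unitPrice: number;
  quantity: number;
}

interface RedemptionRules {
  minBasketTotal?: number;   // purchase threshold
  eligibleSkus?: string[];   // product or category restriction
  expiresAt?: string;        // ISO-8601 expiration date
  maxRedemptions?: number;   // limited number of redemptions
}

interface OfferCode {
  code: string;              // e.g. "25OFF" or "FREESHIP"
  merchantId: string;        // merchant system the code applies to
  estimatedSavings: number;  // used later to rank codes by savings
  rules: RedemptionRules;
}

function isCodeApplicable(
  offer: OfferCode,
  basket: BasketItem[],
  redemptionCount: number,           // how many times this code has been redeemed so far
  now: Date = new Date(),
): boolean {
  const { rules } = offer;
  const total = basket.reduce((sum, item) => sum + item.unitPrice * item.quantity, 0);

  if (rules.minBasketTotal !== undefined && total < rules.minBasketTotal) return false;
  if (rules.expiresAt !== undefined && now > new Date(rules.expiresAt)) return false;
  if (rules.maxRedemptions !== undefined && redemptionCount >= rules.maxRedemptions) return false;

  const skus = rules.eligibleSkus;
  if (skus !== undefined && !basket.some((item) => skus.includes(item.sku))) return false;

  return true;
}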
The offer code module 133 reviews the terms and conditions of the offer redemption codes and determines whether the codes are applicable to an item in the electronic shopping basket. If the offer code module 133 identifies offer redemption codes applicable to one or more items in the electronic shopping basket, the method 250 proceeds to block 375 in FIG. 3. In block 375, the offer code module 133 determines which of the offer redemption codes applicable to the items in the electronic shopping basket can be applied. In an example embodiment, each offer redemption code will have one or more structured rules or conditions that the offer system 130 can understand without human intervention. These rules include, but are not limited to, a purchase threshold (for example, receive $10 back on a single purchase of more than $50 from the merchant system 110), a minimum number of purchases from the merchant (for example, receive $10 back on your tenth purchase from the merchant system 110), a time restriction (for example, receive $10 back for a purchase on Wednesday), a product or category restriction (for example, receive $10 back when you purchase a specific product or a product from a specific department), an expiration date, a product limitation, a user 101 limitation, and a limited number of redemptions. In an example embodiment, these rules are set by the merchant system 110 at the time the redemption offer is created and reviewed by the offer system 130 before the offer redemption is applied. In an alternative example embodiment, these rules are set by a product manufacturer or other third party system. In an example embodiment, the offer code module 133 reviews the terms of the offer redemption code and the load event information to determine which of the offer redemption codes are applicable to the electronic shopping basket. In an alternative example embodiment, the offer code module 133 reviews the redemption terms of the offer codes and provides recommendations for offer redemption codes that may be applicable if the user changes the items in the electronic shopping basket or includes additional items in the electronic shopping basket. Returning to block 370 in FIG. 3, if the offer code module 133 does not identify offer redemption codes applicable to the items in the electronic shopping basket, the method 250 proceeds to block 260 in FIG. 2. Returning to block 375 in FIG. 3, the method 250 proceeds to block 380 in FIG. 3. In block 380, the offer code module 133 determines whether the merchant system 110 electronic shopping basket permits multiple offer redemption codes. In an example embodiment, the merchant system 110 previously registered with the offer system 130 and provided form field information that allows the offer system to determine the number of offer redemption codes accepted by the merchant system 110. In an alternative example embodiment, the shopping basket module 123 determines the number of offer redemption codes permitted. If multiple offer redemption codes are not permitted, the method 250 proceeds to block 390 in FIG. 3. In block 390, the offer code module 133 determines which offer redemption code provides the greatest savings for the user 101. In an example embodiment, the offer code module 133 ranks the offer redemption codes in order of the greatest savings provided to the user 101. In this embodiment, the offer code module 133 transmits only the offer redemption code providing the greatest savings to the shopping basket module 123. 
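When the merchant's basket page accepts only a single code (block 390), ranking by savings reduces to keeping the top entry. A minimal sketch, assuming each applicable code carries the estimatedSavings figure introduced in the previous sketch:

// Rank applicable codes by estimated savings; keep only the best one when the
// merchant system does not permit multiple offer redemption codes.
function selectCodes(applicable: OfferCode[], allowMultiple: boolean): OfferCode[] {
  const ranked = [...applicable].sort((a, b) => b.estimatedSavings - a.estimatedSavings);
  return allowMultiple ? ranked : ranked.slice(0, 1);
}

Returning the full ranking rather than discarding the rest also supports the alternative embodiment described next, where more than one ranked code is transmitted.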
In an alternative example embodiment, the offer code module 133 transmits more than one offer redemption code to the shopping basket module 123 and identifies the amount of savings provided to the user 101 or otherwise ranks the offer redemption codes in an order of savings provided. From block 390 in FIG. 3, the method 250 proceeds to block 260 in FIG. 2. Returning to block 380 in FIG. 3, if multiple offer redemption codes are permitted, the method 250 proceeds to block 260 in FIG. 2. Returning to FIG. 2, in block 260, the offer code module 133 transmits the offer redemption code(s) to the shopping basket module 123. In an example embodiment, the offer code module 133 transmits the offer redemption code that provides the greatest savings to the user 101. In an alternative example embodiment, the offer code module 133 transmits multiple offer redemption codes and provides an indication of the best savings for the user 101. In another alternative example embodiment, the shopping basket module 123 determines which offer redemption code provides the greatest savings to the user 101. In block 256, the offer code module 133 marks the offer redemption codes communicated to the shopping basket module 123 as transmitted. In an example embodiment, the offer code module 133 saves the transmitted offer redemption codes in the user account maintained by the offer system 130. In block 270, the shopping basket module 123 receives the offer redemption code(s). In block 275, the shopping basket module 123 auto-completes the offer redemption code(s) on the merchant system 110 shopping basket webpage. In an example embodiment, the shopping basket module 123 determines the form field where offer redemption codes may be entered and auto-fills the code in the form field. In an alternative example embodiment, the merchant system 110 has previously registered with the offer system 130 and provided the form field information. In this embodiment, the offer system 130 provides a map of the form field information to the shopping basket module 123. In block 280, the user 101 completes the transaction with the merchant system 110. In an example embodiment, the user 101 provides financial account information to the merchant system 110 to pay for the transaction. In an alternative example embodiment, the user 101 uses a digital wallet or third party system to complete the financial transaction. In an example embodiment, the user 101 account maintained by the offer system 130 comprises a digital wallet and the financial information required to complete the transaction is transmitted to the shopping basket module 123 in block 260 and auto-completed by the shopping basket module 123 in block 275. In block 290, the offer system 130 determines the redemption rate for the offer redemption codes transmitted. In an example embodiment, the merchant system 110 provides notification that the transaction was completed. In an example embodiment, the notification comprises a notification of completed transaction webpage. The shopping basket module 123 detects the load event for the notification of completed transaction webpage and communicates the load event information to the offer system 130. The method for determining the redemption rate for the offer redemption codes is described in more detail hereinafter with reference to the methods described in FIG. 4. 
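The auto-completion step described above (block 275) amounts to locating the offer code field on the merchant page and setting its value. The sketch below is a hypothetical content-script helper; the CSS selectors stand in either for the form field map a registered merchant supplies or for a fallback guess, and are not taken from the patent.

// Auto-complete an offer redemption code into the merchant page's form field.
function autoCompleteOfferCode(code: string, fieldSelector?: string): boolean {
  // Use the merchant-provided field mapping when available; otherwise guess.
  const selector = fieldSelector ?? "input[name*='coupon'], input[name*='promo']";
  const field = document.querySelector<HTMLInputElement>(selector);
  if (!field) return false;                                   // no recognizable offer code field
  field.value = code;
  field.dispatchEvent(new Event("input", { bubbles: true })); // let the page's own scripts react
  return true;
}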
FIG. 4 is a block flow diagram depicting a method for determining the redemption rate for the offer redemption codes, in accordance with certain example embodiments, as referenced in block 290. The method 290 is described with reference to the components illustrated in FIG. 1. In block 410, the merchant system 110 notification of completed transaction webpage loads. In an example embodiment, the notification of a completed transaction webpage comprises an indication that the user 101 has completed the transaction with the merchant system 110. In block 420, the shopping basket module 123 monitors for a notification of completed transaction load event. In an example embodiment, the shopping basket module 123 receives an indication whenever a webpage is loaded in a browser on the user device 120. In an alternative example embodiment, the shopping basket module 123 continuously or periodically monitors the browser for key words in the URL to determine a notification of completed transaction load event. In an alternative example embodiment, the shopping basket module 123 monitors the load events after auto-completing the offer redemption codes on the electronic shopping basket. In block 430, the shopping basket module 123 communicates the load event information to the offer system 130. In an example embodiment, load event information is communicated to the offer redemption module 135 resident on the offer system 130. In an example embodiment, the load event information comprises one or more of an identity of the merchant system 110, an identity of the user 101, and an identity of the offer redemption codes auto-completed on the electronic shopping basket. In an alternative example embodiment, the shopping cart module 123 communicates these details in response to a request by the offer system 130 or in multiple communications to the offer system 130. In block 440, the offer redemption module 135 receives the load event information. In block 450, the offer redemption module 135 reviews the load event information. In block 460, the offer redemption module 135 identifies the offer redemption codes transmitted for auto-completion on the merchant system 110 shopping basket webpage. In an example embodiment, the offer redemption module 135 identifies the user 101 account from the information communicated by the shopping basket module 123 in block 430 and retrieves the offer redemption codes saved in the user's 101 offer system account. In an alternative example embodiment, the offer redemption module 135 identifies the merchant system 110 from the load event information and determines which offer redemption codes correspond to the merchant system 110. In block 470, the offer redemption module 135 marks the offer redemption codes as redeemed. In an example embodiment, the offer redemption module 135 determines that the offer redemption codes previously transmitted were redeemed based on the notification of completed transaction load event. In an example embodiment, the offer redemption module 135 maintains a record of the total number of times each offer redemption code has been transmitted and redeemed. In block 480, the offer redemption module 135 calculates the redemption rate of the offer redemption codes communicated to the shopping basket module 123 in block 260 in FIG. 2. 
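Blocks 460 through 480 reduce to simple per-code bookkeeping: count transmissions when codes are sent for auto-completion, count redemptions when a completed-transaction load event arrives, and derive the rate from the two counts (quantified in the next paragraph). A minimal in-memory sketch, with the stats map standing in for records kept in the data storage unit 137:

// Per-code transmission and redemption counters used to compute a redemption rate.
interface CodeStats {
  transmitted: number;  // times the code was sent for auto-completion
  redeemed: number;     // times a completed transaction followed
}

const codeStats = new Map<string, CodeStats>();

function recordTransmission(code: string): void {
  const stats = codeStats.get(code) ?? { transmitted: 0, redeemed: 0 };
  stats.transmitted += 1;
  codeStats.set(code, stats);
}

function markRedeemed(code: string): void {
  const stats = codeStats.get(code) ?? { transmitted: 0, redeemed: 0 };
  stats.redeemed += 1;
  codeStats.set(code, stats);
}

function redemptionRate(code: string): number {
  // Redemptions as a percentage of transmissions for this code.
  const stats = codeStats.get(code);
  if (!stats || stats.transmitted === 0) return 0;
  return (stats.redeemed / stats.transmitted) * 100;
}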
In an example embodiment, the redemption rate comprises a percentage of the number of times the offer redemption code has been transmitted for auto-completion compared to the number of times the offer redemption code has been redeemed. In an example embodiment, the offer system 130 can provide the merchant system 110 with reports indicating the redemption rate of offer redemption codes submitted by the merchant system 110. OTHER EXAMPLE EMBODIMENTS FIG. 6 depicts a computing machine 2000 and a module 2050 in accordance with certain example embodiments. The computing machine 2000 may correspond to any of the various computers, servers, mobile devices, embedded systems, or computing systems presented herein. The module 2050 may comprise one or more hardware or software elements configured to facilitate the computing machine 2000 in performing the various methods and processing functions presented herein. The computing machine 2000 may include various internal or attached components such as a processor 2010, system bus 2020, system memory 2030, storage media 2040, input/output interface 2060, and a network interface 2070 for communicating with a network 2080. The computing machine 2000 may be implemented as a conventional computer system, an embedded controller, a laptop, a server, a mobile device, a Smartphone, a set-top box, a kiosk, a vehicular information system, one or more processors associated with a television, a customized machine, any other hardware platform, or any combination or multiplicity thereof. The computing machine 2000 may be a distributed system configured to function using multiple computing machines interconnected via a data network or bus system. The processor 2010 may be configured to execute code or instructions to perform the operations and functionality described herein, manage request flow and address mappings, and to perform calculations and generate commands. The processor 2010 may be configured to monitor and control the operation of the components in the computing machine 2000. The processor 2010 may be a general purpose processor, a processor core, a multiprocessor, a reconfigurable processor, a microcontroller, a digital signal processor (“DSP”), an application specific integrated circuit (“ASIC”), a graphics processing unit (“GPU”), a field programmable gate array (“FPGA”), a programmable logic device (“PLD”), a controller, a state machine, gated logic, discrete hardware components, any other processing unit, or any combination or multiplicity thereof. The processor 2010 may be a single processing unit, multiple processing units, a single processing core, multiple processing cores, special purpose processing cores, co-processors, or any combination thereof. According to certain embodiments, the processor 2010 along with other components of the computing machine 2000 may be a virtualized computing machine executing within one or more other computing machines. The system memory 2030 may include non-volatile memories such as read-only memory (“ROM”), programmable read-only memory (“PROM”), erasable programmable read-only memory (“EPROM”), flash memory, or any other device capable of storing program instructions or data with or without applied power. The system memory 2030 may also include volatile memories such as random access memory (“RAM”), static random access memory (“SRAM”), dynamic random access memory (“DRAM”), and synchronous dynamic random access memory (“SDRAM”). Other types of RAM also may be used to implement the system memory 2030. 
The system memory 2030 may be implemented using a single memory module or multiple memory modules. While the system memory 2030 is depicted as being part of the computing machine 2000, one skilled in the art will recognize that the system memory 2030 may be separate from the computing machine 2000 without departing from the scope of the subject technology. It should also be appreciated that the system memory 2030 may include, or operate in conjunction with, a non-volatile storage device such as the storage media 2040. The storage media 2040 may include a hard disk, a floppy disk, a compact disc read only memory (“CD-ROM”), a digital versatile disc (“DVD”), a Blu-ray disc, a magnetic tape, a flash memory, other non-volatile memory device, a solid sate drive (“SSD”), any magnetic storage device, any optical storage device, any electrical storage device, any semiconductor storage device, any physical-based storage device, any other data storage device, or any combination or multiplicity thereof. The storage media 2040 may store one or more operating systems, application programs and program modules such as module 2050, data, or any other information. The storage media 2040 may be part of, or connected to, the computing machine 2000. The storage media 2040 may also be part of one or more other computing machines that are in communication with the computing machine 2000 such as servers, database servers, cloud storage, network attached storage, and so forth. The module 2050 may comprise one or more hardware or software elements configured to facilitate the computing machine 2000 with performing the various methods and processing functions presented herein. The module 2050 may include one or more sequences of instructions stored as software or firmware in association with the system memory 2030, the storage media 2040, or both. The storage media 2040 may therefore represent examples of machine or computer readable media on which instructions or code may be stored for execution by the processor 2010. Machine or computer readable media may generally refer to any medium or media used to provide instructions to the processor 2010. Such machine or computer readable media associated with the module 2050 may comprise a computer software product. It should be appreciated that a computer software product comprising the module 2050 may also be associated with one or more processes or methods for delivering the module 2050 to the computing machine 2000 via the network 2080, any signal-bearing medium, or any other communication or delivery technology. The module 2050 may also comprise hardware circuits or information for configuring hardware circuits such as microcode or configuration information for an FPGA or other PLD. The input/output (“I/O”) interface 2060 may be configured to couple to one or more external devices, to receive data from the one or more external devices, and to send data to the one or more external devices. Such external devices along with the various internal devices may also be known as peripheral devices. The I/O interface 2060 may include both electrical and physical connections for operably coupling the various peripheral devices to the computing machine 2000 or the processor 2010. The I/O interface 2060 may be configured to communicate data, addresses, and control signals between the peripheral devices, the computing machine 2000, or the processor 2010. 
The I/O interface 2060 may be configured to implement any standard interface, such as small computer system interface (“SCSI”), serial-attached SCSI (“SAS”), fiber channel, peripheral component interconnect (“PCI”), PCI express (PCIe), serial bus, parallel bus, advanced technology attached (“ATA”), serial ATA (“SATA”), universal serial bus (“USB”), Thunderbolt, FireWire, various video buses, and the like. The I/O interface 2060 may be configured to implement only one interface or bus technology. Alternatively, the I/O interface 2060 may be configured to implement multiple interfaces or bus technologies. The I/O interface 2060 may be configured as part of, all of, or to operate in conjunction with, the system bus 2020. The I/O interface 2060 may include one or more buffers for buffering transmissions between one or more external devices, internal devices, the computing machine 2000, or the processor 2010. The I/O interface 2060 may couple the computing machine 2000 to various input devices including mice, touch-screens, scanners, biometric readers, electronic digitizers, sensors, receivers, touchpads, trackballs, cameras, microphones, keyboards, any other pointing devices, or any combinations thereof. The I/O interface 2060 may couple the computing machine 2000 to various output devices including video displays, speakers, printers, projectors, tactile feedback devices, automation control, robotic components, actuators, motors, fans, solenoids, valves, pumps, transmitters, signal emitters, lights, and so forth. The computing machine 2000 may operate in a networked environment using logical connections through the network interface 2070 to one or more other systems or computing machines across the network 2080. The network 2080 may include wide area networks (WAN), local area networks (LAN), intranets, the Internet, wireless access networks, wired networks, mobile networks, telephone networks, optical networks, or combinations thereof. The network 2080 may be packet switched, circuit switched, of any topology, and may use any communication protocol. Communication links within the network 2080 may involve various digital or an analog communication media such as fiber optic cables, free-space optics, waveguides, electrical conductors, wireless links, antennas, radio-frequency communications, and so forth. The processor 2010 may be connected to the other elements of the computing machine 2000 or the various peripherals discussed herein through the system bus 2020. It should be appreciated that the system bus 2020 may be within the processor 2010, outside the processor 2010, or both. According to some embodiments, any of the processor 2010, the other elements of the computing machine 2000, or the various peripherals discussed herein may be integrated into a single device such as a system on chip (“SOC”), system on package (“SOP”), or ASIC device. In situations in which the systems discussed here collect personal information about users, or may make use of personal information, the users may be provided with a opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. 
For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by a content server. Embodiments may comprise a computer program that embodies the functions described and illustrated herein, wherein the computer program is implemented in a computer system that comprises instructions stored in a machine-readable medium and a processor that executes the instructions. However, it should be apparent that there could be many different ways of implementing embodiments in computer programming, and the embodiments should not be construed as limited to any one set of computer program instructions. Further, a skilled programmer would be able to write such a computer program to implement an embodiment of the disclosed embodiments based on the appended flow charts and associated description in the application text. Therefore, disclosure of a particular set of program code instructions is not considered necessary for an adequate understanding of how to make and use embodiments. Further, those skilled in the art will appreciate that one or more aspects of embodiments described herein may be performed by hardware, software, or a combination thereof, as may be embodied in one or more computing systems. Moreover, any reference to an act being performed by a computer should not be construed as being performed by a single computer as more than one computer may perform the act. The example embodiments described herein can be used with computer hardware and software that perform the methods and processing functions described previously. The systems, methods, and procedures described herein can be embodied in a programmable computer, computer-executable software, or digital circuitry. The software can be stored on computer-readable media. For example, computer-readable media can include a floppy disk, RAM, ROM, hard disk, removable media, flash memory, memory stick, optical media, magneto-optical media, CD-ROM, etc. Digital circuitry can include integrated circuits, gate arrays, building block logic, field programmable gate arrays (FPGA), etc. The example systems, methods, and acts described in the embodiments presented previously are illustrative, and, in alternative embodiments, certain acts can be performed in a different order, in parallel with one another, omitted entirely, and/or combined between different example embodiments, and/or certain additional acts can be performed, without departing from the scope and spirit of various embodiments. Accordingly, such alternative embodiments are included in the inventions described herein. Although specific embodiments have been described above in detail, the description is merely for purposes of illustration. It should be appreciated, therefore, that many aspects described above are not intended as required or essential elements unless explicitly stated otherwise. 
Modifications of, and equivalent components or acts corresponding to, the disclosed aspects of the example embodiments, in addition to those described above, can be made by a person of ordinary skill in the art, having the benefit of the present disclosure, without departing from the spirit and scope of embodiments defined in the following claims, the scope of which is to be accorded the broadest interpretation so as to encompass such modifications and equivalent structures. 13837847 google llc USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Oct 2nd, 2019 12:00AM https://www.uspto.gov?id=USD0949887-20220426 Display screen with transitional graphical user interface D949887 The ornamental design for a display screen with transitional graphical user interface, as shown and described. 1 FIG. 1 is a front view of a display screen with transitional graphical user interface showing a first image in a sequence according to the claimed design; and, FIG. 2 is a front view of a second image thereof. The appearance of the transition is sequential from FIG. 1 to FIG. 2. The process or period in which an image transitions to another image forms no part of the claimed design. The shading is part of the claimed design. The outermost broken line illustrates the display screen and a boundary and the remaining broken lines illustrate portions of the transitional graphical user interface. None of the broken lines form part of the claimed design. 29708026 google llc USA S1 Design Patent Open D14/486 15 Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Apr 25th, 2016 12:00AM https://www.uspto.gov?id=US11314831-20220426 Allocating communication resources via information technology infrastructure Systems and methods to reduce latency in a graphical environment are described. The system receives location information of a computing device and identifies content items that satisfy a boundary condition formed from the location information. The system selects content items from categories using a load balancing technique. The system selects, responsive to a request having no keywords, a content item object using values generated with an offline process. The system provides the content item object to the computing device to cause the computing device to render the content item object in the graphical environment. 11314831 1. A method, comprising: generating, by a data processing system comprising one or more processors and memory, based on a first frequency of first historic search queries received from a first plurality of computing devices that are located within a predetermined distance of a physical location, a first association between a first topic included in each of the first historic search queries and the physical location; generating, by the data processing system, based on a second frequency of second historic search queries received from a second plurality of computing devices that are located within the predetermined distance of the physical location, a second association between a second topic included in each of the second historic search queries and the physical location; receiving, by the data processing system, location information of a computing device executing an application presenting a map in a graphical environment, the location information indicating the computing device is within the predetermined distance of the physical location; identifying, by the data processing system, a first plurality of candidate content items each having an entry in a location field that satisfies a boundary condition formed from the location information; accessing, by the data processing system, an impression record data structure for the plurality of candidate content items to retrieve, from a view field, a priority value for each of the first plurality of candidate content items; determining, by the data processing system, a second plurality of candidate content items from among the first plurality of candidate content items, based at least in part on the priority value; determining, by the data processing system, a plurality of values corresponding to the second plurality of content items based on historical terms input by a plurality of different computing devices for a location associated with the location information; selecting, by the data processing system, from the first plurality of candidate content items, a content item object associated with the first topic based on the first association, the first frequency of the first historic search queries, and the second frequency of the second historic search queries; and providing, by the data processing system via a network, the content item object to the computing device to cause the computing device to render the content item object on the map in the graphical environment. 2. The method of claim 1, comprising forming, by the data processing system, the boundary condition based on a resolution of the graphical environment. 3. 
The method of claim 1, comprising: identifying, by the data processing system, a zoom level of an electronic map in the graphical environment rendered by the computing device; and forming, by the data processing system, the boundary condition based on the zoom level to identify the first plurality of candidate content items viewable via the graphical environment rendered by the computing device. 4. The method claim 1, comprising: receiving, by the data processing system, an indication to zoom an electronic map in the graphical environment; and removing, by the data processing system responsive to the indication, one or more content items from the first plurality of candidate content items that are not viewable. 5. The method of claim 1, comprising: determining, by the data processing system, a duration and a resource availability for each of the first plurality of candidate content items; and inputting, by the data processing system, the duration and the resource availability into a load balancing technique to categorize the first plurality of candidate content items into a plurality of categories, wherein a first category of the plurality of categories ranks higher than a second category of the plurality of categories based on an output of the load balancing technique. 6. The method of claim 1, wherein determining the second plurality of candidate content items further using a load balancing technique. 7. The method of claim 1, comprising determining, by the data processing system, a plurality of values for each of the first plurality of candidate content items based on a number of HTML requests to access a resource corresponding to each of the first plurality of candidate content items. 8. The method of claim 1, comprising determining, by the data processing system, a plurality of values for each of the first plurality of candidate content items based on a Doppler radar forecast for a location associated with the location information. 9. The method of claim 1, comprising selecting, by the data processing system, the content item object using a Bayes classifier. 10. The method of claim 1, comprising inputting, by the data processing system, a plurality of values for each of the first plurality of candidate content items into a Bayes classifier to select a second content item object. 11. The method of claim 1, comprising: identifying, by the data processing system, a bandwidth availability for the computing device; generating, by the data processing system responsive to the bandwidth availability satisfying a threshold, a score for each of the first plurality of candidate content items indicating a likelihood of interaction with each of the first plurality of candidate content items; and inputting, by the data processing system, a plurality of values for each of the first plurality of candidate content items and the score into a Bayes classifier to select a second content item object. 12. The method of claim 1, comprising converting a portion of an electronic map in the graphical environment into structured content configured to display the content item object. 13. The method of claim 1, comprising receiving, by the data processing system, a content request from the computing device, the content request generated responsive to a location search query comprising a city, a town, or a state, the location search query having no keywords, wherein the content item object is provided to the computing device responsive to the content request. 14. 
A system comprising: a data processing system comprising one or more processors and a data repository in memory, the data processing system configured to: generate, based on a first frequency of first historic search queries received from a first plurality of computing devices that are located within a predetermined distance of a physical location, a first association between a first topic included in each of the first historic search queries and the physical location; generate, based on a second frequency of second historic search queries received from a second plurality of computing devices that are located within the predetermined distance of the physical location, a second association between a second topic included in each of the second historic search queries and the physical location; receive location information of a computing device executing an application presenting a map in a graphical environment, the location information indicating the computing device is within the predetermined distance of the physical location; identify a first plurality of candidate content items each having an entry in a location field that satisfies a boundary condition formed from the location information; access an impression record data structure for the plurality of candidate content items to retrieve, from a view field, a priority value for each of the first plurality of candidate content items; determine a second plurality of candidate content items from among the first plurality of candidate content items, based at least in part on the priority value; determine a plurality of values corresponding to the second plurality of content items based on historical terms input by a plurality of different computing devices for a location associated with the location information; select, from the first plurality of candidate content items, a content item object associated with the first topic based on the first association, the first frequency of the first historic search queries, and the second frequency of the second historic search queries; and provide, via a network, the content item object to the computing device to cause the computing device to render the content item object on the map in the graphical environment. 15. The system of claim 14, wherein the data processing system comprises a mapping engine, the data processing system further configured to: identify a zoom level of the map in the graphical environment rendered by the computing device; and form the boundary condition based on the zoom level. 16. The system of claim 14, wherein the data processing system further comprises a mapping engine, the data processing system further configured to: receive an indication to zoom the map; and remove, responsive to the indication, one or more content items from the first plurality of candidate content items that are not viewable. 17. The system of claim 14, wherein the data processing system is further configured to: determine a duration and a resource availability for each of the first plurality of candidate content items; and input the duration and the resource availability into a load balancing technique to categorize the first plurality of candidate content items into a plurality of categories, wherein first category of the plurality of categories ranks higher than a second category of the plurality of categories based on an output of the load balancing technique. 18. 
The system of claim 14, wherein the data processing system comprises a classifier, the data processing system further configured to select the content item object further based on a Bayes classification technique. 18 CROSS-REFERENCES TO RELATED APPLICATIONS This application claims the benefit of priority, and is a national stage entry under 35 U.S.C. § 371, of International Patent Application No. PCT/US2016/29209, titled ALLOCATING COMMUNICATION RESOURCES VIA INFORMATION TECHNOLOGY INFRASTRUCTURE and filed on Apr. 25, 2016, which is hereby incorporated by reference herein in its entirety. BACKGROUND Information can be displayed in a graphical environment, a web page or another interface by a computing device. The graphical environment or web pages can include text, images, video, or audio information provided by the entities via an application server or web page server for display on the Internet. Additional content item objects can also be provided by third parties for display on the web pages together with the information provided by the entities. Thus, an individual viewing a graphical environment can access the information that is the subject of the web page, as well as selected third party content item objects that may or may not be related to the subject matter of the web page. However, due to the large number of available content item objects and the resource intense nature of the electronic graphical environment, it may be challenging to efficiently select and provide content item objects for display in the graphical environment. SUMMARY Systems and methods of the present disclosure provide low-latency techniques of selecting and delivering content item objects in a graphical environment. When providing information resources such as a web page to a computing device, it is often desirable to provide additional content item objects that are appropriate and complementary to the resource. For example, on a web page about a particular location, it may be desirable to provide additional content item objects such as images of the location or points of interest at the location. In the case of textual resources, such content may be identified based upon an analysis of context provided by the text of the resource to identify that content which is likely to be most relevant. However, difficulties may arise where the information resource is not textual in nature. For example, where the resource is a graphical environment (e.g., a map, a two or three-dimensional simulation), context (as is provided by text in a textual resource) may not be available. Further, non-textual resources can have large bandwidth requirements relative to textual resources, and may be updated frequently and unexpectedly by the computing device during interaction with the resource. As such, a latency which may be acceptable for textual resources may not be acceptable for information resources which are graphical in nature. System and methods of the present solution provide low-latency techniques of selecting and delivering content item objects in a graphical environment. In one implementation, a data processing system of the present solution can select content item objects responsive to a search for a location, and provide the selected content item objects for display on or alongside a digital map. The data processing system can identify content items that correspond to locations that would be within an active viewing area or a certain extent of a digital map displayed on a display device. 
For example, if the map is centered on San Francisco, the viewable area can include several neighboring cities based on the zoom level. The data processing system can then identify the content items that correspond to locations within the viewable area. At least one aspect of the present disclosure is directed to a method of reducing latency in a graphical environment. In one implementation, the method includes a data processing system receiving location information of a computing device. The method can include the data processing system determining a first plurality of content items each having an entry in a location field that satisfies a boundary formed from the location information. The method can include the data processing system selecting, from the first plurality of content items, a second plurality of content items. The second plurality of content items can be assigned to a first category of a plurality of categories based on a load balancing technique. The method can include the data processing system retrieving, from a data structure stored in memory, a plurality of values corresponding to the second plurality of content items. The plurality of values can be generated using an offline process and indicating a likelihood of interaction. The method can include the data processing system selecting, responsive to a content request having no keywords, a content item object from the second plurality of content items based on the plurality of values. The content request may include or correspond to location information without additional keywords such as topical keywords, concepts, vertical information, or entities. The method can include the data processing system providing, via a network, the content item object to the computing device to cause the computing device to render the content item object on an electronic map in the graphical environment. In one implementation, the data processing system forms the boundary based on a resolution of the graphical environment. The data processing system can identify a zoom level of the electronic map in the graphical environment rendered by the computing device. The data processing system can form the boundary based on the zoom level to identify the first plurality of content items viewable via the graphical environment rendered by the computing device. In some cases, the data processing system receives an indication to zoom the electronic map. The data processing system can remove, responsive to the indication, one or more content items from the first plurality of content items that are not viewable. The data processing system can determine a duration and a resource availability for each of the first plurality of content items. The data processing system can input the duration and the resource availability into the load balancing technique to categorize the first plurality of content items into a plurality of categories. The first category can rank higher than a second category based on an output of the load balancing technique. The data processing system can access an impression record data structure for the first plurality of content items to retrieve, from a view field, a priority value for each of the first plurality of content items. The data processing system can determine the second plurality of content items from the first plurality of content items based on a combination of the priority value. 
In some cases, the data processing system can use an offline process to determine the plurality of values based on historical terms input by a plurality of different computing devices for the location. The data processing system can use the offline process to determine the plurality of values based on a number of HTML requests to access a resource corresponding to each of the second plurality of content items. The data processing system can use the offline process to determine the plurality of values based on a Doppler radar forecast for the location. The data processing system can select the content item object using a Bayes classifier. The data processing system can input only values generated by the offline process into a Bayes classifier to select the content item object. The data processing system can identify a bandwidth availability for the computing device. The data processing system can use an online process to generate, responsive to the bandwidth satisfying a threshold, a score for each of the second plurality of content items indicating a likelihood of interaction. The data processing system can input the plurality of values generated using the offline process and the score generated using the online process into a Bayes classifier to select the content item object. The data processing system can convert a portion of the electronic map into structured content configured to display the content item object. The data processing system can receive the content request from the computing device, the content request generated responsive to a location search query comprising a city, a town, or a state, the search query lacking keywords.

Another aspect of the present disclosure is directed to a system to reduce latency in a graphical environment. The system can include a data processing system that includes one or more processors and a data repository in memory. The data processing system can further include one or more of a location engine, a content selector, a mapping engine, or a resource monitor. The location engine can receive location information of a computing device. The content selector can determine a first plurality of content items each having an entry in a location field that satisfies a boundary formed from the location information. The content selector can select, from the first plurality of content items, a second plurality of content items that are assigned to a first category of a plurality of categories based on a load balancing technique. The content selector can retrieve, from the data repository, a plurality of values corresponding to the second plurality of content items. The plurality of values can be generated using an offline process and can indicate a likelihood of interaction. The content selector can select, responsive to a content request lacking a keyword, a content item object from the second plurality of content items based on the plurality of values. The content selector can provide, via a network, the content item object to the computing device to cause the computing device to render the content item object on an electronic map in the graphical environment.

Another aspect of the present disclosure is directed to a method for displaying content within a graphical information resource displayed on a computing device. The method can include a data processing system receiving an indication of an extent associated with the graphical information resource.
The data processing system can select, from a set of content items each having a location, a first subset of content items based on the extent. The data processing system can determine relative classifications for the content items within the first subset. The data processing system can select, from the first subset, a second subset of content items based upon the relative classifications. The data processing system can provide the second subset of content items for display within the graphical information resource. By selecting content items based upon the extent and further classifying only those content items within the first subset for selection of content items to be displayed, the processing required by the data processing system to display relevant and appropriate content items within a graphical information resource is reduced. In some cases, the extent can refer to an area or a volume, and the graphical information resource can include a map. The data processing system can determine relative classifications for the content items within the first subset at least partially based upon comparisons between comparable parameters associated with the content items in the first subset. By providing content items with parameters which may be compared as between respective content items (i.e., comparable parameters), the data processing system can select relevant content items efficiently and in the absence of other, extrinsic context. In some cases, the data processing system determines a classification for each of the content items in the first subset by obtaining from a search term database a search term associated with the extent. At least two of the content items in the first subset can include a descriptive parameter, and a classification of a content item can be improved if that content item comprises a descriptive parameter with a value associated with the search term. By obtaining search terms based upon the extent, useful context may be obtained even where not natively provided within the information resource, so that relevant content may be more efficiently determined and provided. In some cases, the data processing system determines a classification of each content item by processing one or more of the set of parameters with a naïve Bayes classifier. The data processing system can select a first subset of content items based on the extent by selecting at least one content item having a location within the extent. The data processing system can select a first subset of content items based on the extent by selecting at least one content item having a location within a predetermined distance of a boundary of the extent.

BRIEF DESCRIPTION OF THE DRAWINGS

The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims. FIG. 1 is an illustration of a system to reduce latency in a graphical environment in accordance with an implementation. FIG. 2 is an illustration of a non-textual graphical environment with content item objects selected and delivered by a data processing system in accordance with an implementation. FIG. 3 is an illustration of a method of reducing latency in a graphical environment in accordance with an implementation. FIG.
4 is an illustration of a method of reducing latency in a graphical environment in accordance with an implementation. FIG. 5 is a block diagram illustrating a general architecture for a computer system that may be employed to implement various elements of the systems shown in FIGS. 1 and 2, the interface shown in FIG. 3, and the method shown in FIG. 4, in accordance with an implementation. Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

Systems and methods of the present disclosure are directed to delivering content on a graphical environment. In particular, the systems and methods provide low-latency techniques of selecting content item objects responsive to a search for a location, and providing the selected content item objects for display on or alongside a digital map. Content item objects can be selected based on textual criteria such as keywords. For example, a content selection system can select content item objects responsive to a search query that includes keywords or keywords associated with main content in a webpage. However, non-textual resources such as graphical environments or digital maps may not include textual criteria such as keywords. Thus, a search for a location on the digital map may only include an address or a city without additional keywords. Due to the large graphical content of the digital map and limited computing and network resources, it may be challenging to select content item objects for display on digital maps without impacting latency. The present solution is directed to systems and methods of selecting content item objects responsive to a search for a location, and providing the selected content item objects for display on or alongside a digital map. In one implementation, a data processing system identifies content items that correspond to locations that would be within an active viewing area of a digital map displayed on a display device. For example, if the map is centered on San Francisco, the viewable area can include several neighboring cities based on the zoom level. The data processing system can then identify the content items that correspond to locations within the viewable area. Upon identifying the viewable content items, the data processing system can perform a lookup to determine whether these content item objects have been previously provided for display on the computing device. If the content item has not been previously provided for display on the computing device, then the data processing system can assign a higher selection priority to the content item. The data processing system can perform a load balancing technique to avoid loading a resource. The load balancing technique can avoid prematurely loading a resource to the detriment of another available resource. The load balancing technique can include a smoothing technique. For example, the smoothing technique can load balance by evenly distributing content item impressions (e.g., the number of times the content item has been viewed) based on a number of times that the content item is to be displayed and a remaining duration during which it is desired to display the content item. For example, the data processing system can determine a value based on a remaining number of times the content item is to be displayed divided by the time remaining.
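A minimal sketch of this pacing value, assuming hypothetical field names (the source does not give concrete units for the remaining impressions or the remaining time):

```python
from dataclasses import dataclass

@dataclass
class CampaignPacing:
    campaign_id: str
    remaining_impressions: int   # times the content item still needs to be shown
    seconds_remaining: float     # time left in the campaign

def pacing_value(campaign: CampaignPacing) -> float:
    """Remaining impressions divided by the time remaining; larger means more urgent."""
    if campaign.seconds_remaining <= 0:
        return float("inf")  # campaign window has closed; treat as maximally urgent
    return campaign.remaining_impressions / campaign.seconds_remaining
```

Campaigns with larger pacing values would then land in higher-priority groups, as described next.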
The data processing system can then group "campaigns" of content items into five groups based on these values, where the group with the highest value corresponds to the highest priority for selection purposes. It will be appreciated that a number of times a content item is to be displayed may be determined in any of a number of ways, and may be determined based upon a budget associated with that content item. The data processing system can receive a request for content from a computing device. When the data processing system receives the request, the data processing system can determine the highest level group with content items having locations that are viewable. The data processing system can further identify the high priority content items in this group (e.g., content items that have not yet been provided for display on the computing device). The data processing system can then input features or scores for the content items into a classifier (e.g., a Naïve Bayes Classifier) to select the content items from the highest group/priority pairing. To minimize latency in the process of providing content items in real-time for display on a graphical environment, the data processing system can precompute some of the scores and features in an offline process. The data processing system can then input a combination of predetermined scores and real-time scores into the classifier responsive to a request for the content item. The scores or features can be based on the following:
- Identify search terms that are searched for near a location. For example, if the data processing system historically receives a number or frequency of search queries for Gardens near Fendalton, then when the data processing system receives a search query for just "Fendalton", the data processing system can increase the likelihood that content items for Garden shops are selected.
- Increase the likelihood of selection for content items corresponding to the suburb/area around the location; if the search area is too specific (e.g., a residential address), then distance to the search area can be used in ranking the content items.
- Increase the likelihood of selection of content items corresponding to popular stores based on foot traffic.
- Number of times the user has seen the content item (lower is better).
- Number of users who have seen the content item (lower is better).
- Number of user visits to the website of the campaign in the last 30 days (higher is better).
- Number of user visits to a store owned by the campaign owner in the last 30 days (higher is better).
- Is the user interested in the category of the campaign (e.g., boats)? (yes is better).
- When did the user last research the category of the campaign (e.g., boats)? (sooner is better).
- Are the environmental conditions (time of day, weather, etc.) suited to the campaign? (more matches are better).
- An engagement rate on previous showings of the content item. The engagement rate may be, for example, the number of times that a user engages with the content item (e.g., clicks, mouse-overs, selections, etc.) divided by the number of times the content item is shown.

FIG. 1 illustrates an example system 100 for reducing latency in a graphical environment provided via information technology infrastructure. The system 100 can include content selection infrastructure. The system 100 can include a data processing system 120 communicating with one or more of a content provider 125, content publisher 115 or computing device 110 via a network 105.
The network 105 can include computer networks such as the Internet, local, wide, metro, or other area networks, intranets, satellite networks, and other communication networks such as voice or data mobile telephone networks. The network 105 can be used to access information resources such as web pages, web sites, domain names, or uniform resource locators that can be displayed on at least one computing device 110, such as a laptop, desktop, tablet, personal digital assistant, smart phone, or portable computers. For example, via the network 105 a user of the computing device 110 can access web pages provided by at least one web site operator or content publisher 115. In this example, a web browser of the computing device 110 can access a web server of the web site operator or content publisher 115 to retrieve a web page for display on a monitor of the computing device 110. The web site operator or content publisher 115 generally includes an entity that operates the web page. In one implementation, the web site operator or content publisher 115 includes at least one web page server that communicates with the network 105 to make the web page available to the computing device 110. The network 105 may be any type or form of network and may include any of the following: a point-to-point network, a broadcast network, a wide area network, a local area network, a telecommunications network, a data communication network, a computer network, an ATM (Asynchronous Transfer Mode) network, a SONET (Synchronous Optical Network) network, a SDH (Synchronous Digital Hierarchy) network, a wireless network and a wireline network. The network 105 may include a wireless link, such as an infrared channel or satellite band. The topology of the network 105 may include a bus, star, or ring network topology. The network may include mobile telephone networks using any protocol or protocols used to communicate among mobile devices, including advanced mobile phone protocol (“AMPS”), time division multiple access (“TDMA”), code-division multiple access (“CDMA”), global system for mobile communication (“GSM”), general packet radio services (“GPRS”) or universal mobile telecommunications system (“UMTS”). Different types of data may be transmitted via different protocols, or the same types of data may be transmitted via different protocols. The system 100 can include at least one data processing system 120. The data processing system 120 can include at least one logic device such as a computing device having a processor to communicate via the network 105, for example with the computing device 110, the web site operator or content publisher computing device 115 (or content publisher 115), and at least one content provider computing device 125 (or provider device 125 or content provider 125). The data processing system 120 can include at least one server. For example, the data processing system 120 can include a plurality of servers located in at least one data center. The data processing system 120 can include multiple, logically-grouped servers and facilitate distributed computing techniques. The logical group of servers may be referred to as a server farm or a machine farm. The servers can also be geographically dispersed. A machine farm may be administered as a single entity, or the machine farm can include a plurality of machine farms. The servers within each machine farm can be heterogeneous—one or more of the servers or machines can operate according to one or more type of operating system platform. 
Servers in the machine farm can be stored in high-density rack systems, along with associated storage systems, and located in an enterprise data center. For example, consolidating the servers in this way may improve system manageability, data security, the physical security of the system, and system performance by locating servers and high performance storage systems on localized high performance networks. Centralizing the servers and storage systems and coupling them with advanced system management tools allows more efficient use of server resources. The data processing system 120 can include a content placement system having at least one server. The data processing system 120 can include at least one location engine 130, at least one content selector 135, at least one classifier 140, at least one resource monitor 145, at least one mapping engine 150, and at least one data repository 155. The location engine 130, content selector 135, classifier 140, resource monitor 145, and mapping engine 150 can each include at least one processing unit or other logic device such as programmable logic array engine, or module configured to communicate with the database repository or database 155. The location engine 130, content selector 135, classifier 140, resource monitor 145, mapping engine 150 and data repository 155 can be separate components, a single component, or part of the data processing system 120. The system 100 and its components, such as a data processing system 120, may include hardware elements, such as one or more processors, logic devices, or circuits. The data processing system 120 can obtain anonymous computer network activity information associated with a plurality of computing devices 110. A user of a computing device 110 can affirmatively authorize the data processing system 120 to obtain network activity information corresponding to the user's computing device 110. For example, the data processing system 120 can prompt the user of the computing device 110 for consent to obtain one or more types of network activity information. The identity of the user of the computing device 110 can remain anonymous and the computing device 110 may be associated with a unique identifier (e.g., a unique identifier for the user or the computing device provided by the data processing system or a user of the computing device). The data processing system can associate each observation with a corresponding unique identifier. For situations in which the systems discussed here collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether programs or features that may collect personal information (e.g., information about a user's social network, social actions or activities, a user's preferences, or a user's current location), or to control whether or how to receive content from the content server that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that certain information about the user is removed when generating parameters (e.g., demographic parameters). For example, a user's identity may be treated so that no identifying information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. 
Thus, the user may have control over how information is collected about the user and used by a content server. The data processing system 120 may include a location engine 130. The location engine 130 can be designed and constructed to receive, identify or determine location information of a computing device 110. The data processing system 120 can receive, identify, or determine the location information based on, responsive to, or using a request for a content item object or a data ping. The location information of a computing device can include, for example, a geographic location of the computing device, a location term or information input into an input text box in a graphical user interface, historical location information, or travel mode (e.g., walking, driving, biking, flying, or train). In some cases, the location information can be a location search query input into a text box of a map application user interface rendered by the computing device 110 for display on a display device of the computing device 110. The location information can correspond to a location of the computing device, or a location different from or unrelated to the location of the computing device. For example, the computing device 110 can be location in San Jose, and a user of the computing device 110 can input a location search query of “Miami, Fla.” into a map application to request a map of Miami, Fla. In some cases, the data processing system 120 can determine a current location, a previous location or a historic location of the computing device 110. To determine the location of the computing device 110, the location engine 130 can receive geo-location data points associated with the computing device 110 to determine the location information. The data processing system 120 can receive the data points via a computer network 105 via a TCP/IP protocol, cell phone data network, or another communication protocol of computer network 105. The data points can include location information, time information, or the data processing system 120 can determine the location or time information associated with a received data point upon receiving the data point from the computing device 110. The data processing system 120 can also receive an identifier associated with the data point, such as a unique account identifier, computing device identifier, or a username associated with an application executing on the computing device 110. In one implementation, an application executing on the computing device 110 (e.g., a mobile application, a mobile operating system, a web browser, a map application, etc.) can transmit the geo-location data point that includes the location information. In one implementation, a mobile computing device 110 may periodically ping the data processing system 120 or other intermediate system to provide location or time information. In one implementation, a smartphone or other cellular enabled computing device 110 can ping a cell phone tower system, which may then provide location or time information to the data processing system 120. The data processing system 120 can receive geo-location data points or pings in real time, or a predetermined time interval such as a periodic basis (e.g., 10 minutes, 5 minutes, 1 minute, 30 seconds, or another period that can facilitate the systems and methods disclosed herein). 
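A sketch of how such a geo-location data point might be represented; the field names below are illustrative and are not taken from the source:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GeoLocationDataPoint:
    latitude: float
    longitude: float
    timestamp: float                          # when the data point was recorded
    device_identifier: str                    # unique, anonymized device or account identifier
    source: str = "gps"                       # e.g., "gps", "wifi", "cell", or "ip"
    accuracy_meters: Optional[float] = None   # optional accuracy estimate for the fix
```

Data points of this shape could be pushed individually in real time or accumulated and uploaded in the batch process described below.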
The data processing system 120 can receive the data points in a batch process that runs periodically where multiple geo-location data points associated with a computing device 110 or multiple user devices 110 can be provided to the data processing system 120 in a single upload process. The computing device 110 can push the data points to the data processing system 120 in real-time, on a periodic basis, or in a batch process. The data points can include, or the data processing system 120 may determine, geographic location information of the geo-location data point based on, e.g., GPS, Wi-Fi, IP address, Bluetooth or cell tower triangulation techniques. The data processing system 120 can determine a latitude and longitude coordinate and identify a larger geographic area or cell comprising the latitude and longitude coordinate. The geographic location can correspond to a latitude or longitude coordinate, or the geographic location may correspond to a larger or smaller area, for example. The received data points can include, or the data processing system 120 can determine, geographic location information including, e.g., latitude and longitude coordinates, geographic information system ("GIS") information, country, state, city, county, town, or precinct. The data processing system 120 may receive or otherwise identify geographic location information associated with the computing device 110 via an application programming interface ("API") that can provide scripted access to geographic location information associated with the computing device 110. For example, the geographic API specification may include a specification associated with the World Wide Web Consortium ("W3C"). In one implementation, a user of a computing device 110 proactively declares a location by checking-in to a location or otherwise declaring to an application executing on the computing device 110 or to the data processing system that the user is at a location. In some implementations, the geographic location of the computing device 110 can be determined via at least one of a global positioning system ("GPS"), cell tower triangulation, or Wi-Fi hotspots. In some implementations, the data processing system 120 can identify or determine the technique used to determine a geographic location in order to determine an accuracy of the determined geo-location data point (e.g., GPS-based location information may be more accurate than IP-based location information). The data processing system 120 can also determine geographic location information based on a user's interaction with an information resource. In some implementations, the computing device 110 may include a global positioning system ("GPS"). In some implementations, the data processing system 120 may determine a geographic location based on an internet protocol ("IP") address. For example, the computing device 110 may include a GPS sensor or antenna and be configured to determine a GPS location of the computing device 110. The data processing system 120 can also determine the geographic location by using information obtained from one or more cell towers to triangulate the location of the computing device 110. For example, the geographic location determined based on information received from one cell tower, two cell towers, or three cell towers may be sufficient for content selection. In some implementations, Wi-Fi hotspots may facilitate determining a geographic location because Wi-Fi hotspots may be stationary and can be used as a landmark.
For example, the relation of a computing device 110 with respect to a Wi-Fi hotspot can facilitate determining a geographic location of the computing device 110. The data processing system 120 (e.g., via the location engine 130) can provide graphical content for display on the computing device 110. The data processing system 120 can provide data or data packets corresponding to the graphical content to establish the graphical environment on the computing device. Establishing the graphical environment on the computing device can include, for example, establishing a view area for the graphical content, an extent, an active area, zoom level, pan amount, scale, filters, or content slot in which content item objects can be inserted. For example, the data processing system 120 can provide, for display via the computing device, a bitmap representing a map corresponding to the location information. The data processing system 120 can provide overlays on the map, replace content on the map, or otherwise manipulate the graphical environment or graphical content displayed on the computing device 110. The data processing system 120 (e.g., via location engine 130) can determine or identify a boundary, extent, threshold, or an indication of a viewable area of the graphical environment or active area of the graphical environment. The boundary can be based on a predetermined threshold, distance, pixels, resolution, screen size, zoom level, profile information, location information, or travel mode. For example, the boundary can be a predetermined distance such as 0.5 mile, 0.75 miles, 1 mile, 2 miles, 5 miles, 10 miles, 30 miles, or other distance that facilitates minimizing latency in content selection for a graphical environment by reducing a set of available content items based on a geographic boundary. The data processing system 120 can identify a location and then set the boundary as a radius corresponding to the predetermined distance around the location. For example, if the location search query is "San Jose, Calif.", the boundary can be a 5 mile radius around San Jose. The boundary can be other shapes, such as a square, rectangle, oval, ellipse, or other polygon formed around, or including, the location. In some cases, the boundary can be determined or formed based on the location information and a zoom level. The zoom level can refer to a value that indicates or represents the magnification of graphic content. A user of the computing device 110 can manipulate or adjust the zoom level using a zoom element 230 shown in FIG. 2. The zoom element 230 can include a zoom in button 230 and a zoom out button 240. The zoom element 230 can include a zoom level indicator 245 that indicates the amount of magnification. The data processing system 120, location engine, or computing device can adjust the magnification of the graphical content responsive to receiving an indication to zoom in or zoom out via zoom element 230. The data processing system 120 can determine a zoom level using a query or function. For example, the data processing system 120 can be configured with a getZoom() technique. The getZoom() function can return a current zoom level of the graphical content or map. The zoom level can include a value, numerical value, letter, symbol, color, or other indicator of zoom level. In some cases, the data processing system 120 can determine, based on the zoom, the boundary or extent of the graphic content or map.
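One way the zoom-to-boundary mapping described above could look in practice, sketched with assumed numbers (the radius table and default value are illustrative, not taken from the source):

```python
# Hypothetical mapping from zoom level to a boundary radius in miles; higher zoom
# levels show a smaller area, so they map to a smaller radius.
ZOOM_TO_RADIUS_MILES = {5: 30.0, 8: 10.0, 10: 5.0, 12: 2.0, 15: 0.5}

def boundary_from_zoom(latitude: float, longitude: float, zoom_level: int) -> dict:
    """Form a circular boundary {latitude, longitude, radius} around the map center."""
    radius = ZOOM_TO_RADIUS_MILES.get(zoom_level, 5.0)  # fall back to a default radius
    return {"latitude": latitude, "longitude": longitude, "radius_miles": radius}
```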
The data processing system 120 can define or establish the boundary as geographic coordinates, map tiles, a geographic area, or a location point and a radius. The data processing system 120 can determine and store the boundary as a data structure that includes the coordinates, map tile information, geographic area (e.g., city, town, state, county), or location point and radius. For example, the boundary can include {latitude, longitude, radius}. In another example, the boundary can correspond to a polygon defined by geographic coordinates as follows: a four-sided polygon as {coordinate 1, coordinate 2, coordinate 3, coordinate 4}; a six-sided polygon as {coordinate 1, coordinate 2, coordinate 3, coordinate 4, coordinate 5, coordinate 6}. Each coordinate can include latitude and longitude values. The data processing system 120 (e.g., location engine 130) can pass, provide, transmit, input or otherwise communicate the boundary to a content selector. The data processing system 120 can include a content selector 135. The content selector 135 can be designed and constructed to identify or select one or more content item objects for display in a non-textual graphical environment. A non-textual environment refers to a graphical environment where the main content of the environment is an image such as a map. A textual environment, in contrast, can refer to an information resource such as an article, news article, editorial, essay, blog, social network post, electronic mail, or other online document that includes text, words, keywords, or phrases. For example, to select content items for display in a textual environment, the data processing system can parse the text to identify keywords, and use the keywords to select a matching content item based on a broad match, exact match, or phrase match. For example, the content selector 135 can analyze, parse, or otherwise process subject matter of candidate content items to determine whether the subject matter of the candidate content items corresponds to the subject matter of the textual information resource. The content selector 135 may identify, analyze, or recognize terms, characters, text, symbols, or images of the candidate content items using an image processing technique, character recognition technique, or database lookup. The candidate content items may include metadata indicative of the subject matter of the candidate content items, in which case the content selector 135 may process the metadata to determine whether the subject matter of the candidate content item corresponds to the web page or search query. Content providers may provide additional indicators when setting up a content campaign that includes content items. The content provider may provide information at the content campaign or content group level that the content selector 135 may identify by performing a lookup using information about the candidate content item. For example, the candidate content item may include a unique identifier, which may map to a content group, content campaign, or content provider. The content selector 135 may determine, based on information stored in a content campaign data structure in the data repository 155, information about the content provider 125. However, in a non-textual graphic environment, the graphic content may not include relevant keywords, terms or phrases that can be used to select content item objects. For example, a map of a geographic area such as San Jose, Calif.
can have different types of unrelated points of interest, such as restaurants, coffee shops, retail stores, museums, parks, gardens, educational institutions. Thus, it may be challenging for the data processing system to select content items responsive to a location search query, such as "San Jose, Calif.", that does not include subject matter keywords such as pizza, Italian, jewelry, shoes, jeans, etc. It may be challenging because the data processing system could identify a large amount of unrelated, different points of interest that correspond to the geographical area including San Jose, Calif. Analyzing and processing the large amount of information in the geographic area can introduce lag, delays, or latency in selecting a content item object for presentation in the graphical environment. Thus, to facilitate reducing, minimizing, preventing, or eliminating latencies in content selection, the data processing system 120 (e.g., via content selector 135) can determine or identify content items that each have an entry in a location field that satisfies a boundary formed from the location information. For example, a content provider 125 can establish a content campaign with one or more content items or content groups. A content campaign can refer to one or more content groups that are associated with a theme. A content group can refer to one or more content items or content item objects (e.g., a textual content item, image content item, advertisement, online document) that correspond to a landing webpage. Content items in a content group can share the same or similar content selection criteria (e.g., keywords, entities, geographic criteria, or device type). Data about the content campaign, content group, or content item can be stored in the content campaign data structure 160. The data structure can include one or more fields including, for example, fields for an identifier of a content provider, an identifier of a content campaign, an identifier of a content group, an identifier of a content item, a filename or pointer to an image or data file of the content item, content selection criteria such as keywords, or a geographic location associated with the content item or content provider. Table 1 illustrates an example content campaign data structure.

TABLE 1
Illustration of an example content campaign data structure in accordance with an implementation.

Content Provider | Content Item ID | Content Selection Criteria (kw) | Geographic Location | Landing Page
Company_A | A1 | Pizza | San Jose, California | www_companyA_com
Company_A | A2 | Calzone | San Jose, California | www_companyA_com
Company_A | A3 | Subs | San Jose, California | www_companyA_com
Company_B | B1 | Museum | San Jose, California | www_companyB_com
Company_C | C1 | Amusement park | Anaheim, California | www_companyC_com
Company_D | D1 | Hotel | San Jose, California | www_companyD_com
Company_E | E1 | Auto repair | San Jose, California | www_companyE_com

As illustrated in the example content campaign data structure shown in Table 1, content providers can establish a content campaign with one or more fields or data that facilitates content selection by the data processing system 120. To minimize latency in selecting content, the data processing system 120 (e.g., via location engine 130) can determine a boundary or extent. The data processing system can compare the location in the content campaign data structure to determine whether the location is within the boundary determined by the location engine 130.
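A minimal sketch of this boundary comparison against Table 1-style records, using a simplified "same city" match (the field names are illustrative; a production system could instead test a point against a {latitude, longitude, radius} boundary):

```python
def filter_by_boundary(content_items: list[dict], boundary_location: str) -> list[dict]:
    """Keep only content items whose geographic location field satisfies the boundary."""
    return [item for item in content_items
            if item["geographic_location"] == boundary_location]

campaign_items = [
    {"content_item_id": "A1", "geographic_location": "San Jose, California"},
    {"content_item_id": "A2", "geographic_location": "San Jose, California"},
    {"content_item_id": "B1", "geographic_location": "San Jose, California"},
    {"content_item_id": "C1", "geographic_location": "Anaheim, California"},
]
first_subset = filter_by_boundary(campaign_items, "San Jose, California")
# first_subset retains A1, A2, and B1; C1 falls outside the boundary.
```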
The data processing system 120 can filter out content items based on the comparison. In the example illustrated in Table 1, the data processing system 120 can determine that the boundary corresponds to a city or town such as San Jose, Calif. The data processing system 120 can then filter out content items that are not in the determined boundary. In this case, the data processing system 120 can identify content items A1, A2, B1, and C1. The data processing system 120 can determine that A1, A2 and B1 correspond to San Jose, but C1 corresponds to Anaheim, which is outside the determined boundary. Thus, the data processing system 120 can establish a subset of content items as {A1, A2, B1} based on the boundary. The data processing system can use the established subset of content items for further processing. Using the filtered subset, the data processing system 120 can select a second subset of content items based on category information. For example, the data processing system 120 can select a second subset of content items that are assigned to a first category of one or more categories based on a load balancing technique, such as a smoothing function. The load balancing technique can include a smoothing technique or function or policy configured to improve resource allocation. The load balancing technique can facilitate reducing wasted resources or inefficient resource allocation, or optimizing resource use. The data processing system 120 can be configured with a load balancing technique that establishes categories based on campaign parameters such as campaign duration, budget, or resource availability. For example, the data processing system 120 can establish a category based on the amount of duration left in a content campaign and the amount of budget remaining in the content campaign. The category can be based on a ratio of the duration and budget. The duration can include a time interval remaining, a percent of duration remaining, or a number of days, hours, minutes or seconds remaining in the content campaign. The time remaining can refer to the start and end of a content campaign as established by the content provider that set up the content campaign. For example, a content campaign can have a duration of 1 week, 2 weeks, 30 days, 1 month, 60 days, 90 days, 180 days or some other time interval or duration. The duration can be defined as {begin_date_time, end_date_time}. The duration can be stored in the campaign data structure. The data processing system 120 can determine a duration remaining by monitoring a current time and determining the difference between the current time and the end_date_time, and dividing the difference by the duration as determined by the difference between the end_date_time and the begin_date_time. For example, if the duration is 7 days and 4 days have been completed, then there are 3 days left, or approximately 43% of the duration remaining. The budget for the content campaign can be established by the content provider. The budget can refer to the amount of resources or monetary amount the content provider has allocated for the content campaign. The budget can be in a currency (e.g., United States dollars or other currency), points, tokens, or other unit that indicates an amount of a resource. The budget can be stored in the content campaign data structure. Table 2 illustrates a content campaign data structure including budget and duration fields.
TABLE 2
Example illustration of a content campaign data structure

Content Provider | Content Item ID | Geographic Location | Resource/Budget (USD) | Duration
Company_A | A1 | San Jose, California | 100 | 7 days
Company_A | A2 | San Jose, California | 100 | 7 days
Company_A | A3 | San Jose, California | 100 | 7 days
Company_B | B1 | San Jose, California | 100 | 7 days
Company_C | C1 | Anaheim, California | 100 | 7 days
Company_D | D1 | San Jose, California | 50 | 7 days
Company_E | E1 | San Jose, California | 50 | 7 days

To determine the duration and resource availability, the data processing system 120 (e.g., content selector 135) can determine the amount of resources consumed for each content campaign and the duration remaining for each content campaign. The data processing system 120 can determine the amount of resources consumed for a content campaign using one or more techniques that monitor the resource allocation or consumption of the content campaign. For example, the data processing system 120 can be configured with a metering module that monitors resource consumption. In some cases, the data processing system 120 can meter performance metrics of the content campaign, and compute or determine the resource consumption based on the performance metric. For example, the data processing system 120 can store impression records that indicate a number of selections of a content campaign and a number of conversions. The impression record data structure 165 can be stored in data repository 155. The impression record data structure 165 can include a table format or other data format for storing, maintaining, organizing or manipulating impression records. An impression record may refer to an instance of displaying a content item on a web page. The impression may include information about the web page on which the content item is displayed (e.g., uniform resource locator of the web page, location/position of the content slot, keywords of the web page), search query input by the user into a search engine that resulted in the content item being selected, a keyword of the content item and/or a keyword of the web page or search query that resulted in the content item being selected for display (e.g., via a broad, phrase or exact match or other relevancy or similarity metric), time stamp associated with the impression, geographic location of the computing device 110 on which the content item is displayed, or type of device. The data processing system 120 may store content item impression records in the data repository 155 on a temporary basis and remove or delete the impression records after some duration (e.g., 24 hours, 48 hours, 72 hours, 30 days, 60 days, 90 days, etc.). The data processing system 120 may remove the impression records responsive to an event, condition or trigger. For example, the data processing system 120 may delete the impression record responsive to a request to delete impression history information, or after a time interval or duration after termination of the call associated with the impression. The data processing system 120 can determine, based on the number of conversions or selections, the budget spent on the conversions or selections. In another example, the data processing system 120 can determine the amount of budget spent based on a winning bid amount. For example, the content selector 135 can initiate an online auction to select content items, where content providers associate bids with content items. The content item associated with the highest ranked bid can be selected for display on an information resource.
This highest ranked bid amount can correspond to the resource and be deducted from a resource account of the content provider. Thus, the data processing system 120 can determine, for a content campaign, at a given time, a ratio based on an amount of budget remaining and the remaining duration. The data processing system 120 can establish categories based on the ratio of the budget remaining and remaining duration as follows: budget remaining divided by ("/") duration remaining; budget remaining (%)/duration remaining (# of days); budget remaining (%)/duration remaining (# of minutes); or budget remaining (%)/duration remaining (%). Table 3 illustrates an example data structure storing the budget remaining, duration remaining, and the ratio for each content item that satisfies the boundary of San Jose, Calif.

TABLE 3
Illustration of an example data structure storing budget remaining, duration remaining, and a ratio of budget remaining to duration remaining for each content item

Content Provider | Content Item ID | Geographic Location | Budget Remaining % | Duration Remaining | Budget/Duration
Company_A | A1 | San Jose, California | 80 | 1 day | 80%
Company_A | A2 | San Jose, California | 80 | 1 day | 80%
Company_A | A3 | San Jose, California | 80 | 1 day | 80%
Company_B | B1 | San Jose, California | 80 | 2 days | 40%
Company_D | D1 | San Jose, California | 100 | 4 days | 25%
Company_E | E1 | San Jose, California | 50 | 5 days | 10%

Using the determined ratio, the data processing system 120 can categorize content items. Categories can include, for example, 80% or above; 60 to 79%; 40 to 59%; 20 to 39%; 0 to 19%. Table 4 illustrates a data structure that assigns or allocates content items based on categories.

TABLE 4
Illustration of a data structure allocating content items to categories

Category # | Budget remaining (%)/duration remaining (# of days) | Content Items
1 | 100-80 | A1, A2, A3
2 | 79-60 | none
3 | 59-40 | B1
4 | 39-20 | D1
5 | 19-0 | E1

The data processing system 120 can input the duration and the resource availability into the load balancing technique (e.g., smoothing function) to categorize the content items into categories, as shown in Table 4. The output of the load balancing technique can include a value such as the ratio of the budget remaining to duration remaining column illustrated in Table 3. This output can be used to categorize the content items as shown in Table 4, where the first category ranks higher than the second category, the second category ranks higher than the third category, the third category ranks higher than the fourth category, and the fourth category ranks higher than the fifth category. In some implementations, the data processing system 120 can use more or fewer than five categories, and different category limits or bounds. In some cases, the data processing system 120 can further assign a priority value to each content item. The data processing system 120 can access an impression record data structure 165 for the set of content items to retrieve, from a view field, a priority value for each of the first plurality of content items. The priority value can indicate whether the content item has been previously provided for presentation on or via a computing device. For example, a higher priority value can correspond to the content item not being previously provided for display, whereas a lower priority value indicates that the content item has already been provided for presentation to the computing device.
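A compact sketch of the Table 3/Table 4 bookkeeping, with hypothetical function names; the category bounds mirror the example above:

```python
def budget_duration_ratio(budget_remaining_pct: float, days_remaining: float) -> float:
    """Ratio of budget remaining (%) to duration remaining (days), as in Table 3."""
    if days_remaining <= 0:
        return float("inf")
    return budget_remaining_pct / days_remaining

# Category bounds mirroring Table 4: category 1 covers 80-100%, category 5 covers 0-19%.
CATEGORY_BOUNDS = [(1, 80.0), (2, 60.0), (3, 40.0), (4, 20.0), (5, 0.0)]

def categorize(ratio: float) -> int:
    for category, lower_bound in CATEGORY_BOUNDS:
        if ratio >= lower_bound:
            return category
    return CATEGORY_BOUNDS[-1][0]

# Consistent with Table 3: A1 has 80% budget left with 1 day remaining -> category 1,
# while E1 has 50% budget left with 5 days remaining -> ratio of 10% -> category 5.
assert categorize(budget_duration_ratio(80.0, 1.0)) == 1
assert categorize(budget_duration_ratio(50.0, 5.0)) == 5
```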
The data processing system 120 can assign the priority value based on whether the content item has been displayed or a number of times the content item has been displayed on a particular device, to a user, or across any computing devices. The data processing system 120 can assign the priority value based on whether the content item has been displayed or a number of times the content item has been displayed on a particular device, to a user, or across any computing devices during a time interval. For example, the data processing system 120 can assign a priority value of 1 to content item A1 if it has not been displayed to computing device 110 during the duration of the content campaign corresponding to content item A1. The data processing system 120 can assign a priority value of 2 (which may be ranked lower than 1) to content item A2. The data processing system 120 can select content items that have a higher priority instead of content items that have a lower priority such that content items that have not been previously presented on a computing device are presented before content items that have already been presented. The data processing system 120 can determine a second subset of content items from the first subset of content items (e.g., those content items included in Table 4) based on a combination of the category and the priority value. The data processing system 120 can select the second subset based on the highest category and priority pairing. For example, category 1 can be the highest ranked category based on the load balancing technique. Category 1 can include three content items A1, A2 and A3. Within category 1, content items A1 and A2 can have a high priority because they have not been previously presented, and content item A3 can have a low priority because it has been previously presented on the computing device. To select the content item to provide for display, the data processing system 120 can retrieve, from a data structure stored in memory, values (or features or scores) corresponding to the subset of content items. The values can be generated by the data processing system 120 using an offline process. The values can indicate a likelihood of interaction with the content item, such as a predicted click through rate or a predicted conversion rate. The data processing system 120 (e.g., via content selector 135) can input the values, features or scores for the content items into a classifier 140 to select the content item from the highest group/priority pairing. In some cases, latency can be minimized by only processing the content items corresponding to the second subset, which can include content items A1 and A2. To further minimize latency in the process of providing content item objects in real-time for display on a graphical environment, the data processing system 120 can precompute some of the features or values in an offline process. The precomputed values or features can be stored in the value data structure 170. The data processing system can input a combination of predetermined features or values and real-time features or values into the classifier 140 responsive to a request for the content item that lacks a keyword. Example values or features can include:
- Determine values based on historical terms input by a plurality of different computing devices for the location. For example, the data processing system can identify search terms that are searched for near a location.
For example, if the data processing system historically receives a number or frequency of search queries for Gardens near San Jose, then when the data processing system receives a search query for just "San Jose", the data processing system can increase the likelihood that content items for Garden shops are selected.
- Increase the likelihood of selection for content items corresponding to the suburb/area around the location; if the search area is too specific (e.g., a residential address), then distance to the search area can be used in ranking the content items.
- Increase the likelihood of selection of content items corresponding to popular stores based on foot traffic.
- Number of times the computing device has presented the content item object (lower is better).
- Number of computing devices that have presented the content item (lower is better).
- Determine a value based on a number of HTML requests to access a resource corresponding to each of the second plurality of content items. For example, a number of HTML requests to access an information resource or website of the campaign in the last 30 days (higher is better).
- Number of visits to a store owned by the campaign owner or content provider in the last 30 days (higher is better).
- Is a user interested in the category of the campaign (e.g., boats)? (yes is better).
- When did the user last research the category of the campaign (e.g., boats)? (sooner is better).
- Determine a value based on a Doppler radar forecast for the location. For example, are the environmental conditions (time of day, weather, etc.) suited to the campaign? (more matches are better).
- Click through rate on previous showings.

The classifier 140 can be configured with or include a machine learning technique or engine. For example, the classifier 140 can be configured with a Naive Bayes classifier, which can refer to a probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions between the features. The classifier 140 can construct or maintain a model that assigns class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. The classifier can use a family of techniques based on a common principle; e.g., naive Bayes classifiers can assume that the value of a particular feature is independent of the value of any other feature, given the class variable. For example, a user may be interested in purchasing sunscreen if the weather is sunny, the temperature is 80 degrees, the location boundary includes a beach, the time of day is between 9 AM and 12 PM, and the day is a Saturday. A naive Bayes classifier considers each of these features to contribute independently to the probability that a user may be interested in a content item, regardless of any possible correlations between these features. In some cases, the classifier can be trained in a supervised learning setting. For example, parameter estimation for naive Bayes models can use the method of maximum likelihood. The data processing system 120 can select, responsive to a content request lacking a keyword (or having no keywords or having no keywords in addition to geographic location information such as an address or latitude and longitude coordinate), a content item object from the subset of content items based on the output of the classifier using the plurality of values.
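A brief sketch of how precomputed feature values for the remaining candidates could be fed to a naive Bayes classifier to rank them by predicted likelihood of interaction. The feature names, toy training data, and the use of scikit-learn's BernoulliNB are illustrative assumptions rather than details from the source:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Each row is a candidate's binary feature vector, e.g. [not previously shown,
# matches a historical nearby search term, environmental conditions suited].
training_features = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
])
training_labels = np.array([1, 1, 0, 0])  # 1 = interacted with, 0 = not (toy labels)

classifier = BernoulliNB()
classifier.fit(training_features, training_labels)

# Score the second subset of candidates (e.g., A1 and A2) and pick the highest.
candidates = {"A1": [1, 1, 0], "A2": [0, 1, 1]}
scores = {item_id: classifier.predict_proba(np.array([features]))[0][1]
          for item_id, features in candidates.items()}
selected = max(scores, key=scores.get)
```

The feature vector could contain only offline (precomputed) values, or a mix of offline values and online scores such as the bandwidth-based score discussed below.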
A content request can lack a keyword if the content request only includes or is associated with location information, such as geographic location information. The content request can have no additional keywords such as topical keywords, verticals, or concepts. The content request can have no non-geographic location information. To select the content item object, the data processing system 120 can input content items A1 and A2 into the classifier, and the output may indicate that a likelihood of interaction with content item A1 is higher than content item A2. The data processing system 120 can then select content item A1 for presentation to the computing device. In some cases, the data processing system 120 can only input values determined offline into the classifier 140 to select a content item. In some cases, the data processing system can use a combination of offline and real-time values to select the content item. For example, the data processing system 120 can determine the current weather or time of day as a real-time value, the mode of transportation of the computing device, or profile information associated with the computing device. In some cases, the data processing system 120 can use resource utilization or availability information. For example, the data processing system 120 can obtain, from a resource monitor 180 executed by the computing device 110, information about available bandwidth, network connectivity, processor utilization, memory utilization, etc. The data processing system 120 can compare a utilization metric with a threshold to determine whether to select a content item object. For example, if there is low bandwidth, the data processing system 120 may determine to display a content item object that has a smaller data file size. For example, the data processing system 120 may provide a content item object that is a text content item object, as opposed to an image content item object, video content item object, or multimedia content item object. By selecting the text content item object, the data processing system 120 can minimize latency in the graphic environment by providing a content item that consumes fewer resources, thereby allowing the graphic environment or map application to take priority in resource use. The data processing system 120 can use a combination of real-time values and offline values to select a content item based on information from the resource monitor 180. The data processing system can identify a bandwidth availability for the computing device 110. The data processing system can generate a value or score using an online process based on whether the bandwidth satisfies a threshold (e.g., 0.5 Mbps, 1 Mbps, 2 Mbps, 5 Mbps, or 10 Mbps). For example, a higher score can indicate a satisfactory bandwidth, and a lower score can indicate an unsatisfactory bandwidth. The score can be based on a data file size of the content item object. For example, the score for A1 can be high if the data file size for A1 is low and the bandwidth does not satisfy the threshold. The score for A2 can be low if the data file size is high and the bandwidth does not satisfy the threshold. In another example, the score can be high if the file size is high but the bandwidth satisfies the threshold, indicating that latency may not be introduced by a high file size because there is sufficient bandwidth.
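The bandwidth-aware scoring just described can be sketched as a small function. The threshold, score values, and item attributes below are illustrative assumptions rather than values from the patent; the point is only that a low-bandwidth client pushes selection toward smaller (e.g., text-only) content item objects.

```python
# Illustrative sketch: score candidate content item objects by data file size
# relative to the client's available bandwidth reported by a resource monitor.

BANDWIDTH_THRESHOLD_MBPS = 2.0   # assumed threshold; the patent lists several examples

def resource_score(file_size_kb: float, bandwidth_mbps: float,
                   threshold_mbps: float = BANDWIDTH_THRESHOLD_MBPS) -> float:
    """Higher scores favor selection. If bandwidth satisfies the threshold,
    file size barely matters; otherwise smaller files score higher."""
    if bandwidth_mbps >= threshold_mbps:
        return 1.0                       # sufficient bandwidth: size introduces little latency
    return 1.0 / (1.0 + file_size_kb)    # constrained bandwidth: penalize large files

# Hypothetical candidates: A1 is a small text content item object, A2 a large video one.
candidates = {"A1": 5.0, "A2": 5000.0}   # data file sizes in KB
reported_bandwidth = 0.5                 # Mbps, as reported by the resource monitor

scores = {item: resource_score(size, reported_bandwidth) for item, size in candidates.items()}
print(max(scores, key=scores.get))       # selects A1 under low bandwidth
```

In a fuller implementation this resource score would be one input among the offline and real-time values fed to the classifier, rather than the sole selection criterion.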
The score can indicate a likelihood of interaction because minimizing latency improves the user interface and user experience, thereby increasing the likelihood of interaction. Responsive to the search query or other request for content lacking a keyword (or having no non-geographic information such as topics, concepts, or adjectives, or having only geographic location information such as an address, zip code, or latitude and longitude coordinates), the data processing system 120 (e.g., via content selector 135) can identify, select or otherwise obtain content to be provided or presented via the computing device 110 making the request, or some other computing device 110 associated with the request for content. In some implementations, the data processing system 120 may identify, select, or otherwise obtain content not responsive to receiving any request. The content may include, e.g., text, characters, symbols, images, video, audio, or multimedia content. The content may include a phone number, a virtual phone number, or a call extension. The content item may include a link provided by content providers and included by the data processing system (e.g., via the content selector) for display with the search results page. The content item may include a link or button to a phone number that facilitates providing reporting data to a content provider. In cases where the content item includes a virtual phone number or a call extension, the content item may be referred to as a call content item. The request for content can include a request for an online advertisement, article, promotion, coupon, or product description. The data processing system 120 can receive the request from a computing device such as, e.g., computing device 110. For example, the data processing system 120 can receive the request via an application executing on the computing device 110, such as a mobile application executing on a mobile device (e.g., smart phone or tablet). In some cases, the data processing system 120 may not receive a separate request for content and, instead, select and provide the content responsive to the location search query lacking keywords (or having no keywords other than geographic location information such as an address, zip code, city, town, or latitude/longitude coordinates). In some instances, an information resource may request content from the data processing system 120 responsive to a user visiting the information resource (e.g., via a mobile device 110). The request for content can include information that facilitates content selection. The data processing system 120 may request or obtain information responsive to receiving a request for content from the computing device 110. The information may include information about displaying the content on the computing device 110 (e.g., a content slot size or position) or available resources of the computing device 110 to display or otherwise manipulate the content. The data processing system 120 may identify multiple content items (e.g., a first candidate content item and a second candidate content item) that are responsive to the request for content, or are otherwise candidates for display in the graphic environment. The data processing system may initiate or utilize an online auction process to select one or more of the multiple content items for display on the online document. An auction system may determine two or more bids for content items to be displayed in an online document.
The auction system can run the bids through an auction process to determine one or more highest ranking bids or winning bids. Content items corresponding to the highest ranking or winning bids may be selected for display on or with the online document. The data processing system can provide, via a network, the selected content item object to the computing device to cause the computing device to render the content item object on an electronic map in the graphical environment. In some cases, the data processing system 120 or computing device 110 can convert a portion of the electronic map into structured content configured to display the content item object. For example, the computing device 110 can execute an application 185 such as a web browser or a mapping application. The data processing system 120 can provide the content item object to the computing device to cause the application 185 to render the content item object in the graphic environment. FIG. 2 is an illustration of a non-textual graphical environment with content item objects selected and delivered by a data processing system in accordance with an implementation. The computing device 110 can execute an application 185 that provides the graphic environment 200. The graphical environment 200 can include or display non-textual graphic content such as a map. A map can also be referred to as unstructured content because it is not a structured data set, such as an article that includes keywords related to a topic or concept. The environment 200 can include an input text box 210 for a location search query. The location search query can include only location terms without additional keywords. For example, a location term can be a city, town, or state without additional keywords, such as “San Jose, Calif.”. The graphic environment can include one or more content slots 215, 220 and 225. The content item slots 220 and 225 can be overlaid on the graphic content within the boundary region 205. The content item slot 215 can be outside the graphic content boundary region 205. In some cases, the application 185 can convert a portion of the electronic map (e.g., slot 225 or 220) into structured content configured to display the content item object. The electronic map can include, e.g., one or more of a street view, satellite view, map view, hybrid view, or augmented reality view. The environment 200 can include a zoom element 230. In some cases, the data processing system can receive an indication to zoom the electronic map. The data processing system can remove, responsive to the indication, one or more content items that are not viewable. For example, the map may have initially included Anaheim, Calif. and content item objects for Anaheim, but responsive to the user zooming into California, the data processing system 120 removes the content item objects outside the boundary 205. The data processing system 120 can receive the location search query 210 lacking keywords and select one or more content item objects for presentation via content item slots 215, 220 and 225. In some cases, the content item object can indicate a location of the content item object, as shown by content item object 225 pointing to a location on the map. FIG. 3 is an illustration of a method of reducing latency in a graphical environment provided via information technology infrastructure in accordance with an implementation. The method 300 can be performed via one or more system, component or interface illustrated in FIG. 1, FIG. 2, or FIG.
5, including, e.g., a data processing system, location engine, content selector, classifier, resource monitor, mapping engine, or data repository. In brief overview, and in some implementations, the method 300 includes a data processing system receiving location information at act 310. At act 315, the data processing system determines a first plurality of content items that satisfy a boundary. At act 320, the data processing system selects a second plurality of content items corresponding to a category. At act 325, the data processing system retrieves values for the second plurality of content items that indicate a likelihood of interaction. At act 330, the data processing system selects a content item object from the second plurality of content items. At act 335, the data processing system provides the content item object to a computing device to render on an electronic map. Still referring to FIG. 3, and in further detail, a data processing system receives location information at act 310. The data processing system can receive location information from a computing device via a network. The data processing system can receive location information input into an application executed by a computing device. For example, a user can input a location search query into the application executed by the computing device that includes a city and state, zip code, or county. The application can transmit the location to the data processing system. In some cases, the data processing system can determine the location information using location sensor information. For example, the computing device can include a location sensor such as a GPS module. The data processing system can receive the GPS information and determine location information. At act 315, the data processing system determines a first plurality of content items that satisfy a boundary. The data processing system can use the location information to generate, define or otherwise identify a boundary. The boundary can be based on a zoom level of an image or graphical content displayed or presented on the computing device. The boundary can be based on the viewable display size of the computing device or resolution of the computing device. The boundary can refer to the viewable graphic content on the computing device. For example, in a map application, the boundary can refer to the viewable geographic area displayed or presented on the computing device. The data processing system, using the boundary, can identify content items that correspond to a location within the boundary. For example, each content item can include a location (e.g., an address of a restaurant or retail shop; flight tickets to a geographic destination; or location keywords for a product such as sun screen or bathing suits). The data processing system can remove content items that do not correspond to a location within the boundary to create a first subset of content items having a location within the boundary. At act 320, the data processing system selects a second subset of content items corresponding to a category. The data processing system can use a load balancing technique to identify content items that have a high amount of budget remaining and a low duration remaining in order to facilitate using the entire budget for the content item. For example, a content provider may desire to spend their entire established budget for the content campaign.
Thus, based on the ratio of the remaining budget and remaining duration, the data processing system can prioritize selecting the content item using different categories. The categories can be based on the ratio of budget remaining and duration remaining. In some cases, the data processing system can further prioritize content items within the category based on whether the content item has been previously presented. At act 325, the data processing system retrieves values for the second subset of content items that indicate a likelihood of interaction. The values can be precomputed using an offline classifier. The values can be based on features and indicate a likelihood of interaction. By using offline values, the data processing system can minimize latency. At act 330, the data processing system can select a content item object from the second subset of content items. The data processing system can select the content item using an online auction process that determines a score based on the offline scores, real-time scores, or bid amounts associated with the content items. At act 335, the data processing system provides the content item object to a computing device to render on an electronic map. FIG. 4 is an illustration of a method of displaying content within a graphical information resource displayed via a computing device, in accordance with an implementation. The method 400 can be performed via one or more system, component or interface illustrated in FIG. 1, FIG. 2, or FIG. 5, including, e.g., a data processing system, location engine, content selector, classifier, resource monitor, mapping engine, or data repository. In some implementations, the method 400 includes a data processing system receiving an indication of an extent or boundary of a graphical information resource at act 410. At act 415, the data processing system selects a first subset of content items based on the extent. The data processing system can select the first subset of content items that include or are associated with a location that is within the boundary. At act 420, the data processing system compares parameters (e.g., values or features) of content items of the first subset to determine relative classifications for the content items within the first subset. The data processing system can use a classifier and input parameters that are computed offline or online in real-time (e.g., responsive to a request for content from the computing device). The data processing system can use the parameters to determine a classification that indicates a likelihood of interest (e.g., selection or conversion of the content item). At act 425, the data processing system selects, from the first subset, a second subset of content items based upon the relative classifications. The data processing system can select the second subset as the content items that have a classification score that ranks high. At act 430, the data processing system provides the second subset of the content items for display within the graphical information resource. FIG. 5 is a block diagram of a computer system 500 in accordance with an illustrative implementation. The computer system or computing device 500 can be used to implement the system 100, content provider 125, computing device 110, content publisher 115, data processing system 120, location engine 130, content selector 135, classifier 140, resource monitor 145, mapping engine 150 and data repository 155. 
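Pulling the acts of FIG. 3 together, the following is a minimal end-to-end sketch of the selection flow described above: boundary filtering (act 315), the budget/duration load-balancing category (act 320), the previously-presented priority, and selection by precomputed score (acts 325-330). All item names, coordinates, budgets, and category cut-offs are hypothetical placeholders, and the sketch collapses the auction of act 330 into simply picking the highest offline score.

```python
# End-to-end sketch of the selection flow of FIG. 3 (acts 310-335), with
# hypothetical content items, budgets, durations, and coordinates.
from dataclasses import dataclass

@dataclass
class ContentItem:
    name: str
    lat: float
    lon: float
    budget_remaining: float    # e.g., in dollars
    days_remaining: float      # remaining campaign duration
    previously_presented: bool
    offline_score: float       # precomputed likelihood-of-interaction value

def within_boundary(item: ContentItem, bounds) -> bool:
    """Act 315: keep items whose location falls inside the viewable map boundary."""
    min_lat, max_lat, min_lon, max_lon = bounds
    return min_lat <= item.lat <= max_lat and min_lon <= item.lon <= max_lon

def category(item: ContentItem) -> int:
    """Act 320 (load balancing): lower category number means a higher ratio of
    remaining budget to remaining duration, i.e., campaigns that most need to spend."""
    ratio = item.budget_remaining / max(item.days_remaining, 1e-9)
    return 1 if ratio >= 100 else 2 if ratio >= 10 else 3   # illustrative cut-offs

def select(items, bounds) -> ContentItem:
    in_bounds = [i for i in items if within_boundary(i, bounds)]          # act 315
    best_cat = min(category(i) for i in in_bounds)                        # act 320
    pool = [i for i in in_bounds if category(i) == best_cat]
    unseen = [i for i in pool if not i.previously_presented]              # priority
    pool = unseen or pool
    return max(pool, key=lambda i: i.offline_score)                       # acts 325-330

items = [
    ContentItem("A1", 37.33, -121.89, 5000, 5, False, 0.8),
    ContentItem("A2", 37.34, -121.90, 4000, 5, False, 0.6),
    ContentItem("A3", 37.32, -121.88, 3000, 5, True, 0.9),
    ContentItem("B1", 34.05, -118.24, 9000, 5, False, 0.7),   # outside the example boundary
]
print(select(items, (37.2, 37.5, -122.1, -121.7)).name)       # -> "A1"
```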
The computing system 500 includes a bus 505 or other communication component for communicating information and a processor 510 or processing circuit coupled to the bus 505 for processing information. The computing system 500 can also include one or more processors 510 or processing circuits coupled to the bus for processing information. The computing system 500 also includes main memory 515, such as a random access memory (RAM) or other dynamic storage device, coupled to the bus 505 for storing information, and instructions to be executed by the processor 510. Main memory 515 can also be used for storing position information, temporary variables, or other intermediate information during execution of instructions by the processor 510. The computing system 500 may further include a read only memory (ROM) 520 or other static storage device coupled to the bus 505 for storing static information and instructions for the processor 510. A storage device 525, such as a solid state device, magnetic disk or optical disk, is coupled to the bus 505 for persistently storing information and instructions. The computing system 500 may be coupled via the bus 505 to a display 535, such as a liquid crystal display, or active matrix display, for displaying information to a user. An input device 530, such as a keyboard including alphanumeric and other keys, may be coupled to the bus 505 for communicating information and command selections to the processor 510. The input device 530 can include a touch screen display 535. The input device 530 can also include a cursor control, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor 510 and for controlling cursor movement on the display 535. The processes, systems and methods described herein can be implemented by the computing system 500 in response to the processor 510 executing an arrangement of instructions contained in main memory 515. Such instructions can be read into main memory 515 from another computer-readable medium, such as the storage device 525. Execution of the arrangement of instructions contained in main memory 515 causes the computing system 500 to perform the illustrative processes described herein. One or more processors in a multi-processing arrangement may also be employed to execute the instructions contained in main memory 515. In alternative implementations, hard-wired circuitry may be used in place of or in combination with software instructions to effect illustrative implementations. Thus, implementations are not limited to any specific combination of hardware circuitry and software. Although an example computing system has been described in FIG. 5, implementations of the subject matter and the functional operations described in this specification can be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. 
The subject matter described in this specification can be implemented as one or more computer programs, e.g., one or more circuits of computer program instructions, encoded on one or more computer storage media for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate components or media (e.g., multiple CDs, disks, or other storage devices). The operations described in this specification can be performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. The term “data processing apparatus” or “computing device” encompasses various apparatuses, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a circuit, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more circuits, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. 
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, implementations of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular implementations of particular inventions. Certain features described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. 
In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated in a single software product or packaged into multiple software products. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms. Thus, particular implementations of the subject matter have been described. Other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. 16096175 google llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Mar 15th, 2018 12:00AM https://www.uspto.gov?id=US11315554-20220426 Methods, systems, and media for connecting an IoT device to a call Methods, systems, and media for connecting an IoT device to a call are provided. In some embodiments, a method is provided, the method comprising: establishing, at a first end-point device, a telecommunication channel with a second end-point device; subsequent to establishing the telecommunication channel, and prior to a termination of the telecommunication channel, detecting, using the first end-point device, a voice command that includes a keyword; and in response to detecting the voice command, causing information associated with an IoT device that corresponds to the keyword to be transmitted to the second end-point device. 11315554 1. A method comprising: establishing, at a first end-point device, a telecommunication channel with a second end-point device; subsequent to establishing the telecommunication channel, and prior to a termination of the telecommunication channel, detecting, using the first end-point device, a voice command that includes a phrase; determining whether the phrase in the voice command identifies an IoT device and an operation to be performed by the identified IoT device and determining whether the voice command corresponds with a first command to transmit stored device information associated with the IoT device to an entity associated with the second end-point device or corresponds with a second command to add the IoT device to the telecommunication channel between the first end-point device and the second end-point device that a user of the first end-point device is currently engaged in; and in response to determining that the phrase in the voice command specifies the IoT device and that the voice command corresponds to the command to add the IoT device to the telecommunication channel between the first end-point device and the second end-point device that a user of the first end-point device is currently engaged in, causing the IoT device to join the telecommunication channel and causing information associated with the IoT device that is specified in the phrase to be transmitted to the second end-point device. 2. The method of claim 1, further comprising determining that the first end-point device and the IoT device are connected to a particular local area network. 3. The method of claim 1, wherein the information associated with the IoT device is transmitted by the IoT device to the second end-point device using the telecommunication channel. 4. The method of claim 1, wherein the second end-point device is associated with a user account, and wherein the information associated with the IoT device is transmitted to the user account. 5. The method of claim 1, wherein the information associated with the IoT device comprises status information related to a status of the IoT device. 6. The method of claim 1, wherein the information associated with the IoT device comprises information that can be used to initiate a connection with the IoT device, and wherein the method further comprises: causing, using a connection initiated by the second end-point device, status information related to a status of the IoT device to be transmitted by the IoT device to the second end-point device. 7. 
The method of claim 1, further comprising: determining that the IoT device is available for wireless communication; receiving an indication of the phrase; and determining that the phrase is to be associated with the IoT device based on the indication of the phrase. 8. The method of claim 7, wherein the indication of the phrase is based on a user input. 9. The method of claim 7, wherein the indication of the phrase is received from a server device that is associated with a user account with which the IoT device is associated. 10. A system comprising: a hardware processor that is configured to: establish, at a first end-point device, a telecommunication channel with a second end-point device; subsequent to establishing the telecommunication channel, and prior to a termination of the telecommunication channel, detect, using the first end-point device, a voice command that includes a phrase; determine whether the phrase in the voice command identifies an IoT device and an operation to be performed by the identified IoT device and determine whether the voice command corresponds with a first command to transmit stored device information associated with the IoT device to an entity associated with the second end-point device or corresponds with a second command to add the IoT device to the telecommunication channel between the first end-point device and the second end-point device that a user of the first end-point device is currently engaged in; and in response to determining that the phrase in the voice command specifies the IoT device and that the voice command corresponds to the command to add the IoT device to the telecommunication channel between the first end-point device and the second end-point device that a user of the first end-point device is currently engaged in, cause the IoT device to join the telecommunication channel and causing information associated with the IoT device that is specified in the phrase to be transmitted to the second end-point device. 11. The system of claim 10, wherein the hardware processor is further configured to determine that the first end-point device and the IoT device are connected to a particular local area network. 12. The system of claim 10, wherein the information associated with the IoT device is transmitted by the IoT device to the second end-point device using the telecommunication channel. 13. The system of claim 10, wherein the second end-point device is associated with a user account, and wherein the information associated with the IoT device is transmitted to the user account. 14. The system of claim 10, wherein the information associated with the IoT device comprises status information related to a status of the IoT device. 15. The system of claim 10, wherein the information associated with the IoT device comprises information that can be used to initiate a connection with the IoT device, and wherein the hardware processor is further configured to: cause, using a connection initiated by the second end-point device, status information related to a status of the IoT device to be transmitted by the IoT device to the second end-point device. 16. The system of claim 10, wherein the hardware processor is further configured to: determine that the IoT device is available for wireless communication; receive an indication of the phrase; and determine that the keyword is to be associated with the IoT device based on the indication of the phrase. 17. The system of claim 16, wherein the indication of the phrase is based on a user input. 18. 
The system of claim 16, wherein the indication of the phrase is received from a server device that is associated with a user account with which the IoT device is associated. 19. A non-transitory computer-readable medium containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method comprising: establishing, at a first end-point device, a telecommunication channel with a second end-point device; subsequent to establishing the telecommunication channel, and prior to a termination of the telecommunication channel, detecting, using the first end-point device, a voice command that includes a phrase; determining whether the phrase in the voice command identifies an IoT device and an operation to be performed by the identified IoT device and determining whether the voice command corresponds with a first command to transmit stored device information associated with the IoT device to an entity associated with the second end-point device or corresponds with a second command to add the IoT device to the telecommunication channel between the first end-point device and the second end-point device that a user of the first end-point device is currently engaged in; and in response to determining that the phrase in the voice command specifies the IoT device and that the voice command corresponds to the command to add the IoT device to the telecommunication channel between the first end-point device and the second end-point device that a user of the first end-point device is currently engaged in, causing the IoT device to join the telecommunication channel and causing information associated with the IoT device that is specified in the phrase to be transmitted to the second end-point device. 20. The non-transitory computer-readable medium of claim 19, the method further comprising determining that the first end-point device and the IoT device are connected to a particular local area network. 21. The non-transitory computer-readable medium of claim 19, wherein the information associated with the IoT device is transmitted by the IoT device to the second end-point device using the telecommunication channel. 22. The non-transitory computer-readable medium of claim 19, wherein the second end-point device is associated with a user account, and wherein the information associated with the IoT device is transmitted to the user account. 23. The non-transitory computer-readable medium of claim 19, wherein the information associated with the IoT device comprises status information related to a status of the IoT device. 24. The non-transitory computer-readable medium of claim 19, wherein the information associated with the IoT device comprises information that can be used to initiate a connection with the IoT device, and wherein the method further comprises: causing, using a connection initiated by the second end-point device, status information related to a status of the IoT device to be transmitted by the IoT device to the second end-point device. 25. The non-transitory computer-readable medium of claim 19, the method further comprising: determining that the IoT device is available for wireless communication; receiving an indication of the phrase; and determining that the keyword is to be associated with the IoT device based on the indication of the phrase. 26. The non-transitory computer-readable medium of claim 25, wherein the indication of the phrase is based on a user input. 27. 
The non-transitory computer-readable medium of claim 25, wherein the indication of the phrase is received from a server device that is associated with a user account with which the IoT device is associated. 27 CROSS-REFERENCE TO RELATED APPLICATION This application claims the benefit of U.S. Provisional Application No. 62/527,830, filed Jun. 30, 2017, which is hereby incorporated by reference herein in its entirety. TECHNICAL FIELD The disclosed subject matter relates to methods, systems, and media for connecting an Internet of things (IoT) device to a call. BACKGROUND IoT devices such as thermostats, light fixtures, cameras, speakers, and personal assistants are increasingly popular. Occasionally, a person may be engaged in a call and wish to provide another person in the call access to an IoT device, or information about an IoT device. However, no known mechanisms provide such access or information to the other person. Accordingly, it is desirable to provide new methods, systems, and media for connecting an IoT device to a call. SUMMARY In accordance with some embodiments of the disclosed subject matter, mechanisms for connecting an IoT device to a call are provided. In accordance with some embodiments of the disclosed subject matter, a method is provided, the method comprising: establishing, at a first end-point device, a telecommunication channel with a second end-point device; subsequent to establishing the telecommunication channel, and prior to a termination of the telecommunication channel, detecting, using the first end-point device, a voice command that includes a keyword; and in response to detecting the voice command, causing information associated with an IoT device that corresponds to the keyword to be transmitted to the second end-point device. In some embodiments, the method further comprises determining that the IoT device is available for wireless communication; receiving an indication of the keyword; and determining that the keyword is to be associated with the IoT device based on the indication of the keyword. In some embodiments, the indication of the keyword is based on a user input. In some embodiments, the indication of the keyword is received from a server device that is associated with a user account with which the IoT device is associated. In some embodiments, the method further comprises determining that the first end-point device and the IoT device are connected to a particular local area network. In some embodiments, the method further comprises causing the IoT device to join the telecommunication channel, wherein the information associated with the IoT device is transmitted by the IoT device to the second end-point device using the telecommunication channel. In some embodiments, the second end-point device is associated with a user account, and wherein the information associated with the IoT device is transmitted to the user account. In some embodiments, the information associated with the IoT device comprises status information related to a status of the IoT device. In some embodiments, the information associated with the IoT device comprises information that can be used to initiate a connection with the IoT device, and the method further comprises: causing, using a connection initiated by the second end-point device, status information related to a status of the IoT device to be transmitted by the IoT device to the second end-point device. 
In accordance with some embodiments of the disclosed subject matter, a system is provided, the system comprising a hardware processor that is configured to: establish, at a first end-point device, a telecommunication channel with a second end-point device; subsequent to establishing the telecommunication channel, and prior to a termination of the telecommunication channel, detect, using the first end-point device, a voice command that includes a keyword; and in response to detecting the voice command, cause information associated with an IoT device that corresponds to the keyword to be transmitted to the second end-point device. In accordance with some embodiments of the disclosed subject matter, a non-transitory computer-readable medium is provided, the non-transitory computer-readable medium containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method comprising: establishing, at a first end-point device, a telecommunication channel with a second end-point device; subsequent to establishing the telecommunication channel, and prior to a termination of the telecommunication channel, detecting, using the first end-point device, a voice command that includes a keyword; and in response to detecting the voice command, causing information associated with an IoT device that corresponds to the keyword to be transmitted to the second end-point device. BRIEF DESCRIPTION OF THE DRAWINGS Various objects, features, and advantages of the disclosed subject matter can be more fully appreciated with reference to the following detailed description of the disclosed subject matter when considered in connection with the following drawings, in which like reference numerals identify like elements. FIG. 1 shows a flow diagram of an example of a process for connecting an IoT device to a call in accordance with some embodiments of the disclosed subject matter. FIG. 2 shows a flow diagram of an example of a process for configuring voice commands in accordance with some embodiments of the disclosed subject matter. FIG. 3 shows an illustration of an example of a user interface for connecting an IoT device to a call in accordance with some embodiments of the disclosed subject matter. FIG. 4 shows a block diagram of an example of a system suitable for implementation of the mechanisms described herein for connecting an IoT device to a call in accordance with some embodiments of the disclosed subject matter. FIG. 5 shows a block diagram of an example of hardware that can be used in accordance with some embodiments of the disclosed subject matter for connecting an IoT device to a call. DETAILED DESCRIPTION In accordance with various embodiments of the disclosed subject matter, mechanisms (which can include methods, systems, and media) for connecting an IoT device to a call are provided. In some embodiments of the disclosed subject matter, the mechanisms described herein can connect a device to a call with a third party, and in response to detecting certain voice commands, can cause an IoT device to be added to the call, and/or can cause information about the IoT device to be transmitted to a third party. For example, the mechanisms can connect a mobile device to a call, and during the call, detect a user voice command that is associated with adding a particular IoT device, such as a refrigerator, to the call. To continue the example, in some embodiments, the mechanisms can then use stored connection information associated with the IoT device (e.g., an I.P. 
address, or a BLUETOOTH address) to initiate a connection with the IoT device, and cause the IoT device to join the call. After joining the call, the IoT device can then transmit information about the IoT device (e.g., a status of the device, an error code, etc.) to the third party. In some embodiments, the mechanisms can provide a user with the opportunity to configure and/or customize voice commands and the operations with which the voice commands are associated. For example, the mechanisms can allow a user to select which operations, and/or which devices, are to be associated with which voice commands. As a more particular example, the mechanisms can allow a user to configure one voice command for adding a television to a call, configure another voice command for adding a laptop computer to a call, and yet another voice command for causing the laptop computer to transmit device information associated with the laptop computer to a third party. As another example, the mechanisms can allow a user to select keywords and/or key phrases that are to be recognized as voice commands. As a more particular example, the mechanisms can provide a user with an opportunity to speak keywords and/or key phrases that the user wishes to be recognized as a voice command, detect the keywords and/or key phrases being spoken, and then store information pertaining to the keywords and/or key phrases detected, in connection with an operation to be performed, as a voice command. To continue this example, at a later time, the mechanisms can then detect that the user has spoken keywords and/or key phrases using, for example, a speech detection technique, recognize that the keywords and/or key phrases correspond to a configured voice command, and cause the corresponding operation to be performed. FIG. 1 shows a flow diagram of an example 100 of a process for connecting an IoT device to a call in accordance with some embodiments of the disclosed subject matter. At 102, process 100 can connect to a call with a third party. In some embodiments, at 102, process 100 can connect any suitable device to a call, such as an end-point device 410, as described below in connection with FIG. 4. In some embodiments, at 102, process 100 can connect to any suitable type of call. For example, process 100 can connect to a telephone call, a VoIP (“Voice over Internet Protocol”) call, a video call, an audio tele-conference call, or any other suitable type of call. At 104, process 100 can receive audio data. In some embodiments, at 104, process 100 can receive audio data by causing a microphone to be activated, and receiving the audio data via the microphone. Any suitable microphone can be used in some embodiments. For example, process 100 can use a microphone of an end-point device, a microphone of headphones connected to an end-point device, a microphone connected to an end-point device, a microphone of another device that is communicatively connected to an end-point device (e.g., using a wireless connection), or any other suitable microphone. In some embodiments, at 104, process 100 can receive audio data by capturing audio data that is being transmitted over a call with a third party. In some embodiments, at 104, process 100 can receive audio data via a transmission from an end-point device. For example, in a situation where process 100 is being executed at least in part using a server device (e.g., server device 402 as described below in connection with FIG. 
4), an end-point device can transmit audio data to process 100 via an Internet connection. In some embodiments, at 104, process 100 can receive audio data having any suitable format. For example, process 100 can receive: raw audio data (e.g., audio data that is in a pulse code modulation format); audio data in a compressed format (e.g., MPEG layer 3); audio data in a lossless compression format (e.g., Free Lossless Audio Codec); audio data in any other suitable format; and/or any suitable combination thereof. At 106, process 100 can determine that received audio data includes a recognized voice command. In some embodiments, process 100 can use any suitable speech recognition technique to determine that received audio data contains a recognized command. For example, process 100 can use acoustic modeling, language modeling, Hidden Markov Models, dynamic time warping algorithms, neural networks, deep neural networks, end-to-end automatic speech recognition, any other suitable speech recognition technique, or any suitable combination thereof to determine that audio data contains a recognized command. In some embodiments, at 106, process 100 can determine that received audio data contains a recognized command by identifying a keyword, keywords, or key phrases that are associated with the recognized command. For example, a recognized command for adding a device to a call can be associated with a single keyword, such as “add,” several keywords, such as “group,” “conference,” and “connect,” or a key phrase such as “connect my tablet.” In a situation in which the recognized command is associated with several keywords, process 100 can require that all of the associated keywords, or that only a portion of the associated keywords, be identified in the audio data in order to recognize the command. In some embodiments, certain keywords associated with a recognized voice command can correspond to operations to be performed, while other keywords can correspond to IoT devices. For example, a keyword such as “transmit” can correspond to an operation for transmitting information from an IoT device to a third party. As another example, a keyword such as “tablet” can correspond to a tablet device. As a more particular example, in a situation in which process 100 receives audio data that is recognized to contain both the keyword “transmit” and the keyword “tablet,” process 100 can determine that the audio data includes a recognized voice command for transmitting information from the tablet device. In some embodiments, in addition to determining that audio data contains keywords and/or key phrases corresponding to a recognized voice command, at 106, process 100 can determine that received audio data includes a recognized voice (e.g., based on stored voice information, as discussed below in connection with FIG. 2) using any suitable voice recognition, speaker identification, or speaker verification technique. For example, process 100 can use frequency estimation, Hidden Markov Models, Gaussian mixture models, pattern matching algorithms, neural networks, matrix representation, vector quantization, template matching, text-dependent techniques, text-independent techniques, any other suitable technique, or any suitable combination thereof to determine that received audio data includes a recognized voice. In some embodiments, a recognized voice command can correspond to a user-defined voice command. 
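As a sketch of the keyword matching described above, the mapping below associates operation keywords (e.g., "transmit", "add") and device keywords (e.g., "tablet", "refrigerator") with recognized voice commands. The transcript is assumed to come from whatever speech recognition technique is in use, and the keyword tables and device identifiers are hypothetical examples rather than anything specified by the patent.

```python
# Minimal sketch of recognizing a voice command from transcribed call audio.
# The keyword tables are hypothetical; transcription is assumed to be produced
# by a separate speech recognition step (e.g., an acoustic/language model).

OPERATION_KEYWORDS = {
    "transmit": "TRANSMIT_DEVICE_INFO",   # send stored IoT device information to the third party
    "send": "TRANSMIT_DEVICE_INFO",
    "add": "ADD_DEVICE_TO_CALL",          # join the IoT device to the current call
    "connect": "ADD_DEVICE_TO_CALL",
}
DEVICE_KEYWORDS = {
    "tablet": "tablet-01",
    "refrigerator": "fridge-kitchen",
    "thermostat": "thermostat-hall",
}

def recognize_command(transcript: str):
    """Return (operation, device_id) if the transcript contains both an
    operation keyword and a device keyword, else None."""
    words = transcript.lower().split()
    operation = next((OPERATION_KEYWORDS[w] for w in words if w in OPERATION_KEYWORDS), None)
    device = next((DEVICE_KEYWORDS[w] for w in words if w in DEVICE_KEYWORDS), None)
    if operation and device:
        return operation, device
    return None

print(recognize_command("please add my refrigerator to the call"))
# -> ('ADD_DEVICE_TO_CALL', 'fridge-kitchen')
```

A fuller implementation could require all of a command's keywords rather than any one of them, or fall back to an artificial intelligence mechanism for unconfigured phrases, as the description notes.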
For example, in some embodiments, user-defined voice commands can be configured and stored using process 200 as described below in connection with FIG. 2. Additionally or alternatively, in some embodiments, a recognized voice command can correspond to a voice command that is not user-defined. For example, a recognized voice command can correspond to a voice command that is based on a set of keywords and/or key phrases that are programmatically defined. In some embodiments, in addition or in alternative to recognizing a voice command that has already been configured and/or defined, at 106, process 100 can use any suitable artificial intelligence mechanism (e.g., a virtual assistant mechanism) to determine that received audio data includes an unconfigured voice command. For example, process 100 can use any suitable artificial intelligence mechanism to determine that a portion of received speech is a voice command, and determine an operation to be performed based on the voice command. As a more particular example, process 100 can determine that a user has spoken a phrase such as “add my refrigerator to the call,” and use artificial intelligence to determine that this phrase corresponds to a voice command for adding a refrigerator associated with the user to a call that the user is engaged in. At 108, process 100 can determine which voice command was recognized at 106. In some embodiments, if process 100 determines at 108 that a voice command for transmitting IoT device information was recognized, then process 100 can access stored IoT device information at 110. In some embodiments, IoT device information can include any suitable information. For example, IoT device information can include information that can be used to initiate a connection with an IoT device, such as a phone number, an I.P. address, wireless connection configuration information (e.g., a BLUETOOTH address, a MAC address, and/or an I.P. address), authentication information associated with an IoT device (e.g., a user name and/or password), and/or any other suitable information that can be used to initiate a connection. As another example, IoT device information can include any suitable information logged by the device. As a more particular example, information logged by the device can include information related to usage of the device, times at which the device was accessed, information about operation performed by the device, temperatures recorded by the device, and/or any other suitable information logged by the device. In some embodiments, IoT device information can include status information associated with an IoT device. Status information can be related to any suitable status of an IoT device in some embodiments. For example, status information can be related to a connectivity status of an IoT device (e.g., status information identifying a network that the IoT device is connected to, or status information indicating that the IoT device is not connected to a network). As another example, status information can be related to an error status of an IoT device (e.g., information indicating that the IoT device has encountered an error, and/or information identifying a particular error that the IoT device has encountered). In some embodiments, stored IoT device information can be stored at any suitable location. For example, IoT device information can be stored on an end-point device, on an IoT device, and/or on a server device. 
In some embodiments in which IoT device information is stored at a device that is separate from a device that is executing at least a portion of process 100, IoT device information can be accessed by the device that is separate. For example, in a situation in which process 100 is being executed by an end-point device and IoT device information is stored at an IoT device, the IoT device can access stored IoT device information. In such an example, the IoT device can access the stored IoT device information in response to receiving a notification that a voice command has been recognized. If process 100 accesses IoT device information that includes information that can be used to initiate a connection with an IoT device at 110, in some embodiments, process 100 can initiate a connection with an IoT device at 112. Additionally or alternatively, in some embodiments, process 100 can initiate a connection with an IoT device at 112 in response to process 100 accessing IoT device information at 110 that does not include status information associated with an IoT device. In some embodiments, process 100 can initiate a connection with any suitable IoT device. For example, process 100 can initiate a connection with IoT device 416 as described below in connection with FIG. 4. In some embodiments, process 100 can initiate a connection with an IoT device that is associated with a voice command that has been recognized at 106. In some embodiments in which process 100 accesses stored IoT device information that includes information that can be used to initiate a connection with an IoT device, process 100 can initiate a connection with the IoT device, at 112, using the stored IoT device information. In some embodiments, process 100 can initiate any suitable connection with an IoT device. For example, process 100 can initiate a near-field communication connection, a BLUETOOTH connection, a Wi-Fi connection, any other suitable connection, or any suitable combination thereof. At 114, process 100 can request IoT device information. In some embodiments, process 100 can request IoT device information from any suitable source. For example, process 100 can request IoT device information from an IoT device (e.g., an IoT device that is associated with a voice command that has been recognized at 106). As a more particular example, process 100 can utilize a connection with an IoT device that has been initiated at 112 to request IoT device information from the IoT device. As another example, process 100 can request IoT device information from a server device. As a more particular example, process 100 can request IoT device information from a server device that periodically requests and/or receives IoT device information from IoT devices. At 116, process 100 can transmit IoT device information to a third party. In some embodiments, process 100 can transmit IoT device information to any suitable third party. For example, process 100 can transmit IoT device information to a third party that is connected to the same call to which process 100 connected at 102. As a more particular example, process 100 can transmit IoT device information to a device of the third party that is connected to the call. As another more particular example, process 100 can transmit IoT device information to a user account or e-mail address associated with the third party. 
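The branch of process 100 for a "transmit IoT device information" command (acts 110 through 116) might look like the following sketch. The record fields, stub helpers, and addresses are hypothetical placeholders; the patent leaves the concrete connection and delivery mechanisms open (e.g., e-mail, text message, HTTP message, or chat).

```python
# Sketch of acts 110-116: look up stored IoT device information, optionally use
# it to initiate a connection and request fresh status, then transmit the result
# to the third party on the call. All record fields and helpers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class IoTDeviceInfo:
    device_id: str
    ip_address: str                               # connection information (acts 110/112)
    status: dict = field(default_factory=dict)    # e.g., error codes, connectivity

STORED_DEVICE_INFO = {
    "fridge-kitchen": IoTDeviceInfo("fridge-kitchen", "192.168.1.42"),
}

def request_status(info: IoTDeviceInfo) -> dict:
    """Act 114: request current status over the initiated connection.
    Stubbed here; a real implementation would contact info.ip_address."""
    return {"connectivity": "online", "error_code": "E21"}

def transmit_to_third_party(third_party_address: str, payload: dict) -> None:
    """Act 116: deliver the device information to the third party, e.g., by
    e-mail, text message, HTTP message, or chat. Stubbed as a print."""
    print(f"to {third_party_address}: {payload}")

def handle_transmit_command(device_id: str, third_party_address: str) -> None:
    info = STORED_DEVICE_INFO[device_id]           # act 110: access stored info
    if not info.status:                            # no stored status: connect and ask
        info.status = request_status(info)         # acts 112-114
    transmit_to_third_party(third_party_address,   # act 116
                            {"device": info.device_id, **info.status})

handle_transmit_command("fridge-kitchen", "support@example.com")
```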
As a still more particular example, in a situation in which the call to which process 100 connected at 102 is a call hosted by a VoIP or video calling service, process 100 can transmit IoT device information to a user account of the VoIP or video calling service. In some embodiments, process 100 can transmit IoT device information using any suitable technique or combination of techniques. For example, process 100 can transmit IoT device information via a message, such as an e-mail message, a text message, a Hypertext Transfer Protocol message, an online chat message, and/or any other suitable type of message. As another example, process 100 can transmit a link to a web page where the IoT device information is accessible. In some embodiments, process 100 can transmit IoT device information in response to receiving IoT device information that was requested at 114. In some embodiments, process 100 can transmit IoT device information configured to allow and/or cause a device of the third party to initiate a connection with an IoT device. For example, process 100 can transmit information that can be used to initiate a connection with an IoT device, and/or instructions configured to cause a device to initiate a connection using such information. In some embodiments, process 100 can transmit IoT device information in response to receiving a request for IoT device information from a third party device that is connected to the same call to which process 100 connected at 102. In some embodiments, if process 100 determines at 108 that a voice command for causing an IoT device to transmit information was recognized at 106, then process 100 can transmit instructions causing an IoT device to transmit information to a third party at 118. In some embodiments, at 118, process 100 can transmit instructions that cause an IoT device to transmit IoT device information to any suitable third party (e.g., as described above in connection with 116). In some embodiments, process 100 can transmit instructions containing any suitable information. For example, process 100 can transmit instructions containing information that can be used by an IoT device to initiate a connection with a third party device. In such an example, process 100 can transmit instructions that are configured to cause an IoT device to initiate a connection with a third party device using such information, and transmit IoT device information using the initiated connection. As a more particular example, process 100 can transmit an I.P. address associated with a device of the third party that is connected to the same call to which process 100 connected at 102. As another example, process 100 can transmit instructions containing contact information associated with a third party, such as an e-mail address and/or a phone number. In such an example, process 100 can transmit instructions that are configured to cause an IoT device to transmit an e-mail or text message, respectively, containing IoT device information. In some embodiments, at 118, process 100 can initiate a connection with an IoT device (e.g., as described above in connection with 112), and use the initiated connection to transmit instructions to the IoT device. In some embodiments, if process 100 determines at 108 that a voice command for adding an IoT device to a call was recognized at 106, then process 100 can access stored IoT device connection information at 120. In some embodiments, IoT device connection information can be any suitable connection information. 
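Because 110 and 120 both rely on stored connection information, a hedged sketch of one plausible shape for such a record follows; every field name here is an assumption used only for illustration.

```python
# Hypothetical layout for stored IoT device connection information
# (cf. 110 and 120); none of these field names come from the description.
connection_info = {
    "device_id": "fridge-01",
    "phone_number": None,                     # for telephone-style connections
    "ip_address": "192.168.1.23",             # for Wi-Fi / VoIP-style connections
    "bluetooth_address": "AA:BB:CC:DD:EE:FF",
    "credentials": {"username": "owner", "password": "********"},
}

def can_connect(info: dict) -> bool:
    """True if at least one usable connection handle is present."""
    return any(info.get(k) for k in ("phone_number", "ip_address", "bluetooth_address"))
```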
For example, IoT device connection information can be information usable to initiate a connection with an IoT device as described above in connection with 110. In some embodiments, device connection information can be stored at any suitable location, such as at an end-point device (e.g., an end-point device connected to a call to which an IoT device is to be added), and/or at a server device (e.g., a server device hosting a call to which an IoT device is to be added and/or a server device executing process 100). At 122, process 100 can add an IoT device to a call using IoT device connection information. In some embodiments, process 100 can add an IoT device to the same call to which process 100 connected at 102. Additionally or alternatively, in some embodiments, process 100 can add an IoT device to a different call. In some embodiments, process 100 can use any suitable technique or combination of techniques to add an IoT device to a call. For example, in a situation in which an IoT device is to be added to a telephone call, process 100 can use any suitable three-way calling and/or conference calling technique to add an IoT device to a call. As another example, in a situation in which an IoT device is to be added to a video call or a VoIP call, process 100 can transmit information and/or instructions to a device hosting the call that can be used by the hosting device to add the IoT device to the call (e.g., an I.P. address and/or MAC address associated with the IoT device). FIG. 2 shows a flow diagram of an example 200 of a process for configuring voice commands in accordance with some embodiments of the disclosed subject matter. At 202, process 200 can identify one or more candidate IoT devices to configure for voice commands. In some embodiments, a candidate IoT device can be any suitable IoT device. For example, a candidate IoT device can be an IoT device 416 as described below in connection with FIG. 4. Additionally or alternatively, in some embodiments, a candidate IoT device can be any device that is available to connect to an end-point device that is executing process 200. In some embodiments, at 202, process 200 can identify a candidate IoT device using any suitable technique or combination of techniques. For example, process 200 can determine that a device is a candidate IoT device based on determining that the device is on the same local area network as an end-point device executing process 200. As another example, process 200 can determine that an IoT device is a candidate IoT device based on the IoT device having been used or having been configured for use in connection with a particular user account. As a more particular example, in a situation where process 200 is being used to configure voice commands for a user account of a telecommunications application (e.g., a VoIP calling application), process 200 can determine that an IoT device is a candidate based on a determination that the IoT device has been used to access the user account, and/or a determination that the IoT device has been configured to use the telecommunications application in connection with the user account. In some embodiments, at 202, process 200 can determine that an IoT device is a candidate IoT device by initiating a wireless connection with the IoT device. For example, process 200 can initiate any suitable wireless connection with an IoT device, and in response to successfully establishing the wireless connection, determine that the IoT device is a candidate IoT device.
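As a rough sketch of the local-network heuristic mentioned above for 202, the following checks whether a device's address falls within the end-point device's subnet; the /24 prefix and the device list are illustrative assumptions, and real discovery could equally rely on account association or a successful wireless connection.

```python
# Hedged sketch of one candidate-discovery heuristic for 202: treat devices on
# the same local area network as candidates. The /24 prefix is an assumption.
import ipaddress

def on_same_lan(end_point_ip: str, device_ip: str, prefix: int = 24) -> bool:
    subnet = ipaddress.ip_network(f"{end_point_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(device_ip) in subnet

known_devices = {"fridge-01": "192.168.1.23", "thermostat-02": "10.0.0.5"}
candidates = [d for d, ip in known_devices.items() if on_same_lan("192.168.1.10", ip)]
# candidates == ["fridge-01"]
```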
As a more particular example, in response to successfully establishing a wireless connection, process 200 can determine that an IoT device is a candidate IoT device by determining the capabilities of the IoT device (e.g., determining that the end-point device can connect to a particular type of call). In some embodiments, at 202, process 200 can identify one or more candidate IoT devices by receiving a user input identifying one or more candidate IoT devices. At 204, process 200 can receive a user selection of one or more of the identified candidate IoT devices to configure for voice commands. In some embodiments, at 204, process 200 can receive a user selection using any suitable technique or combination of techniques. For example, process 200 can receive a voice-based user selection. As a more particular example, in a situation in which keywords have already been configured with respect to one or more of the identified candidate IoT devices, process 200 can receive a user selection of one or more of the identified candidate IoT devices by detecting a user speaking a corresponding keyword. As another example, process 200 can receive a user selection by causing a user interface to be presented and by receiving a selection of a user interface element corresponding to one or more of the identified candidate IoT devices. At 206, process 200 can determine whether to configure multiple voice operations for a selected IoT device or devices. In some embodiments, at 206, process 200 can determine whether to configure multiple voice operations for selected IoT device(s) by receiving any suitable user input indicating whether to configure multiple voice operations. For example, process 200 can receive any suitable user input described above in connection with 204. If, at 206, process 200 determines to configure multiple voice commands for an IoT device, then at 208, process 200 can receive user input indicating a keyword to become associated with the IoT device. In some embodiments, at 208, process 200 can receive user input using any suitable technique or combination of techniques. For example, process 200 can receive user input by causing a user interface to be presented that is configured to allow the user to input a keyword (e.g., by typing the keyword, or selecting one of a plurality of presented keywords). As another example, process 200 can prompt a user to speak a keyword and receive audio data corresponding to a spoken keyword. In such an example, process 200 can utilize any suitable speech detection technique to parse the keyword from the received audio data. At 210, process 200 can receive user input indicating a keyword for an operation. In some embodiments, at 210, process 200 can receive user input using any suitable technique or combination of techniques. For example, process 200 can receive user input as described above in connection with 208. In some embodiments, at 210, in connection with receiving user input indicating a keyword, process 200 can further receive user input indicating an operation to be associated with an indicated keyword. For example, process 200 can cause a user interface to be presented that includes a plurality of selectable operations to become associated with an indicated keyword, and receive an indication of one of the selectable options. 
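The configuration built up across 204-210 can be pictured as a small mapping from a device to its keyword and its per-operation keywords; the sketch below is an assumption about the data layout, not a format the description prescribes.

```python
# Hypothetical in-memory form of the per-device voice command configuration
# assembled at 204-210; names and structure are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DeviceVoiceConfig:
    device_keyword: str                                        # keyword for the device (208)
    operations: Dict[str, str] = field(default_factory=dict)   # operation keyword (210) -> operation

config = DeviceVoiceConfig(device_keyword="refrigerator")
config.operations["share status"] = "transmit_device_info"
config.operations["join the call"] = "add_device_to_call"
```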
As another example, in a situation in which keywords have already been configured with respect to one or more operations, process 200 can receive a user selection of an operation by detecting a user speaking a keyword corresponding to the operation. In some embodiments, an indicated operation can be any suitable operation, such as an operation for transmitting IoT device information, an operation for causing an IoT device to transmit IoT device information, and/or an operation for adding an IoT device to a call. At 212, process 200 can receive speech input and associate the speech input with a keyword indicated at 208 and/or 210. In some embodiments, the speech input can be any suitable speech input. For example, the speech input can be audio data as described above in connection with FIG. 1. In some embodiments, at 212, process 200 can create any suitable association between speech input and a keyword indicated at 208 and/or 210. For example, process 200 can associate speech input and a keyword such that the keyword should be recognized as a voice command when similar speech input is later received. Additionally or alternatively, in some embodiments, at 212, process 200 can associate received speech input with a user that provided the speech input such that, in a situation in which a different user is speaking a keyword indicated at 208 and/or 210, the mechanisms described herein would recognize that such a user is different from the user that provided the speech input, and would not cause, or would inhibit, execution of any operations associated with the keywords. For example, after associating the received speech input with the user that provided the speech input, the mechanisms described herein can utilize a voice recognition, speaker identification, or speaker verification technique as described above in connection with 106 of FIG. 1 to determine that the user is different from the user that provided the speech input. If at 206, process 200 determines not to configure multiple voice commands for an IoT device, then at 214, process 200 can receive user input indicating an operation for a voice command. In some embodiments, at 214, process 200 can receive user input using any suitable technique or combination of techniques. For example, process 200 can receive user input using a technique as described above in connection with 210. At 216, process 200 can receive speech input and associate the speech input with an indicated operation. In some embodiments, at 216, process 200 can receive speech input using any suitable technique or combination of techniques. For example, process 200 can receive speech input using a technique as described above in connection with 212. In some embodiments, at 216, process 200 can create any suitable association between received speech input and an indicated operation. For example, process 200 can create an association as described above in connection with 212. In some embodiments, at 216, process 200 can receive speech input subsequent to prompting a user to enter speech input. For example, process 200 can cause a message to be presented on a user interface that prompts a user to provide speech input. As another example, process 200 can cause an audio message to be presented that prompts a user to provide speech input. In some embodiments, at 216, process 200 can prompt a user to enter speech input that corresponds to a predetermined keyword, keywords, and/or key phrase (e.g., as described above in connection with FIG. 1). 
For example, in a situation in which an operation being configured is an operation to add an IoT device (e.g., a television) to a call (e.g., as described above in connection with FIG. 1), process 200 can prompt a user to provide speech input corresponding to the phrase “add my television to this call,” or any other suitable key phrase. At 218, process 200 can identify at least one keyword or key phrase from speech input. In some embodiments, at 218, process 200 can identify at least one keyword or key phrase using any suitable technique or combination of techniques. For example, process 200 can use any suitable speech recognition technique (e.g., a speech recognition technique as described above in connection with FIG. 1). At 220, process 200 can store voice command information based on user inputs received at 208, 210, 212, 214, 216, and/or 218. In some embodiments, the voice command information to be stored can include any suitable information that is based on user inputs. For example, voice command information can include indications of keywords received at 208, 210, 216, and/or 218. As another example, voice command information can include audio data and/or any other suitable data corresponding to and/or based on speech inputs received at 208, 210, 212, 214, 216, and/or 218. As yet another example, the voice command information can include information identifying an operation indicated at 210 and/or 214. In some embodiments, at 220, process 200 can store voice command information at any suitable location. For example, process 200 can store voice command information at a server device (e.g., a server device of a calling service, as described below in connection with FIG. 4). As another example, process 200 can store voice command information at an end-point device. FIG. 3 shows an illustration of an example 300 of a user interface for connecting an IoT device to a call in accordance with some embodiments of the disclosed subject matter. In some embodiments, user interface 300 can include a call notification element 302, a microphone status element 304, a microphone input element 306, a video call window 308, a device connecting input element 310, and an available device notification element 312. In some embodiments, call notification element 302 can be any element suitable for presenting a notification of a call. For example, call notification element 302 can be a message, an icon, an image, a video, an e-mail, a text message, a pop-up message, or any other element suitable for presenting a notification of a call. Additionally or alternatively, in some embodiments, call notification element 302 can be a tactile or audio notification element. In some embodiments, microphone status element 304 can be any element suitable for presenting a status of a microphone. For example, microphone status element 304 can be a message, an icon, an image, a video, a pop-up message, or any other user interface element suitable for presenting the status of a microphone. In some embodiments, microphone status element 304 can present any suitable microphone status, such as an on status, an off status, a volume status, a status indicating that call audio is being captured, a status indicating that process 100 is detecting voice commands, any other suitable status, or any suitable combination thereof. In some embodiments, microphone input element 306 can be any input element suitable for receiving a user input for controlling a microphone. 
For example, microphone input element 306 can be a selectable icon, a button, a switch, a scale, a slider, any other suitable input element, or any suitable combination thereof. In some embodiments, in response to being selected, microphone input element 306 can cause process 100 as described above in connection with FIG. 1, and/or cause process 200 as described above in connection with FIG. 2, to receive audio data and/or speech input via a microphone connected to a device that is presenting user interface 300. In some embodiments, in response to being selected, microphone input element 306 can cause process 100 as described above in connection with FIG. 1 to capture audio data being transmitted over a call and/or detect voice commands. In some embodiments, video call window 308 can be any element suitable for presenting any suitable video in connection with a call. For example, video call window 308 can be an element for presenting video of another user that is connected to the call, and/or for presenting video that is being shared via the call. In some embodiments, device connecting input element 310 can be any input element suitable for causing a device to be connected to a call. For example, device connecting input element 310 can be a selectable icon, a button, a switch, a scale, a slider, any other suitable input element, or any suitable combination thereof. In some embodiments, in response to being selected, device connecting input element 310 can cause an IoT device to be added to a call using any suitable technique or combination of techniques. For example, device connecting input element 310 can cause a device to be added to a call using an operation as described above in connection with FIG. 1. In some embodiments, in response to being selected, device connecting input element 310 can cause one or more selectable options to be presented that each correspond to an available IoT device. In some embodiments, in addition or in alternative to adding a device to a call, in response to being selected, device connecting input element 310 can cause IoT device information to be transmitted to a third party (e.g., using an operation as described above in connection with FIG. 1). In some embodiments, available device notification element 312 can be any element suitable for presenting a notification of IoT devices that are available for connection. For example, available device notification element 312 can be a message, an icon, an image, a video, an e-mail, a text message, a pop-up message, or any other element suitable for presenting a notification of available devices. In some embodiments, available device notification element 312 can include one or more selectable elements that each correspond to an available IoT device, and in response to being selected, can cause a corresponding IoT device to be connected to a call. In some embodiments, available device notification element 312 can present a notification corresponding to each candidate IoT device determined at 202 of FIG. 2. FIG. 4 shows a block diagram of an example 400 of a system suitable for implementation of the mechanisms described herein for connecting an IoT device to a call in accordance with some embodiments of the disclosed subject matter. As illustrated, system 400 can include two server devices 402, a communication network 406, an end-point device 410, a local area network 414, and an IoT device 416. 
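Before the individual components are described, the listed topology can be summarized as a simple configuration structure; the sketch below is purely illustrative, and the names and identifiers are assumptions (the reference numerals are kept only in comments).

```python
# Illustrative summary of the FIG. 4 topology; all names are assumptions.
system_400 = {
    "server_devices": ["calling-service-a", "calling-service-b"],  # 402
    "communication_network": "internet",                           # 406, via links 404/408
    "end_point_device": "phone-1",                                 # 410
    "local_area_network": "home-wifi",                             # 414
    "iot_device": "fridge-01",                                     # 416, via link 412 or 408 + 414
}
```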
In some embodiments, server devices 402 can be any server devices suitable for implementing some or all of the mechanisms described herein for connecting a device to a call. For example, each server device 402 can be a server device that executes one or more portions of process 100 as described above in connection with FIG. 1 and/or one or more portions of process 200 as described above in connection with FIG. 2. In some embodiments, one or more server devices 402 can connect with one or more end-point devices 410 via communication network 406 for a call (e.g., a video call, a VoIP call, and/or any other suitable call), and/or to transmit invitations to join a call. In some embodiments, one or more server devices 402 can connect to one or more IoT devices 416 via communication network 406 in order to transmit and/or receive IoT device information, and/or to connect an IoT device 416 to a call. In some embodiments, one or more of server devices 402 can be associated with a calling service (e.g., a video calling service or a VoIP calling service). Additionally or alternatively, in some embodiments, one or more of server devices 402 can store user account information associated with a calling service. In some embodiments, one or more server devices 402 can store any suitable voice command information. Communication network 406 can be any suitable combination of one or more wired and/or wireless networks in some embodiments. For example, communication network 406 can include any one or more of the Internet, an intranet, a wide-area network (WAN), a local-area network (LAN), a wireless network, a digital subscriber line (DSL) network, a frame relay network, an asynchronous transfer mode (ATM) network, a virtual private network (VPN), and/or any other suitable communication network. End-point device 410 and/or IoT device 416 can be connected by one or more communications links 408 to communication network 406 which can be linked via one or more communications links 404 to server devices 402. Communications links 404 and/or 408 can be any communications links suitable for communicating data among end-point devices 410 and server devices 402, such as network links, dial-up links, wireless links, hard-wired links, any other suitable communications links, or any suitable combination of such links. Local area network 414 can be any suitable local area network in some embodiments. For example, local area network 414 can be a Wi-Fi network, an Ethernet network, a home area network, an intranet network, or any other suitable local area network. End-point device 410 can be implemented as any user device suitable for receiving and/or connecting to phone calls, video calls, VoIP calls, any other suitable calls, or any suitable combination thereof. Additionally, end-point device 410 can be implemented as any user device suitable for presenting user interfaces, receiving user inputs and/or speech inputs as described above in connection with FIG. 1, FIG. 2, and FIG. 3, and/or any other suitable functions. For example, in some embodiments, end-point device 410 can be implemented as a mobile device, such as a mobile phone, a tablet computer, a laptop computer, a vehicle (e.g., a car, a boat, an airplane, or any other suitable vehicle) entertainment system, a portable media player, and/or any other suitable mobile device. 
As another example, in some embodiments, end-point devices 410 can be implemented as a non-mobile device such as a desktop computer, a set-top box, a television, a streaming media player, a game console, and/or any other suitable non-mobile device. IoT device 416 can be implemented as any device suitable for connecting to other devices and/or connecting to phone calls, video calls, VoIP calls, any other suitable calls, or any suitable combination thereof. For example, IoT device 416 can be implemented as any device with wireless connection capabilities, such as BLUETOOTH, Wi-Fi, near-field communication, any other suitable wireless connection capability, or any suitable combination thereof. In some embodiments, IoT device 416 can be implemented as any suitable device and/or appliance, such as an oven, a thermostat, a smart lock, a refrigerator, a coffee machine, any suitable cooking device (e.g., microwave, slow cooking device, and/or pressure cooking device), a tablet device, a speaker device, a personal assistant device, a telephone, a television, a stereo device, media device, security system, alarm system (e.g., fire alarm, smoke alarm, and/or intruder alarm), light fixture, light switch, light bulb, flood sensor, water valve, door sensor, window sensor, home automation hub, smart switch, air quality sensor, virtual reality device, doorbell, camera, Internet router, modem, programmable IoT button, watch, clock, fitness tracking device, climate control device (e.g., a heater, ventilator, air conditioner, and/or dehumidifier), health monitor device, pet monitor device, water heater, garage door control device, and/or any other suitable device. In some embodiments, IoT device 416 can be implemented as an end-point device 410. In some embodiments, end-point device 410 and IoT device 416 can be connected via communications link 412, and/or via communications link 408 and local area network 414. Communications links 412 and 408 can be any suitable communications links, such as wired communications links (e.g., via Ethernet, Universal Serial Bus, or any other suitable wired communications link), near-field communications links, BLUETOOTH communications links, Wi-Fi communications links, or any other suitable communications links. Although two server devices 402 are shown in FIG. 4 to avoid over-complicating the figure, the mechanisms described herein for connecting an IoT device to a call can be performed using any suitable number of server devices (including none) in some embodiments. For example, in some embodiments, the mechanisms can be performed by a single server device 402 or multiple server devices 402. Although one end-point device 410 and one IoT device 416 are shown in FIG. 4 to avoid over-complicating the figure, any suitable number of end-point devices and IoT devices, and/or any suitable types thereof, can be used in some embodiments. Server devices 402, end-point device 410, and IoT device 416 can be implemented using any suitable hardware in some embodiments. For example, server devices 402, end-point device 410, and IoT device 416 can be implemented using hardware as described below in connection with FIG. 5. As another example, in some embodiments, devices 402, 410, and 416 can be implemented using any suitable general purpose computer or special purpose computer. Any such general purpose computer or special purpose computer can include any suitable hardware. FIG. 
5 shows a block diagram of an example 500 of hardware that can be used in accordance with some embodiments of the disclosed subject matter. Hardware processor 512 can execute the processes described herein for configuring voice commands (e.g., as described above in connection with FIG. 2), connecting an IoT device to a call (e.g., as described above in connection with FIG. 1), and/or performing any other suitable functions in accordance with the mechanisms described herein (e.g., as described above in connection with FIG. 1, FIG. 2, FIG. 3, and/or FIG. 4). In some embodiments, hardware processor 512 can send and receive data through communications links 404, 408, 412, 416, or any other communication links using, for example, a transmitter, a receiver, a transmitter/receiver, a transceiver, or any other suitable communication device. In some embodiments, memory and/or storage 518 can include a storage device for storing data received through communications link 404, 408, 412, 416, or any other communication links. The storage device can further include a program for controlling hardware processor 512. In some embodiments, memory and/or storage 518 can store voice command information (e.g., as described above in connection with 108 of FIG. 1 and 220 of FIG. 2). Display 514 can include a touchscreen, a flat panel display, a cathode ray tube display, a projector, a speaker or speakers, and/or any other suitable display and/or presentation devices. Input device 516 can be a computer keyboard, a computer mouse, a touchpad, a voice recognition circuit, a touchscreen, a microphone, and/or any other suitable input device. Any other suitable components can be additionally or alternatively included in hardware 500 in accordance with some embodiments. In some embodiments, at least some of the above described blocks of the processes of FIG. 1 and FIG. 2 can be executed or performed in any order or sequence not limited to the order and sequence shown in and described in connection with the figures. Also, some of the above blocks of FIG. 1 and FIG. 2 can be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. Additionally or alternatively, in some embodiments, some of the above described blocks or components of the processes and/or systems of FIG. 1, FIG. 2, FIG. 3, FIG. 4, and/or FIG. 5 can be omitted. In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as non-transitory forms of magnetic media (such as hard disks, floppy disks, etc.), non-transitory forms of optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), non-transitory forms of semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media. 
Accordingly, methods, systems, and media for connecting an IoT device to a call are provided. Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is limited only by the claims that follow. Features of the disclosed embodiments can be combined and rearranged in various ways. 15922602 google llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
nasdaq:goog Alphabet Apr 26th, 2022 12:00AM Oct 1st, 2018 12:00AM https://www.uspto.gov?id=US11314083-20220426 Beam steering optics for near-eye and head mounted displays A near-eye display system includes a display panel to present image frames to the eyes of a user for viewing. The system also includes a beam steering assembly facing the display panel that is configurable to displace a light beam incident on the beam steering assembly, thereby laterally shifting light relative to an optical path of the light beam incident on the beam steering assembly. The beam steering assembly includes a birefringent plate configurable to replicate a light ray incident on the beam steering assembly such that the replicated light ray is laterally shifted relative to an optical path of the light ray incident on the beam steering assembly. 11314083 1. A near-eye display system, comprising: a display panel configured to display a sequence of images; and a beam steering assembly facing the display panel, the beam steering assembly including a first birefringent plate configurable to replicate a light ray incident on the beam steering assembly, wherein the replicated light ray is laterally shifted by a distance, based on a tilt angle of the beam steering assembly, relative to an optical path of the light ray incident on the beam steering assembly while maintaining a same angular direction as the light ray. 2. The near-eye display system of claim 1, wherein: an in-plane symmetry axis of the first birefringent plate is tilted relative to the optical path of the light ray incident on the beam steering assembly. 3. The near-eye display system of claim 1, further comprising: an actuator coupled to the beam steering assembly, the actuator to change the tilt angle between the beam steering assembly and the optical path of the light ray incident on the beam steering assembly. 4. The near-eye display system of claim 3, further comprising: a display controller coupled to the display panel, the display controller to drive the display panel to display the sequence of images; and a beam steering controller coupled to the actuator, the beam steering controller to instruct the actuator to impart a different tilt angle between the beam steering assembly and the optical path of the light ray incident on the beam steering assembly, wherein the different tilt angle changes a lateral shift for light rays replicated by the beam steering assembly. 5. The near-eye display system of claim 1, wherein: the beam steering assembly comprises a stacked pair of birefringent plates including the first birefringent plate and a second birefringent plate, and further wherein each of the first and second birefringent plates replicates incident light rays. 6. The near-eye display system of claim 5, further comprising: a quarter wave plate positioned between the first birefringent plate and the second birefringent plate, wherein the quarter wave plate polarizes light rays output from the first birefringent plate prior to passing the polarized light rays to the second birefringent plate. 7. The near-eye display system of claim 5, wherein: the beam steering assembly outputs at least four light rays for each light ray incident on the beam steering assembly. 8. The near-eye display system of claim 1, wherein the beam steering assembly is configured to output the replicated light ray that is laterally shifted and an additional replicated light ray that maintains a same optical path as the light ray incident on the beam steering assembly. 9. 
The near-eye display system of claim 1, wherein the replicated light ray represents a same visual content as the light ray incident on the beam steering assembly so that each of the sequence of images is perceived by a user as having a resolution of the display panel and having pixels with an apparent size that is larger than an actual size of the pixels of the display panel. 10. In a near-eye display system, a method comprising: positioning a beam steering assembly, including a first birefringent plate, at a first tilt angle so that the beam steering assembly is tilted relative to an optical path of a light ray incident on the beam steering assembly; and replicating the light ray incident on the beam steering assembly by passing the light ray through the first birefringent plate, wherein the replicated light ray is laterally shifted by a first distance, based on the first tilt angle, relative to the optical path of the light ray while maintaining a same angular direction as the light ray. 11. The method of claim 10, further comprising: repositioning the beam steering assembly to a second tilt angle so that the beam steering assembly is tilted relative to the optical path of the light ray incident on the beam steering assembly; and replicating the light ray incident on the beam steering assembly by passing the light ray through the first birefringent plate, wherein the replicated light ray is laterally shifted by a second distance, based on the second tilt angle, relative to the optical path of the light ray. 12. The method of claim 11, wherein: repositioning the beam steering assembly comprises controlling an actuator coupled to the beam steering assembly to change from the first tilt angle to the second tilt angle. 13. The method of claim 10, further comprising: controlling a display panel facing the beam steering assembly to display a first image while the beam steering assembly is a first configuration state with the beam steering assembly positioned at the first tilt angle; and controlling the beam steering assembly to impart a first lateral shift for the first image. 14. The method of claim 13, further comprising: controlling the display panel facing the beam steering assembly to display a second image; and signaling the beam steering assembly to enter a second configuration state with the beam steering assembly positioned at a second tilt angle to impart a second lateral shift for the second image. 15. The method of claim 14, further comprising: controlling the display panel to display the first and second images within a visual perception interval so that the first and second images are perceptible as a single image with an effective resolution that is higher than a native resolution of the display panel. 16. The method of claim 14, wherein: the first and second images contain a same visual content and which are displayed in a period of time less than a visual persistence interval so that the first and second images are perceptible as a single image having a resolution of the display panel and having pixels with an apparent size that is larger than an actual size of the pixels of the display panel. 17. 
The method of claim 10, wherein: the beam steering assembly includes the first birefringent plate and a second birefringent plate, and the first tilt angle of the beam steering assembly replicates the light ray incident on the beam steering assembly into a plurality of laterally shifted beams, and wherein each of the plurality of laterally shifted beams includes a different lateral shift. 18. The method of claim 17, further comprising: passing the plurality of laterally shifted beams for display so that the plurality of laterally shifted beams are perceptible as a single image having a resolution of a display panel facing the beam steering assembly and having pixels with an apparent size that is larger than an actual size of the pixels of the display panel. 19. A rendering system, comprising: at least one processor; a beam steering assembly facing a display panel, the beam steering assembly including a first birefringent plate configurable to replicate a light ray incident on the beam steering assembly, wherein the replicated light ray is laterally shifted by a distance, based on a tilt angle of the beam steering assembly, relative to an optical path of the light ray incident on the beam steering assembly while maintaining a same angular direction as the light ray; and a storage component to store a set of executable instructions, the set of executable instructions configured to manipulate the at least one processor to: sample a source image to render a first image including a first array of pixels; resample the source image to render a second image comprising a second array of pixels; and signal a display controller coupled to the display panel to present both the first image and the second image in a period of time less than a visual persistence interval so that the first array of pixels and the second array of pixels are perceptible as a single image. 20. The rendering system of claim 19, wherein the set of executable instructions are further configured to manipulate the at least one processor to: resample the source image to render a second image comprising a second array of pixels; and signal a display controller coupled to the display panel to present both the first image and the second image in a period of time less than a visual persistence interval so that the first and second arrays of pixels are perceptible as a single image. 21. The rendering system of claim 19, wherein the first array of pixels are laterally shifted relative to the optical path by the beam steering assembly prior to presentation to a user. 21 CROSS-REFERENCE TO RELATED APPLICATIONS The present application claims priority to U.S. patent application Ser. No. 15/889,796, entitled “BEAM STEERING OPTICS FOR VIRTUAL REALITY SYSTEMS” and filed on Feb. 6, 2018, the entirety of which is incorporated by reference herein. BACKGROUND Head-mounted displays (HMDs) and other near-eye display systems can utilize an integral lightfield display, magnifier lens, lenslet or pinhole array, or other viewing optics provide effective display of three-dimensional (3D) graphics. Generally, the integral lightfield display employs one or more display panels and an array of lenslets, pinholes, or other optic features that overlie the one or more display panels. The HMDs and other near-eye display devices may have challenges associated with the limited pixel density of current displays. 
Of particular issue in organic light emitting diode (OLED)-based displays and other similar displays is the relatively low pixel fill factor; that is, the relatively large degree of “black space” between pixels of the OLED-based displays. While this black space is normally undetectable for displays having viewing distances greater than arm's length from the user, in HMDs and other near-eye displays this black space may be readily detectable by the user due to the close proximity of the display to the user's eyes. The visibility of the spacing between pixels (or sub-pixels) is often exacerbated due to magnification by the optics overlying the display panel. Therefore, there occurs a screen-door effect, in which a lattice resembling a mesh screen is visible in an image realized in the display, which typically interferes with user immersion in the virtual reality (VR) or augmented reality (AR) experience. BRIEF DESCRIPTION OF THE DRAWINGS The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items. FIG. 1 is a diagram illustrating an arrangement of components of a near-eye display system utilizing a beam steering assembly to project imagery in accordance with some embodiments. FIG. 2 is a diagram illustrating a cross-section view of an implementation of the near-eye display system of FIG. 1 for providing super-resolution imagery in accordance with some embodiments. FIG. 3 is a diagram illustrating a diffractive beam steering element for use in the near-eye display system of FIG. 1 in accordance with some embodiments. FIG. 4 is a diagram illustrating a refractive beam steering element for use in the near-eye display system of FIG. 1 in accordance with some embodiments. FIG. 5 is a diagram illustrating another refractive beam steering element for use in the near-eye display system of FIG. 1 in accordance with some embodiments. FIG. 6 is a flow diagram illustrating a method for sequential display of images to provide a super-resolution image display in the near-eye display system of FIG. 1 in accordance with some embodiments. FIG. 7 is a diagram illustrating a method of generating passive super-resolution images in accordance with some embodiments. FIG. 8 is a diagram illustrating a top-down view of a birefringent beam steering element in accordance with some embodiments. FIG. 9 is a diagram illustrating a top-down view of another birefringent beam steering element in accordance with some embodiments. DETAILED DESCRIPTION FIGS. 1-9 illustrate various systems and techniques for providing optical beam steering in a near-eye display system or imaging system. As described in further detail below, a head mounted display (HMD) or other near-eye display system implements a beam steering assembly disposed between a display panel and a user's eye. The beam steering assembly can be deployed in a passive configuration to reduce or remove the screen door effect or increase perceived resolution, or in an active configuration (e.g., via time multiplexing) to increase effective resolution through exploitation of the visual persistence effects of the human eye. 
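As a hedged sketch of the active, time-multiplexed configuration just described, the loop below alternates the steering state between two sub-frames so that the pair is fused by visual persistence; the display and steering controller objects and the 4 ms per-sub-frame budget are assumptions rather than anything specified here (the description later puts the persistence interval at roughly 10 ms).

```python
# Minimal sketch of time-multiplexed presentation: show an unshifted and a
# laterally shifted sub-frame back-to-back, well inside the eye's persistence
# interval. `display` and `steering` stand in for hypothetical controller
# objects exposing scan_in() and set_active(); they are not defined here.
import time

def present_pair(display, steering, unshifted_frame, shifted_frame,
                 subframe_period_s: float = 0.004) -> None:
    for frame, shifted in ((unshifted_frame, False), (shifted_frame, True)):
        steering.set_active(shifted)     # activated state imparts the lateral shift
        display.scan_in(frame)           # hypothetical display-controller call
        time.sleep(subframe_period_s)    # keep both sub-frames within ~10 ms total
```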
In some implementations, the near-eye display system projects time-multiplexed images at a higher display rate such that two or more of the images having different visual information are effectively combined by the human visual perception system into a single “super-resolution” image; that is, an image with an effective resolution higher than the native resolution of the display panel. In other implementations, the near-eye display system projects two or more adjacent images having the same visual information but spatially shifted via the beam steering apparatus relative to each other so as to be perceived by the user as an image with light emitting elements of increased apparent size, and thus effectively covering the non-emissive portions between the light emitting elements of the display. FIG. 1 illustrates a near-eye display system 100 for implementation in a head mounted device (HMD), heads-up display, or similar device in accordance with some embodiments. As depicted, the near-eye display system 100 includes a computational display sub-system 102. The near-eye display system 100 further can include other components, such as an eye-tracking subsystem, an inertial measurement unit (IMU), audio componentry, and the like, that have been omitted for purposes of clarity. The computational display sub-system 102 includes a left-eye display 104 and a right-eye display 106 mounted in an apparatus 108 (e.g., goggles, glasses, etc.) that places the displays 104, 106 in front of the left and right eyes, respectively, of the user. As shown by view 110, each of the displays 104, 106 includes at least one display panel 112 to display a sequence or succession of near-eye images, each of which comprises an array 114 of elemental images 116. The display panel 112 is used to display imagery to at least one eye 118 of the user in the form of a normal image (e.g., for super-resolution implementations) or a lightfield (e.g., for lightfield implementations). In some embodiments, a separate display panel 112 is implemented for each of the displays 104, 106, whereas in other embodiments the left-eye display 104 and the right-eye display 106 share a single display panel 112, with the left half of the display panel 112 used for the left-eye display 104 and the right half of the display panel 112 used for the right-eye display 106. As depicted, the near-eye display system 100 includes a beam steering assembly 120 overlying the display panel 112 so as to be disposed between the display panel 112 and the at least one eye 118 of a user. Cross view 122 depicts a cross-section view along line A-A of the beam steering assembly 120 overlying the display panel 112. The beam steering assembly 120 includes a stack of one or more optical beam steering elements, such as the two optical beam steering elements 124, 126 illustrated in FIG. 1, each optical beam steering element configured to replicate and displace incident light rays originating from the display panel 112. The near-eye display system 100 also includes a display controller 130 to control the display panel 112 and, in some embodiments, a beam steering controller 132 to control the operation of the beam steering assembly 120. As also shown in FIG. 
1, the near-eye display system 100 also includes a rendering component 134 including a set of one or more processors, such as the illustrated central processing unit (CPU) 136 and graphics processing units (GPUs) 138, 140 and one or more storage components, such as system memory 142, to store software programs or other executable instructions that are accessed and executed by the processors 136, 138, 140 so as to manipulate the one or more of the processors 136, 138, 140 to perform various tasks as described herein. Such software programs include, for example, rendering program 144 comprising executable instructions for an optical beam steering and image rendering process, as described below. In operation, the rendering component 134 receives rendering information 146 from a local or remote content source 148, where the rendering information 146 represents graphics data, video data, or other data representative of an object or scene that is the subject of imagery to be rendered and displayed at the display sub-system 102. Executing the rendering program 144, the CPU 136 uses the rendering information 146 to send drawing instructions to the GPUs 138, 140, which in turn utilize the drawing instructions to render, in parallel, a series of image frames 150 for display at the left-eye display 104 and a series of lightfield frames 152 for display at the right-eye display 106 using any of a variety of well-known VR/AR computational/lightfield rendering processes. As described in greater detail herein, the beam steering assembly 120 laterally displaces, or “shifts” the position of pixels in the image frames 150, 152 to fill in non-emissive portions of the display panel 112. For example, in some embodiments, the beam steering assembly 120 shifts the position of successive images displayed at the display panel 112 so as to project to the user a super-resolution image or a higher-resolution lightfield due to the succession of images effectively being superimposed due to the visual persistence effect of the human visual system. In other embodiments, the beam steering assembly 120 replicates pixels of each given image and laterally displaces the replicated pixels so as to project an image with pixels of a perceived larger size (e.g., due to increased effective pixel count) that conceals the non-emissive space between pixels. It will be appreciated that although described in the context of the near-eye display system 100, the beam steering described herein may be used for any type of VR or AR system (e.g., conventional magnifier displays, computational displays, see-through displays, and the like). FIG. 2 illustrates a cross-section view of an implementation 200 of the near-eye display system 100 for providing super-resolution imagery to the eye 118 of the user in accordance with at least one embodiment of the present disclosure. In this example, the display panel 112 comprises an array of pixels, which typically are arranged as an interwoven pattern of sub-pixels of different colors, such as red, green, and blue (RGB) sub-pixels, and wherein the spatial persistence effects of human vision result in adjacent sub-pixels of different colors to be perceived as a single pixel having a color represented by a blend of the adjacent sub-pixels and their respective intensities. 
For ease of illustration, the display panel 112 is not depicted to scale, and is depicted as having only five sub-pixels in the cross-section (sub-pixels 202, 204, 206, 208, 210), whereas a typical display would have hundreds or thousands of sub-pixels along the cross-section, and thus it will be appreciated that the dimensions of the sub-pixels 202-210, and the non-emissive space in between the sub-pixels (e.g., non-emissive space 212 between sub-pixels 206 and 208) is significantly exaggerated relative to the other components of the implementation 200. Further, to aid in illustration of the operation of the beam steering assembly 120, the implementation 200 of FIG. 2 illustrates the beam steering assembly 120 as having only a single optical beam steering element 214. Moreover, in FIG. 2, the user's eye 118 is depicted as a lens 216 representing the lens of the eye 118 and a panel 218 representing the retinal plane of the eye 118. As such, the panel 218 is also referred to herein as “retina 218.” Further, the implementation 200 includes a magnifier lens assembly 220 (not shown in FIG. 1 for ease of illustration) overlaying the display panel 112 such as to be disposed between the optical beam steering element 214 and the eye 118 of the user. Although illustrated in FIG. 2 to be a single lens, in other embodiments, the magnifier lens assembly 220 includes a lenslet array (not shown) with each lenslet focusing a corresponding region of the display panel 112 onto the lens 216 of the eye. It also should be noted that while FIG. 2 depicts an optical configuration with a single lens and the optical beam steering element 214 between the display panel 112 and the eye 118, in a typical implementation the optical system may comprise a larger number of lenses, prisms, or other optical elements between the display panel 112 and the eye 118. As shown, the optical beam steering element 214 is configured to replicate light originating from sub-pixel 206 and displace the replicated sub-pixel such that the eye 118 perceives the replicated sub-pixel as originating from the non-emissive space 212 between sub-pixels 206 and 208, and thus create a perception of a display having an effective resolution of approximately twice the actual resolution of the display panel 112. To illustrate, in one embodiment, the beam steering controller 132 of FIG. 1 at time t0 deactivates the beam steering element 214 and the display controller 130 scans in a first image for display by the display panel 112. The resulting light output by sub-pixel 206 for the first image is directed to a display-panel-facing surface of the beam steering element 214. Because the beam steering element 214 is deactivated at time t0, the incident light is passed through the beam steering element 214 without lateral displacement to the user's eye 118, whereupon the lens 216 of the eye 118 focuses the light on the retina 218 at position 222 (with light from the other sub-pixels 202-204 and 208-210 taking corresponding paths). Subsequently, at time t1, the beam steering controller 132 of FIG. 1 activates the beam steering element 214, which configures the beam steering element 214 to laterally displace incident light (e.g., two-dimensional shift of incident light in the X- and/or Y-axis directions of FIG. 2). The display controller 130 scans in a second image for display by the display panel, and the resulting light output by sub-pixel 206 for the second image is directed to the display-panel-facing surface of the beam steering element 214. 
Because the beam steering element 214 is activated at time t1, the incident light is laterally displaced after passing through the beam steering element 214. The laterally-displaced light is passed to the user's eye 118, whereupon the lens 216 of the eye 118 focuses the light on the retina 218 at position 224. The eye 118 perceives light at position 224 as originating from the non-emissive space 212 between sub-pixels 206 and 208 (although the light actually originated from sub-pixel 206). The lateral displacement of incident light at the beam steering element 214 results in presenting sub-pixels of the second image at positions where non-emissive spaces would have been perceived as black space by the eye 118 from the display of the first image at time t0. Thus, if the first image at time t0 and the second image at time t1 are displayed in quick succession (i.e., within the visual persistence interval of the human eye, which is approximately 10 ms), the human visual system perceives the first and second images to be overlapping. That is, in this example, the lateral displacement introduced to the light of the second image has the result of presenting the sub-pixels of the second image where black spaces would have appeared to the eye 118 from the display of the first image, and thus the sub-pixels of the second image appear to the eye 118 to occupy black spaces associated with non-emissive portions of the display panel 112 for the first image. The second image at time t1, in some embodiments, has the same visual content as the first image at time t0. In such embodiments, the eye 118 perceives the two images as overlapping in a single image of the same resolution of the first and second images (i.e., at native resolution of the display panel 112) but with larger perceived pixels that fill in the black space associated with non-emissive portions of the display panel 112, and thus reduce or eliminate the screen-door effect that would otherwise be visible to the eye 118. In other embodiments, the second image at time t1 has different visual content than the first image at time t0. In such embodiments, the eye 118 perceives the two images as overlapping in a single super-resolution image with visual content of the second image filling in the black space associated with non-emissive portions of the display panel 112. This reduces or eliminates the user's ability to perceive these non-emissive portions of the display panel 112, thereby creating a perception of a display having an effective resolution of approximately twice the actual resolution of the display panel 112. It should be noted that although the implementation 200 of the near-eye display system 100 in FIG. 2 depicts a beam steering assembly having a single beam steering element 214 for lateral light displacement, as noted above the beam steering assembly 120 may employ a stack of multiple beam steering elements (e.g., beam steering elements 124, 126 of FIG. 1) of differing configurations so as to provide multiple different lateral displacements, and thus provide the option to shift multiple successive images in different directions. For example, assuming the stack uses beam steering elements having a replication factor of two (e.g., beam steering element 214 of FIG. 
2 that passes incident light to two different locations as either laterally displaced or not laterally displaced based on two corresponding states of the beam steering element 214, activated or deactivated), a stack of four beam steering elements allows for the replication and steering of each sub-pixel of the display panel to four different positions (i.e., three laterally displaced positions plus one original sub-pixel position in which all four beam steering elements are deactivated so that light passes through without any lateral displacement). It should further be noted that although the example of FIG. 2 is described in the context of a beam steering element 214 having a replication factor of two (i.e., deactivated to pass light through without any lateral displacement or activated to replicate a sub-pixel for shifting to another position), other embodiments may employ beam steering elements having multiple different states. For example, instead of using a stack of four beam steering elements that each have a replication factor of two, a single beam steering element (not shown) having a replication factor of four may be controlled by the beam steering controller 132 of FIG. 1 to switch between four different states that allow for the replication and steering of each sub-pixel of the display panel to four different positions (i.e., three laterally displaced positions plus the original sub-pixel position in which light passes through without any lateral displacement).

In some embodiments, an amount of screen door effect perception (i.e., a metric of screen door effect severity) is represented by the product MTF(u)*CSF(u) over all spatial frequencies u, where MTF represents a Modulation Transfer Function specifying how different spatial frequencies are handled by the optics of a system (e.g., the near-eye display system 100) and CSF represents a Contrast Sensitivity Function representing the eye's ability to discern between luminances of different levels in a static image. The product of the eye's contrast sensitivity (i.e., how sensitive the eye is to certain spatial frequencies, which turns out to be very sensitive to the screen door frequency) and the spatial frequency content of the pattern produced with replication provides a system transfer function in which the larger the transfer function is (for that specific spatial frequency u), the more screen door effect will be perceived. Accordingly, reduction of the system transfer function can be represented by an optimization metric as provided by equation (1) below:

\min_{d,\theta} \frac{\int_{u_{\min}}^{u_{\max}} \mathrm{PTF}(u, d, \theta)\,\mathrm{CSF}(u)\,du}{\int_{u_{\min}}^{u_{\max}} \mathrm{PTF}(u, 0, 0)\,\mathrm{CSF}(u)\,du}    (1)

where u represents spatial frequency, d represents the possible lateral displacement between replication spots, and θ represents the rotation of the replication/beam steering elements. The value of equation (1) provides a metric of how much screen door effect is perceivable after stacking a number N of beam steering elements.
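As a rough numeric illustration of equation (1), the sketch below integrates a placeholder contrast sensitivity function against a toy two-spot replication spectrum and sweeps the displacement d. The functional forms of CSF and PTF, the frequency range, and the candidate displacements are illustrative assumptions rather than values from this disclosure, and the rotation θ is omitted for simplicity.

```python
import numpy as np

def csf(u):
    """Placeholder contrast sensitivity function (arbitrary parametric form,
    not the CSF model used in the disclosure)."""
    return u * np.exp(-0.2 * u)

def ptf(u, d):
    """Toy pattern transfer function for a two-spot replication separated by
    lateral displacement d (same units as 1/u); d = 0 means no replication."""
    return np.cos(np.pi * u * d) ** 2

def screen_door_metric(d, u_min=1.0, u_max=60.0, n=2000):
    """Equation (1)-style ratio: perceived screen-door energy with displacement d,
    normalized by the unshifted (d = 0) case. Smaller is better.
    Uniform sampling lets the ratio reduce to a ratio of means."""
    u = np.linspace(u_min, u_max, n)
    return np.mean(ptf(u, d) * csf(u)) / np.mean(ptf(u, 0.0) * csf(u))

# Sweep candidate displacements and keep the one that minimizes the metric,
# mirroring the minimization over d in equation (1).
candidates = np.linspace(0.0, 0.05, 51)
best = min(candidates, key=screen_door_metric)
print(f"best displacement ~ {best:.4f}, metric = {screen_door_metric(best):.3f}")
```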
For example, based on equation (1), for a beam steering element (which can also be referred to as a "filter") having a replication factor of two (such as described herein with respect to FIG. 2), one filter results in approximately 41% perceptibility of the screen door effect, two filters result in approximately 14% perceptibility of the screen door effect, three filters result in approximately 7.5% perceptibility of the screen door effect, and four filters result in approximately 3.1% perceptibility of the screen door effect. Accordingly, increasing the number of beam steering elements in a stack for the beam steering assembly generally reduces perceptibility of the screen door effect.

The beam steering assembly is implementable using any of a variety of suitable optical beam steering elements capable of sub-pixel-scale steering (i.e., steering replicated sub-pixels between positions based on states of the beam steering element). For example, FIG. 3 is a diagram illustrating a diffractive beam steering element in accordance with some embodiments. In the example of FIG. 3, a beam steering element 300 (e.g., one of the beam steering elements 124, 126 of FIG. 1 or beam steering element 214 of FIG. 2) is a stacked pair of gratings including a first grating 302 and a second grating 304 that splits and diffracts incident light into several beams traveling in different directions. In some embodiments, the relationship between grating spacing and the angles of the incident and diffracted beams of light for beam steering element 300 is represented by equations (2) and (3):

\theta = \sin^{-1}\left(\frac{n\lambda}{D}\right),    (2)

t = \frac{d}{\tan\theta},    (3)

where θ represents the diffractive angle between beams of diffracted light (i.e., the angular deflection of the diffracted beam), n represents the order number, λ represents the wavelength of incident light, D represents the period of the gratings, t represents the distance between the gratings, and d represents the optical lateral displacement between the replicated sub-pixels. As discussed in more detail relative to FIG. 2, the lateral displacement distance d is less than a pixel so as to fill in the non-emissive portions between sub-pixels.

As shown, the first grating 302 of beam steering element 300 diffracts an incident beam of light 306 (e.g., light from a sub-pixel of the display panel) into the ±1 first orders. The second grating 304 of the grating pair further diffracts the light beams of the ±1 first orders into the ±2 second orders and reverses the angular deflection of the diffracted beams such that light beams passing through the second grating 304 (and therefore leaving the beam steering element 300) have a direction matching the incidence angle of the incident beam of light 306 from sub-pixels of the display panel. In this manner, the beam steering element 300 replicates the original incident beam of light 306 and laterally displaces the replicated beams. Various amplitude or phase gratings may be utilized for diffracting the incident light beam and then reversing the angular deflection without departing from the scope of this disclosure. Further, the gratings may be designed for single or multiple diffraction orders to reduce the thickness of the beam steering element. In some embodiments, the relative spot intensities (i.e., diffraction efficiencies) of the spots (e.g., replicated beams of light) replicated by beam steering element 300 are represented by equation (4):

\mathrm{sinc}\!\left(\frac{n\,w}{D}\right)^{2},    (4)

where n represents the diffraction order number, w/D is the open fraction of the grating, and \mathrm{sinc}(x) = \sin(\pi x)/(\pi x). Accordingly, the intensity of the replicated spots may be adjusted based on the open fraction of the gratings in beam steering element 300.
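To make the grating relationships concrete, the sketch below evaluates equations (2) through (4) for a single diffraction order. The wavelength, grating period, target displacement, and open fraction are arbitrary illustrative values, not parameters from this disclosure.

```python
import numpy as np

def grating_pair_design(wavelength_um, period_um, displacement_um, order=1):
    """Equations (2) and (3): diffraction angle and grating separation needed to
    obtain a target lateral displacement between replicated sub-pixels."""
    theta = np.arcsin(order * wavelength_um / period_um)   # eq. (2)
    t = displacement_um / np.tan(theta)                    # eq. (3)
    return theta, t

def relative_spot_intensity(order, open_fraction):
    """Equation (4): sinc(n * w/D)^2 with sinc(x) = sin(pi x)/(pi x).
    numpy's sinc() already includes the factor of pi."""
    return np.sinc(order * open_fraction) ** 2

# Illustrative numbers only: green light, 2 um grating period, 10 um target shift.
theta, t = grating_pair_design(wavelength_um=0.532, period_um=2.0, displacement_um=10.0)
print(f"diffraction angle ~ {np.degrees(theta):.1f} deg, grating separation ~ {t:.1f} um")
print(f"relative +/-1 order intensity at 50% open fraction: "
      f"{relative_spot_intensity(1, 0.5):.3f}")
```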
In another embodiment, FIG. 4 is a diagram illustrating a refractive beam steering element in accordance with some embodiments. In the example of FIG. 4, a beam steering element 400 (e.g., one of the beam steering elements 124, 126 of FIG. 1 or beam steering element 214 of FIG. 2) is a stacked pair of prisms including a first prism 402 and a second prism 404 that refracts incident light into several beams traveling in different directions. As shown, the first prism 402 angularly disperses an incident beam of white light 406 from the display panel 112 into three angularly-deviated rays. As the refractive index of prisms varies with the wavelength (i.e., color) of light, rays of different colors are refracted differently and leave the first prism 402 at different angles, thereby separating the incident beam of white light 406 into red, green, and blue rays. The red ray R has a longer wavelength than the green ray G and the blue ray B, and therefore leaves the first prism 402 with less angular deviation relative to the incident beam of white light 406 than the other rays. Similarly, the green ray G has a longer wavelength than the blue ray B, and therefore leaves the first prism 402 with less angular deviation relative to the incident beam of white light 406 than the blue ray B. As shown, the second prism 404 receives the three angularly-deviated rays from the first prism 402 and reverses the angular deviations such that the red, green, and blue rays leaving the second prism 404 (and therefore leaving the beam steering element 400) are displaced laterally while having a direction matching the incidence angle of the incident beam of white light 406. In this manner, the beam steering element 400 spreads out or changes the location of pixels at a sub-pixel scale.

FIG. 5 is a diagram illustrating another refractive beam steering element in accordance with some embodiments. In the example of FIG. 5, a beam steering element 500 (e.g., one of the beam steering elements 124, 126 of FIG. 1 or beam steering element 214 of FIG. 2) includes a liquid crystal cell 502 having liquid crystal molecules 504 oriented so as to form a birefringent material having a refractive index that depends on the polarization and propagation direction of light. As shown, the liquid crystal molecules 504 are oriented to have their symmetry axes at 45 degrees relative to the substrate plane. Accordingly, due to the double refraction phenomenon whereby a ray of incident light is split based on polarization into two rays taking slightly different paths, an incident beam of unpolarized light 506 is split into two rays 508, 510 and steered to one of two deflection angles, depending on polarization state. For the incident beam of unpolarized light 506, a first ray 508 having a first polarization state (e.g., light whose polarization is perpendicular to the optic axis of the liquid crystal cell 502, referred to as "ordinary axis oriented") passes through the liquid crystal cell 502 without deflection. The second ray 510 having a second polarization state (e.g., light whose polarization is in the direction of the optic axis of the liquid crystal cell 502, referred to as "extraordinary axis oriented") is deflected and is passed with a lateral displacement d. In some embodiments, the beam steering element 500 includes liquid crystal molecules 504 that are oriented as illustrated in FIG. 5 and polymerized such that the liquid crystal molecules 504 are linked to be static in that configuration, thereby forming a beam replication assembly. In other embodiments, the beam steering element 500 further includes a polarization switch (not shown) stacked on top of the polymerized liquid crystal cell that switches polarization between two values so that the liquid crystal molecules 504 only receive polarized light (rather than the beam of unpolarized light 506 illustrated in FIG. 5). Accordingly, depending on the polarization of the incident light, the incoming polarized light is either passed through or deviated (rather than passing both rays 508, 510 as illustrated in FIG. 5).
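The polarization-dependent routing described for FIG. 5 can be summarized in a small sketch: unpolarized input produces both the undeflected and the displaced ray (passive replication), while a polarization switch in front of the cell would select one output at a time. The names and the displacement value below are hypothetical illustrations, not an interface defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class OutputRay:
    lateral_offset_um: float   # displacement relative to the incident ray path
    relative_intensity: float  # fraction of the incident intensity

def birefringent_cell_outputs(polarization: str, displacement_um: float):
    """Outputs of a FIG. 5-style liquid crystal cell for a given input polarization:
    'unpolarized'    -> passive replication (both rays),
    'ordinary'       -> passes straight through,
    'extraordinary'  -> passed with lateral displacement d."""
    if polarization == "unpolarized":
        # Both polarization components are present; each carries ~half the light.
        return [OutputRay(0.0, 0.5), OutputRay(displacement_um, 0.5)]
    if polarization == "ordinary":
        return [OutputRay(0.0, 1.0)]
    if polarization == "extraordinary":
        return [OutputRay(displacement_um, 1.0)]
    raise ValueError(f"unknown polarization state: {polarization}")

# Passive replication (unpolarized input) vs. switched operation.
print(birefringent_cell_outputs("unpolarized", displacement_um=5.0))
print(birefringent_cell_outputs("extraordinary", displacement_um=5.0))
```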
It should be noted that while embodiments implementing various beam steering elements (such as the beam steering elements of FIGS. 3-5 and 8-9) are described herein for illustrative purposes, other suitable beam steering elements capable of lateral (i.e., not angular) sub-pixel shifts may be implemented in place of the beam steering elements described herein unless otherwise noted.

FIG. 6 illustrates an example method 600 of operation of the near-eye display system 100 for display of super-resolution imagery in accordance with various embodiments. As described above relative to FIGS. 1-2, the near-eye display system 100 takes advantage of the visual persistence effect to provide a time-multiplexed display of shifted imagery so that a series of images is perceived by the user as either a single super-resolution image or a native-resolution image with effectively larger pixels that conceal the non-emissive portions of the display panel 112. The method 600 illustrates one iteration of the process for rendering and displaying an image for one of the left-eye display 104 or right-eye display 106, and thus the illustrated process is repeatedly performed in parallel for each of the displays 104, 106 to generate and display a different stream or sequence of frames for each eye at different points in time, and thus provide a 3D, autostereoscopic VR or AR experience to the user.

The method 600 initiates at block 602 with determining a display image to be generated and displayed at the display panel 112. In some embodiments, the rendering component 134 identifies the image content to be displayed to the corresponding eye of the user as a frame. In at least one embodiment, the rendering component 134 receives pose data from various pose-related sensors, such as a gyroscope, accelerometer, magnetometer, Global Positioning System (GPS) sensor, and the like, to determine a current pose of the apparatus 108 (e.g., HMD) used to mount the displays 104, 106 near the user's eyes. From this pose data, the CPU 136, executing the rendering program 144, can determine a corresponding current viewpoint of the subject scene or object, and from this viewpoint and graphical and spatial descriptions of the scene or object provided as rendering information 146, determine the imagery to be rendered.

At block 604, the rendering program 144 manipulates the CPU 136 to sample the source image and generate a first array of pixels representing imagery to be rendered (e.g., as determined in block 602). The generated first array of pixels is subsequently transmitted to the display panel 112 to be displayed. At block 606, the beam steering controller 132 configures the beam steering assembly 120 to be in a first configuration state while the display controller 130 controls the display panel 112 facing the beam steering assembly 120 to display the first array of pixels generated in block 604.
In some embodiments, such as described above relative to time t0 in FIG. 2, the first configuration state of the beam steering assembly is a deactivated state in which the optical beam steering element 214 allows the first array of pixels to be passed without any lateral displacement. In other embodiments, the first configuration state of the beam steering assembly laterally displaces the first array of pixels such that the pixels are not laterally aligned with the original optical path between the display panel 112 and the beam steering assembly 120. Accordingly, the beam steering assembly, while in the first configuration state, imparts a first lateral displacement to the first array of pixels. As explained above, for various beam steering devices, the switching between configuration states for the beam steering assembly typically includes activating or deactivating a particular combination of stages of the stack of beam steering elements comprising the beam steering assembly, such that the array of pixels leaving the beam steering assembly is laterally shifted based on the configuration state. Note that the process of block 606 may be performed concurrently with the corresponding image generation at block 604.

At block 608, the rendering program 144 manipulates the CPU 136 to sample the source image and generate a second array of pixels representing imagery to be rendered (e.g., as determined in block 602). The generated second array of pixels is subsequently transmitted to the display panel 112 to be displayed. At block 610, the beam steering controller 132 configures the beam steering assembly 120 to be in a second configuration state while the display controller 130 controls the display panel 112 facing the beam steering assembly 120 to display the second array of pixels generated in block 608. In some embodiments, such as described above relative to time t1 in FIG. 2, the second configuration state of the beam steering assembly is an activated state in which the optical beam steering element 214 laterally displaces the second array of pixels such that they are not laterally aligned with the first array of pixels. As explained above, for various beam steering devices, the switching between configuration states for the beam steering assembly typically includes activating or deactivating a particular combination of stages of the stack of beam steering elements comprising the beam steering assembly, such that the array of pixels leaving the beam steering assembly is laterally shifted based on the configuration state. Note that the process of block 610 may be performed concurrently with the corresponding image generation at block 608.

At block 612, the display controller 130 instructs the display panel to display the first and second arrays of pixels (e.g., as generated from blocks 604-610) within a visual persistence interval so that the first and second arrays of pixels are perceived by a user to be a single image with an effective resolution that is higher than a native resolution of the display panel 112, thereby presenting a super-resolution image. It should be noted that although the method 600 of FIG. 6 is described in the context of combining only two arrays of pixels that are laterally shifted relative to each other to generate a super-resolution image, those skilled in the art will recognize that the number and rate of iterations of the processes of blocks 604-610 may be varied to increase the number of laterally shifted images to be displayed by display panel 112 during the visual persistence interval of the human eye. For example, assuming the beam steering assembly 120 includes a stack of multiple different beam steering elements (e.g., beam steering elements 124, 126 of FIG. 1) with different configuration states, the processes of blocks 604-608 are repeated for each of the different configuration states so that multiple arrays of pixels that are laterally shifted relative to each other may be generated and displayed so as to be perceived as a single super-resolution image by the user. Alternatively, rather than resampling the source image between each lateral displacement of pixels, the same array of pixels can be shifted across the various configuration states and displayed so as to be perceived as a single standard-resolution image with reduced screen-door effect by the user.
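A minimal sketch of a method-600 style loop is shown below, assuming hypothetical stand-ins for the display controller 130, the beam steering controller 132, and the renderer; the 10 ms budget reflects the approximate visual persistence interval cited above, and none of the class or function names come from this disclosure.

```python
import time

PERSISTENCE_INTERVAL_S = 0.010  # ~10 ms visual persistence budget (approximate)

# Hypothetical stand-ins for the beam steering controller 132 / display controller 130.
class BeamSteering:
    def set_state(self, state):      # activate/deactivate steering stages
        self.state = state

class Display:
    def show(self, pixels):          # scan the pixel array into the panel
        self.last_frame = pixels

def render_subframe(state):
    """Hypothetical renderer (blocks 604/608): a real implementation would sample
    the source image, possibly with a sub-pixel viewport offset per state."""
    return [[state] * 4 for _ in range(4)]   # placeholder 4x4 "pixel array"

def show_super_resolution_frame(display, steering, states=("DEACTIVATED", "ACTIVATED")):
    """One iteration: render, configure steering, and display one sub-frame per
    configuration state, all within the visual persistence interval."""
    start = time.monotonic()
    for state in states:
        pixels = render_subframe(state)      # blocks 604 / 608
        steering.set_state(state)            # blocks 606 / 610 (steering config)
        display.show(pixels)                 # blocks 606 / 610 (scan-out)
    elapsed = time.monotonic() - start       # block 612: sub-frames fuse if fast enough
    return elapsed <= PERSISTENCE_INTERVAL_S

print(show_super_resolution_frame(Display(), BeamSteering()))
```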
As demonstrated above, the various optical beam steering assemblies described may be advantageously used to leverage the visual persistence effect of the human visual system to provide dynamically time-multiplexed, spatially shifted images that are perceived by a user as super-resolution images or native-resolution images with reduced perception of non-emissive portions of the display. Additionally, in other embodiments, the optical beam steering assemblies described may be used to passively replicate (i.e., without sending control voltages and changing states of the beam steering assemblies) and spatially shift incident light beams coming from the display panel to provide native-resolution images with reduced perception of non-emissive portions of the display. It will be appreciated that other embodiments provide for passive super-resolution without any time-multiplexing of images.

FIG. 7 is a diagram illustrating a method of generating passive super-resolution images in accordance with some embodiments. As shown, a first image 702 includes a plurality of pixels (with only four pixels 704, 706, 708, 710 shown for ease of illustration). A second image 712 provides the same content data (e.g., pixels 704, 706, 708, 710) as the first image 702, but is laterally shifted in position relative to the first image 702. For example, in some embodiments, the first image 702 and the second image 712 are generated by presenting unpolarized light from a display screen to the beam steering element 500 of FIG. 5. For the incident unpolarized light, a first set of light rays having a first polarization state (e.g., light whose polarization is perpendicular to the optic axis of the beam steering element 500) passes through without deflection, thereby providing the first image 702. Additionally, for the same incident unpolarized light, a second set of light rays having a second polarization state (e.g., light whose polarization is in the direction of the optic axis of the beam steering element 500) is deflected and passed through with a lateral displacement d of a sub-pixel distance. In this example, the lateral displacement d is half a pixel in the x-axis direction and half a pixel in the y-axis direction, thereby diagonally shifting each of the pixels by half a pixel for the second image 712.

The first image 702 is overlaid with one or more sub-pixel shifted copies of itself (e.g., the second image 712) to generate a summed image 714 which is perceivable as having improved resolution relative to that of the first image 702 and the second image 712. It will be appreciated that, depending on the overlap, certain sub-pixel portions of the summed image 714 get contributions from the same pixel value. For example, the sub-pixel portion 716 provides image data that is contributed only by the value of pixel 704. Other sub-pixel portions of the summed image 714 get contributions from multiple different pixel values; for example, another sub-pixel portion (not separately labeled) provides image data that is contributed by both the values of pixel 704 and pixel 706. In this manner, the effective resolution of the perceived image (i.e., summed image 714) is increased without requiring time-multiplexing of images or coordinating the rendering of images with varying the states of beam steering elements.
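The passive summing of FIG. 7 can be illustrated with a toy numeric model that overlays an image with a copy of itself shifted half a pixel diagonally on a 2x upsampled grid; the grid size, pixel values, and accumulation scheme are assumptions for illustration only.

```python
import numpy as np

def passive_super_resolution(image, shift=(0.5, 0.5)):
    """Toy model of FIG. 7-style summing: overlay an image with a copy of itself
    shifted by half a pixel diagonally, accumulated on a 2x upsampled grid."""
    h, w = image.shape
    up = np.kron(image, np.ones((2, 2)))           # each pixel covers a 2x2 area
    canvas = np.zeros((2 * h, 2 * w))
    canvas += up                                   # first image 702 (undeflected rays)
    dy, dx = int(shift[0] * 2), int(shift[1] * 2)  # half-pixel shift on the 2x grid
    canvas += np.roll(up, (dy, dx), axis=(0, 1))   # second image 712 (displaced rays)
    return canvas / 2.0                            # summed image 714

img = np.array([[1.0, 0.0],
                [0.0, 1.0]])                       # stand-in for pixels 704-710
print(passive_super_resolution(img))
```

In the printed result, some sub-pixel cells carry a single source pixel's value while others blend two neighboring values, mirroring the overlap behavior described above.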
FIG. 8 is a diagram illustrating a top-down view of a birefringent beam steering element in accordance with some embodiments. In the example of FIG. 8, the beam steering element 800 (e.g., one of the beam steering elements 124, 126 of FIG. 1 or beam steering element 214 of FIG. 2) is a layer of birefringent material that is tilted relative to a planar axis 802. As defined herein, the planar axis 802 represents a longitudinal axis along which the beam steering element 800 would be oriented if the beam steering element were parallel to the display panel 112. In some embodiments, the layer of birefringent material of the beam steering element 800 is a birefringent plate 800 including a stretched polymer plate. Many polymers have a polarizability anisotropy or are inherently isotropic due to their three-dimensional chemical structures, and as such do not show birefringence in an unstressed state. In a completely amorphous state, polarizability anisotropies of the repeating units compensate for each other because the polymer molecular chains are randomly oriented. As a result, the polymer macroscopically becomes optically isotropic and exhibits no birefringence. The polymer, however, exhibits birefringence when the polymer molecular chains are oriented by stress. For example, when the polymer is subjected to stresses from extrusion, stretching and injection, blow molding processes, or post-manufacturing unintentional damage, the induced stress shows up as birefringence in the finished materials. It should be noted that while embodiments implementing various beam steering elements as stretched polymer plates or stretched polymer films are described herein for illustrative purposes, other suitable birefringent beam steering elements capable of lateral (i.e., not angular) sub-pixel shifts may be implemented in place of the beam steering elements described herein unless otherwise noted. For example, various birefringent materials formed from stress and strain due to external forces and/or deformation acting on materials that are not naturally birefringent, such as deformed glass, plastic lenses, and stressed polymer castings, may be used without departing from the scope of this disclosure.

As shown in FIG. 8, the birefringent plate 800 includes an in-plane symmetry axis 804 that is parallel to the longitudinal length of the birefringent plate 800. For each input ray of light 806, the birefringent plate 800 generates two output rays of light 808, 810 by replicating the input ray of light 806. Thus, both of the replicated rays of light (i.e., output rays of light 808, 810) represent the same visual content as the input ray of light 806 incident on the beam steering assembly. One of the output rays of light 808 is passed through the birefringent plate 800 along substantially the same direction as an optical path 812 of the incident, input ray of light 806 (i.e., the light ray incident on the beam steering assembly).
The other output ray of light 810 is a replicated ray of the incident, input ray of light 806 that is laterally displaced (e.g., a two-dimensional shift of incident light in the X- and/or Y-axis directions of FIG. 8) relative to the input ray of light 806 and the output ray of light 808. It should be noted that the birefringent plate 800 causes lateral ray displacement but does not cause angular displacement of light rays. That is, a light ray emitted from the display panel 112 does not undergo a change in the angular direction of its light path (i.e., the optical path of the light beam incident on the beam steering element 800). Thus, the beam steering element (i.e., birefringent plate 800) replicates pixels of each given image and laterally displaces the replicated pixels so as to project an image with pixels of a perceived larger size (e.g., due to increased effective pixel count) that conceals the non-emissive space of display panel 112 between pixels.

In some embodiments, the beam steering element 800 is coupled to an actuator 814 configured to rotate the beam steering element 800 around the X-axis, Y-axis, and/or Z-axis so as to change the relative angle between the in-plane symmetry axis 804 of the birefringent plate and the planar axis 802. In various embodiments, the actuator 814 is controlled by the rendering component 134 to change the amount of lateral displacement between the two output rays of light 808, 810. In various embodiments, the actuator 814 may include optomechanical actuators such as piezo-electric, voice-coil, or electro-active polymer actuators. Although described here in the context of optomechanical actuators, those skilled in the art will recognize that any mechanical actuator capable of physically rotating the beam steering element 800 may be used without departing from the scope of this disclosure.

In various embodiments, a distance Δx by which a replicated ray is laterally displaced is represented by the following equations:

\Delta x = \alpha \cdot \Delta z    (5)

\alpha = \theta - \arctan\!\left(\frac{n_o^2}{n_e^2}\tan\theta\right)    (6)

where Δz is the thickness of the birefringent plate 800, α is the angular deviation of the replicated ray inside the birefringent plate 800 (which, after the ray exits the plate, appears as the lateral displacement Δx), n_o and n_e are the ordinary and extraordinary refractive indices of the birefringent material, and θ represents the tilt angle between the in-plane symmetry axis 804 of the birefringent plate and the incoming input ray of light 806 incident on the beam steering assembly (or the planar axis 802). Generally, the maximum displacement is achieved when the in-plane symmetry axis 804 of the birefringent plate is at 45 degrees relative to the incoming ray; at 90 degrees, zero displacement of the replicated ray occurs.
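Equations (5) and (6) can be checked numerically with the sketch below; the refractive indices and plate thickness are illustrative values for a generic positive-uniaxial stretched polymer, not parameters given in this disclosure.

```python
import numpy as np

def walk_off_displacement(theta_deg, n_o, n_e, thickness_um):
    """Equations (5)-(6): lateral displacement of the replicated (extraordinary) ray
    after traversing a tilted birefringent plate of the given thickness."""
    theta = np.radians(theta_deg)
    # Equation (6): angular deviation of the replicated ray inside the plate.
    alpha = theta - np.arctan((n_o**2 / n_e**2) * np.tan(theta))
    # Equation (5): the deviation over the plate thickness becomes a lateral shift.
    return alpha * thickness_um

# Illustrative values only: stretched-polymer-like indices, 100 um plate.
for tilt in (15, 30, 45, 60, 75, 90):
    dx = walk_off_displacement(tilt, n_o=1.50, n_e=1.60, thickness_um=100.0)
    print(f"tilt {tilt:2d} deg -> lateral displacement ~ {dx:6.2f} um")
```

Consistent with the text above, the displacement peaks near a 45-degree tilt and falls to essentially zero at 90 degrees.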
It should be noted that although the example of FIG. 8 is described in the context of a beam steering element 800 having a replication factor of two (i.e., replicating an incoming input ray and laterally shifting the replicated ray by a sub-pixel distance to another position), other embodiments may employ a stack of birefringent plate beam steering elements to increase the amount of ray multiplication. For example, as described below in more detail relative to FIG. 9, instead of using a single tilted birefringent plate, a stack of two tilted birefringent plates that each have a replication factor of two forms a single beam steering element having a replication factor of four, which may be controlled by the beam steering controller 132 and/or the rendering component 134 of FIG. 1 to allow for the replication and steering of each sub-pixel of the display panel to four different positions (i.e., three laterally displaced positions plus one original sub-pixel position).

FIG. 9 is a diagram illustrating a top-down view of another birefringent beam steering element in accordance with some embodiments. In the example of FIG. 9, the beam steering element 900 (e.g., one of the beam steering elements 124, 126 of FIG. 1 or beam steering element 214 of FIG. 2) is tilted relative to the planar axis 902 and includes a stack of birefringent plates including a first birefringent plate 904 and a second birefringent plate 906. Similar to that of FIG. 8, the planar axis 902 represents a longitudinal axis along which the beam steering element 900 would be oriented if the beam steering element were parallel to the display panel 112. The beam steering element 900 further includes a quarter wave plate 908 positioned between the first birefringent plate 904 and the second birefringent plate 906. In various embodiments, each layer of birefringent material of the beam steering element 900 (e.g., the first birefringent plate 904 and the second birefringent plate 906) is a stretched polymer plate. However, while embodiments implementing various beam steering elements as stretched polymer plates or stretched polymer films are described herein for illustrative purposes, other suitable birefringent polymer beam steering elements capable of lateral (i.e., not angular) sub-pixel shifts may be implemented in place of the beam steering elements described herein unless otherwise noted. For example, various birefringent materials formed from stress and strain due to external forces and/or deformation acting on materials that are not naturally birefringent, such as deformed glass, plastic lenses, and stressed polymer castings, may be used without departing from the scope of this disclosure.

As shown in FIG. 9, the first birefringent plate 904 includes an in-plane symmetry axis 910 that is parallel to the longitudinal length of the first birefringent plate 904. For each input ray of light 914, the first birefringent plate 904 generates two output rays of light (not shown). One of the output rays of light is passed through the first birefringent plate 904 along substantially the same direction as the optical path of the incident, input ray of light 914. The other output ray of light is a replicated ray of the incident, input ray of light 914 that is laterally displaced (e.g., a two-dimensional shift of incident light in the X- and/or Y-axis directions of FIG. 9) relative to the input ray of light 914. Thus, both of the replicated rays of light (i.e., the output rays of light output from the first birefringent plate 904) represent the same visual content as the input ray of light 914 incident on the beam steering assembly. The quarter wave plate 908 polarizes those two output rays of light (i.e., the replicated rays resulting from the input ray of light 914 passing through the first birefringent plate 904) to generate light rays having circular polarization prior to the light rays reaching the second birefringent plate 906; if light incident on the birefringent plates is not circularly polarized, spot multiplication (i.e., light ray replication) does not occur. For each input ray of light, the second birefringent plate 906 also generates two output rays of light (not shown). Accordingly, the two polarized output rays of light (i.e., the replicated rays resulting from the input ray of light 914 passing through the first birefringent plate 904 and the quarter wave plate 908) result in a total of at least four output rays of light 916. Thus, the replicated rays of light (i.e., output rays of light 916) represent the same visual content as the input ray of light 914 incident on the beam steering assembly. One of the output rays of light 916 is passed through the beam steering element 900 along substantially the same direction as the optical path of the incident, input ray of light 914 (i.e., the light ray incident on the beam steering assembly), while the other three output rays of light 916 are laterally displaced relative to that optical path.
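The four-spot behavior of the two-plate stack can be sketched as follows, treating each replication-factor-2 plate as either leaving a ray in place or adding its lateral shift; the particular shift directions and half-pixel magnitudes are assumed for illustration and are not specified by this disclosure.

```python
import itertools

def stacked_plate_spots(plate_shifts):
    """Spot positions produced by a stack of replication-factor-2 plates: each plate
    either leaves a ray in place or adds its lateral shift, so N plates give up to
    2**N spots (4 for the two-plate element 900 of FIG. 9)."""
    spots = set()
    for choices in itertools.product((0, 1), repeat=len(plate_shifts)):
        x = sum(c * s[0] for c, s in zip(choices, plate_shifts))
        y = sum(c * s[1] for c, s in zip(choices, plate_shifts))
        spots.add((x, y))
    return sorted(spots)

# Assumed half-pixel shifts: plate 904 along X, plate 906 along Y.
print(stacked_plate_spots([(0.5, 0.0), (0.0, 0.5)]))
# -> [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]: one original + three displaced spots
```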
In this manner, the beam steering element 900 replicates pixels of each given image and laterally displaces the replicated pixels so as to project an image with pixels of a perceived larger size (e.g., due to increased effective pixel count) that conceals the non-emissive space of display panel 112 between pixels. In some embodiments, the beam steering element 900 is coupled to an actuator 918 configured to rotate the beam steering element 900 around the X-axis, Y-axis, and/or Z-axis so as to change the relative angle between the in-plane symmetry axes 910, 912 of the beam steering element 900 and the planar axis 902. In various embodiments, the actuator 918 is controlled by the rendering component 134 to change the amount of lateral displacement between the four output rays of light 916. In various embodiments, the actuator 918 may include optomechanical actuators such as piezo-electric, voice-coil, or electro-active polymer actuators. Although described here in the context of optomechanical actuators, those skilled in the art will recognize that any mechanical actuator capable of physically rotating the beam steering element 900 may be used without departing from the scope of this disclosure.

In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM), or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or another instruction format that is interpreted or otherwise executable by one or more processors. A computer readable storage medium may include any storage medium, or combination of storage media, accessible by a computer system during use to provide instructions and/or data to the computer system.
Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).

Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below. 16148489 google llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:33AM Apr 27th, 2022 08:33AM Alphabet Technology General Retailers
