Microsoft

- NASDAQ:MSFT
Last Updated 2022-11-28

Patent Grants Data

Patents granted to organizations.
Ticker Symbol Entity Name Publication Date Filing Date Patent ID Invention Title Abstract Patent Number Claims Number of Claims Description Application Number Assignee Country Kind Code Kind Code Description url Classification Code Length of Grant Date Added Date Updated Company Name Sector Industry
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Jul 29th, 2020 12:00AM https://www.uspto.gov?id=US11317084-20220426 Intra-picture prediction using non-adjacent reference lines of sample values Innovations in intra-picture prediction with multiple candidate reference lines available are described herein. For example, intra-picture prediction for a current block uses a non-adjacent reference line of sample values to predict the sample values of the current block. This can improve the effectiveness of the intra-picture prediction when the reference line of sample values that is adjacent the current block includes significant capture noise, significant quantization error, or significantly different values (compared to the current block) due to an occlusion. Innovations described herein include, but are not limited to, the following: intra-picture prediction with multiple candidate reference lines available; encoding/decoding of reference line indices using prediction; filtering of reference sample values; residue compensation; weighted prediction; mode-dependent padding to replace unavailable reference sample values; using in-loop-filtered reference sample values; encoder-side decisions for selecting reference lines; and post-filtering of predicted sample values. 11317084 1. 
In a computer system that implements a video encoder, a method comprising: receiving a picture; encoding the picture, thereby producing encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction for the current block, and wherein the performing the intra-picture prediction for the current block includes: selecting one of multiple candidate reference lines of sample values outside the current block, the multiple candidate reference lines including the non-adjacent reference line of sample values; and filtering the selected reference line of sample values; and outputting the encoded data as part of a bitstream. 2. The method of claim 1, wherein the filtering uses a filter having a one-dimensional kernel that covers multiple sample values within the selected reference line. 3. The method of claim 2, wherein the one-dimensional kernel has three taps. 4. The method of claim 3, wherein the one-dimensional kernel is a symmetric low-pass filter having a normalization factor of 4. 5. The method of claim 1, wherein the performing the intra-picture prediction for the current block further includes deciding to perform the filtering based at least in part on one or more factors, the one or more factors including intra-picture prediction mode for the current block. 6. The method of claim 1, wherein the performing the intra-picture prediction for the current block further includes deciding to perform the filtering based at least in part on one or more factors, the one or more factors including block size of the current block. 7. The method of claim 1, wherein the performing the intra-picture prediction for the current block includes predicting the sample values of the current block using at least some sample values of the filtered reference line of sample values. 8. 
A computer system comprising: a coded data buffer configured to store encoded data as part of a bitstream; a video decoder configured to perform operations to decode the encoded data, thereby reconstructing a picture, the operations including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction for the current block, and wherein the performing the intra-picture prediction includes: selecting one of multiple candidate reference lines of sample values outside the current block, the multiple candidate reference lines including the non-adjacent reference line of sample values; and filtering the selected reference line of sample values; and a picture buffer configured to store the reconstructed picture for output. 9. The computer system of claim 8, wherein the filtering uses a filter having a one-dimensional kernel that covers multiple sample values within the selected reference line. 10. The computer system of claim 9, wherein the one-dimensional kernel has three taps. 11. The computer system of claim 10, wherein the one-dimensional kernel is a symmetric low-pass filter having a normalization factor of 4. 12. The computer system of claim 8, wherein the performing the intra-picture prediction for the current block further includes deciding to perform the filtering based at least in part on one or more factors, the one or more factors including intra-picture prediction mode for the current block. 13. The computer system of claim 8, wherein the performing the intra-picture prediction for the current block further includes deciding to perform the filtering based at least in part on one or more factors, the one or more factors including block size of the current block. 14. 
The computer system of claim 8, wherein the performing the intra-picture prediction for the current block includes predicting the sample values of the current block using at least some sample values of the filtered reference line of sample values. 15. One or more computer-readable media having stored thereon encoded data in a bitstream, the encoded data being organized to facilitate processing by a video decoder with operations comprising: decoding the encoded data, thereby reconstructing a picture, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction for the current block, and wherein the performing the intra-picture prediction for the current block includes: selecting one of multiple candidate reference lines of sample values outside the current block, the multiple candidate reference lines including the non-adjacent reference line of sample values; and filtering the selected reference line of sample values; and storing the reconstructed picture for output. 16. The one or more computer-readable media of claim 15, wherein the filtering uses a filter having a one-dimensional kernel that covers multiple sample values within the selected reference line. 17. The one or more computer-readable media of claim 16, wherein the one-dimensional kernel has three taps. 18. The one or more computer-readable media of claim 17, wherein the one-dimensional kernel is a symmetric low-pass filter having a normalization factor of 4. 19. The one or more computer-readable media of claim 15, wherein the performing the intra-picture prediction for the current block further includes deciding to perform the filtering based at least in part on one or more factors, the one or more factors including intra-picture prediction mode for the current block and/or block size of the current block. 20. 
The one or more computer-readable media of claim 19, wherein the performing the intra-picture prediction for the current block further includes one or more of: performing padding to replace one or more unavailable reference sample values in the selected reference line; performing residue compensation; and performing weighted prediction.

20

CROSS REFERENCE TO RELATED APPLICATIONS This application is a continuation of U.S. patent application Ser. No. 16/099,077, filed Nov. 5, 2018, which is a U.S. National Stage of International Application No. PCT/CN2016/080966, filed May 4, 2016, which was published in English under PCT Article 21(2), and which is incorporated by reference herein in its entirety. BACKGROUND Engineers use compression (also called source coding or source encoding) to reduce the bit rate of digital video. Compression decreases the cost of storing and transmitting video information by converting the information into a lower bit rate form. Decompression (also called decoding) reconstructs a version of the original information from the compressed form. A “codec” is an encoder/decoder system. Over the last 25 years, various video codec standards have been adopted, including the ITU-T H.261, H.262 (MPEG-2 or ISO/IEC 13818-2), H.263, H.264 (MPEG-4 AVC or ISO/IEC 14496-10) standards, the MPEG-1 (ISO/IEC 11172-2) and MPEG-4 Visual (ISO/IEC 14496-2) standards, and the SMPTE 421M (VC-1) standard. More recently, the H.265/HEVC standard (ITU-T H.265 or ISO/IEC 23008-2) has been approved. A video codec standard typically defines options for the syntax of an encoded video bitstream, detailing parameters in the bitstream when particular features are used in encoding and decoding. In many cases, a video codec standard also provides details about the decoding operations a video decoder should perform to achieve conforming results in decoding. 
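The claims above recite a three-tap symmetric low-pass filter with a normalization factor of 4 for smoothing a selected reference line. The classic [1, 2, 1]/4 kernel fits that description (it is the kernel used for reference-sample smoothing in H.265); the endpoint handling in this sketch is an assumption, since the claims do not specify it:

```python
def filter_reference_line(samples):
    """Apply a 3-tap symmetric low-pass filter ([1, 2, 1] / 4) along a
    reference line of sample values. Endpoints are copied unfiltered
    (an assumption; the claims do not specify endpoint handling)."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        # +2 implements rounding before the divide-by-4 normalization
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) >> 2
    return out
```

On a flat reference line the filter is an identity, while isolated outliers (e.g., capture noise) are pulled toward their neighbors.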
Aside from codec standards, various proprietary codec formats define other options for the syntax of an encoded video bitstream and corresponding decoding operations. Some codec standards and formats use intra-picture prediction when compressing a picture. In general, for intra-picture prediction, the sample values of a current block are predicted using the neighboring sample values. The neighboring sample values, which are called reference sample values, have been encoded and reconstructed (during encoding) or reconstructed (during decoding) before the current block. Conventionally, reference sample values in the nearest row (adjacent row) above the current block and reference sample values in the nearest column (adjacent column) left of the current block are available for use in intra-picture prediction of the current block. Some intra-picture prediction modes are directional, or angular, modes in which reference sample values are propagated along a prediction direction into the current block. Other intra-picture prediction modes such as DC (average) prediction mode and planar prediction mode are not directional, but instead use combinations of reference sample values to predict the sample values of the current block. During encoding, after intra-picture prediction, differences (called residual values) between the original sample values of the current block and predicted sample values of the current block can be calculated and encoded. During decoding, the residual values for the current block can be decoded and combined with the predicted sample values of the current block to reconstruct the sample values of the current block. Conventionally, intra-picture prediction uses reference sample values in the adjacent row above the current block and/or reference sample values in the adjacent column left of the current block. In some cases, the reference sample values do not provide for effective intra-picture prediction. 
This might be the case, for example, when the reference sample values in the adjacent row/column include noise due to capture (i.e., capture noise) or compression (i.e., quantization error, reconstruction noise). Or, as another example, this might be the case when there is an object in the adjacent row/column that occludes an object shown in the current block. Whatever the cause, in some cases, intra-picture prediction using the reference sample values of an adjacent row/column is ineffective. SUMMARY In summary, the detailed description presents innovations in intra-picture prediction with multiple candidate reference lines available. For example, intra-picture prediction uses a non-adjacent row and/or column of reference sample values to predict the sample values of a current block. This can improve the effectiveness of the intra-picture prediction when the row and/or column of reference sample values adjacent the current block includes significant capture noise, significant quantization error, or significantly different sample values than the current block due to an occlusion. According to one aspect of the innovations described herein, a video encoder or image encoder receives a picture, encodes the picture to produce encoded data, and outputs the encoded data as part of a bitstream. As part of the encoding, the encoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line (e.g., row or column) of sample values is available for the intra-picture prediction. As part of the encoding, the encoder can determine a predictor for a reference line index. The reference line index identifies a reference line (e.g., row or column) of sample values used in the intra-picture prediction for the current block. The predictor is used to encode the reference line index. 
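The text above leaves both the predictor derivation and the coding of the reference line index open. As a purely hypothetical illustration, one simple scheme derives the predictor from the indices of already-coded neighbor blocks, then signals a one-bit match flag plus a remainder when the flag is zero:

```python
def predict_ref_line_index(above_idx, left_idx):
    """Hypothetical predictor for the current block's reference line index,
    derived from indices used by already-coded neighbor blocks. (min() is
    one plausible rule; the text does not fix the derivation.)"""
    return min(above_idx, left_idx)

def encode_ref_line_index(actual, predictor):
    """Encode the index as (1,) when it equals the predictor, else as
    (0, remainder), skipping the predictor's codeword so the remainder
    alphabet stays dense."""
    if actual == predictor:
        return (1,)
    remainder = actual if actual < predictor else actual - 1
    return (0, remainder)

def decode_ref_line_index(bits, predictor):
    """Invert encode_ref_line_index."""
    if bits[0] == 1:
        return predictor
    r = bits[1]
    return r if r < predictor else r + 1
```

When the predictor is usually correct, most blocks spend only the single flag bit on the index, which is how effective prediction reduces signaling cost.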
Effective prediction of reference line indices can reduce the bitrate associated with signaling of the reference line indices when multiple candidate reference lines are available for intra-picture prediction. A corresponding video decoder or image decoder receives encoded data as part of a bitstream, decodes the encoded data to reconstruct a picture, and outputs the reconstructed picture. As part of the decoding, the decoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line (e.g., row or column) of sample values is available for the intra-picture prediction. As part of the decoding, the decoder can determine a predictor for a reference line index. The predictor is used to decode the reference line index. According to another aspect of the innovations described herein, an encoder or decoder uses residue compensation in intra-picture prediction with multiple candidate reference lines available. For example, as part of the residue compensation, for a predicted sample value at a given position in the current block, the encoder or decoder calculates a residual value and uses the residual value to adjust the predicted sample value at the given position in the current block. The residual value is based on a difference between a reconstructed sample value at a given position in an offset region, outside the current block, and a predicted sample value at the given position in the offset region. Residue compensation can improve the effectiveness of intra-picture prediction for the current block by adjusting the predicted sample values of the current block based on results of intra-picture prediction in the adjacent offset region. According to another aspect of the innovations described herein, an encoder or decoder uses filtering of reference lines in intra-picture prediction with multiple candidate reference lines available. 
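The residue-compensation adjustment described in this summary can be sketched as follows; the column-wise mapping from offset-region positions to block positions and the fixed 0.5 weight are illustrative assumptions, not details fixed by the text:

```python
def residue_compensate(pred_block, recon_offset, pred_offset, weight=0.5):
    """Adjust predicted samples of the current block using the prediction
    error observed in an offset region outside the block. The residual for
    column x is (reconstructed - predicted) at position x of the offset
    region; applying it column-wise with a 0.5 weight is an assumption."""
    out = []
    for row in pred_block:
        out.append([p + int(weight * (recon_offset[x] - pred_offset[x]))
                    for x, p in enumerate(row)])
    return out
```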
For example, the encoder or decoder selects one of multiple candidate reference lines of sample values outside the current block (including the non-adjacent reference line of sample values) and filters the selected reference line of sample values. Filtering of the selected reference line can improve the effectiveness of intra-picture prediction for the current block by smoothing outlier values among the sample values of the selected reference line. According to another aspect of the innovations described herein, an encoder or decoder performs weighted prediction during intra-picture prediction with multiple candidate reference lines available. For example, for each of multiple reference lines, the encoder or decoder generates an intermediate predicted sample value at a given position in the current block, using at least one sample value of that reference line, and applies a weight to the intermediate predicted sample value. This produces a weighted sample value at the given position in the current block. The encoder or decoder combines the weighted sample values (from intra-picture prediction with the respective reference lines) at the given position to produce a final predicted sample value at the given position in the current block. Weighted prediction can improve the effectiveness of intra-picture prediction for the current block by blending predicted sample values from different reference lines, when no single reference line provides better performance. According to another aspect of the innovations described herein, an encoder or decoder performs mode-dependent padding to replace one or more unavailable sample values during intra-picture prediction with multiple candidate reference lines available. For example, after selecting one of multiple candidate reference lines of sample values, the encoder or decoder determines that a sample value of the selected reference line is unavailable at a given position of the selected reference line. 
The encoder or decoder identifies a padding direction of an intra-picture prediction mode, then determines a sample value of another reference line on a projection through the given position of the selected reference line in the padding direction. The encoder or decoder sets the unavailable sample value at the given position of the selected reference line based at least in part on the determined sample value of the other reference line. Mode-dependent padding, compared to simple padding within a reference line, can yield padded sample values that provide more effective intra-picture prediction. According to another aspect of the innovations described herein, an encoder or decoder performs intra-picture prediction, in some cases, with in-loop-filtered reference sample values. For example, when the encoder or decoder selects a non-adjacent reference line of sample values for use in intra-picture prediction for a current block, at least some of the sample values of the selected reference line may have been modified by in-loop filtering prior to use in the intra-picture prediction for the current block. So long as the reference sample values are not dependent on sample values of the current block (or another block that has not yet been reconstructed), in-loop filtering of the reference sample values can improve the effectiveness of subsequent intra-picture prediction using the reference sample values. According to another aspect of the innovations described herein, an encoder uses any of various approaches to select one or more reference lines to use in intra-picture prediction. In a computationally efficient manner, these approaches can identify appropriate reference lines to use in the intra-picture prediction. According to another aspect of the innovations described herein, an encoder or decoder filters predicted sample values during intra-picture prediction. 
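The weighted prediction described earlier in this summary, with one intermediate predicted sample value per reference line combined under per-line weights, reduces to a weighted average at each sample position. A minimal integer sketch (the weight values themselves are an encoder choice, not specified here):

```python
def weighted_prediction(intermediate_preds, weights):
    """Combine per-reference-line intermediate predictions for one sample
    position into a final predicted sample value, with codec-style integer
    rounding before the divide by the weight total."""
    total = sum(weights)
    acc = sum(p * w for p, w in zip(intermediate_preds, weights))
    return (acc + total // 2) // total
```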
For example, after selecting one or more reference lines of sample values outside a current block, the encoder or decoder predicts the sample values of the current block using at least some sample values of the one or more selected reference lines. Then, the encoder or decoder filters at least some of the predicted sample values of the current block, using at least some sample values that are outside the current block and outside an adjacent reference line. By using reference sample values outside the adjacent reference line, in some cases, the filtering yields predicted sample values that are closer to the original sample values. According to another aspect of the innovations described herein, an encoder or decoder filters reference sample values with direction-dependent filtering during intra-picture prediction. For example, after selecting a reference line of sample values outside a current block, the encoder or decoder filters the selected reference line of sample values. The filtering adapts to differences in a set of sample values along a prediction direction for the intra-picture prediction, where at least some of the set of sample values is outside the current block and outside an adjacent reference line. In some cases, such filtering yields reference sample values that provide more effective intra-picture prediction. The innovations can be implemented as part of a method, as part of a computer system configured to perform operations for the method, or as part of one or more computer-readable media storing computer-executable instructions for causing a computer system to perform the operations for the method. The various innovations can be used in combination or separately. This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. 
This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. The foregoing and other objects, features, and advantages of the invention will become more apparent from the following detailed description, which proceeds with reference to the accompanying figures. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram illustrating an example computer system in which some described embodiments can be implemented. FIGS. 2a and 2b are diagrams illustrating example network environments in which some described embodiments can be implemented. FIG. 3 is a diagram illustrating an example video encoder system in conjunction with which some described embodiments can be implemented. FIGS. 4a and 4b are diagrams illustrating an example video encoder in conjunction with which some described embodiments can be implemented. FIG. 5 is a diagram of an example decoder system in conjunction with which some described embodiments can be implemented. FIG. 6 is a diagram illustrating an example video decoder in conjunction with which some described embodiments can be implemented. FIG. 7 is a diagram illustrating examples of angular intra-picture prediction modes in some described embodiments. FIGS. 8-10 are diagrams illustrating examples of operations for intra-picture prediction modes in some described embodiments. FIG. 11 is a diagram illustrating an example of filtering of predicted sample values. FIG. 12 is a diagram illustrating examples of sample values of adjacent reference lines. FIG. 13 is a diagram illustrating examples of multiple candidate reference lines of sample values available for intra-picture prediction of a current block. FIGS. 14 and 15 are flowcharts illustrating generalized techniques for encoding and decoding, respectively, using intra-picture prediction with multiple candidate reference lines available, including a non-adjacent reference line. 
FIGS. 16a and 16b are diagrams illustrating examples of intra-picture prediction with copying of sample values from non-adjacent reference lines to adjacent reference lines. FIGS. 17a and 17b are diagrams illustrating examples of intra-picture prediction with sample values of non-adjacent reference lines crossing over offset regions. FIGS. 18a and 18b are diagrams illustrating examples of filters for sample values of reference lines. FIG. 19 is a flowchart illustrating a generalized technique for filtering sample values of a reference line. FIGS. 20a-20l are diagrams illustrating examples of residue compensation during intra-picture prediction. FIGS. 21 and 22 are flowcharts illustrating generalized techniques for intra-picture prediction with residue compensation during encoding and decoding, respectively. FIG. 23 is a diagram illustrating an example of weighted prediction during intra-picture prediction with multiple reference lines. FIGS. 24 and 25 are flowcharts illustrating generalized techniques for encoding and decoding, respectively, using intra-picture prediction with weighted prediction. FIG. 26 is a flowchart illustrating an example technique for weighted prediction during intra-picture prediction with multiple reference lines. FIGS. 27 and 28 are diagrams illustrating examples of mode-dependent padding to replace unavailable sample values. FIGS. 29 and 30 are flowcharts illustrating generalized techniques for encoding and decoding, respectively, using intra-picture prediction with mode-dependent padding. FIG. 31 is a flowchart illustrating an example technique for mode-dependent padding to replace an unavailable sample value during intra-picture prediction. FIG. 32 is a diagram illustrating an example of in-loop-filtered reference sample values used during intra-picture prediction. FIGS. 
33 and 34 are flowcharts illustrating generalized techniques for encoding and decoding, respectively, using intra-picture prediction that uses in-loop-filtered reference sample values. FIGS. 35-37 are flowcharts illustrating example techniques for selecting, during encoding, which reference lines to use for intra-picture prediction. FIGS. 38 and 39 are diagrams illustrating examples of post-filtering of predicted sample values. FIG. 40 is a flowchart illustrating a generalized technique for post-filtering of predicted sample values during encoding or decoding for a current block. FIGS. 41 and 42 are diagrams illustrating examples of adaptive, direction-dependent filtering of reference sample values. FIG. 43 is a flowchart illustrating a generalized technique for adaptive, direction-dependent filtering of reference sample values during encoding or decoding. DETAILED DESCRIPTION The detailed description presents innovations in intra-picture prediction with multiple candidate reference lines available. For example, intra-picture prediction uses a non-adjacent reference line of sample values to predict the sample values of a current block. This can improve the effectiveness of the intra-picture prediction when the reference line of sample values that is adjacent the current block includes significant capture noise, significant quantization error, or significantly different sample values (compared to the current block) due to an occlusion. Innovations described herein include, but are not limited to, the following: intra-picture prediction with multiple candidate reference lines available; encoding/decoding of reference line indices using prediction; filtering of reference sample values; residue compensation; weighted prediction; mode-dependent padding to replace unavailable reference sample values; using in-loop-filtered reference sample values; encoder-side decisions for selecting reference lines; and post-filtering of predicted sample values. 
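As a concrete illustration of the core technique recapped above, namely predicting a block from a reference line that need not be adjacent, here is a sketch of pure vertical intra prediction with a selectable reference row; the 2-D picture layout and parameter names are assumptions for illustration:

```python
def vertical_predict(picture, top, left, size, line_offset=0):
    """Predict a size x size block (top-left corner at [top][left]) by
    copying reference samples straight down (vertical intra mode).
    line_offset selects the reference row: 0 is the row adjacent to the
    block, larger values select non-adjacent rows further above it."""
    ref_row = top - 1 - line_offset
    return [[picture[ref_row][left + x] for x in range(size)]
            for _ in range(size)]
```

With line_offset greater than 0, the adjacent row is bypassed entirely, which is useful when it carries capture noise, quantization error, or an occluding object.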
Some of the innovations described herein are illustrated with reference to terms specific to the H.265 standard, or extensions or variations of the H.265 standard. The innovations described herein can also be implemented for extensions or variations of other video codec standards or formats (e.g., the VP9 format, H.264 standard), including future video codec standards or formats that permit the use of non-adjacent reference lines for intra-picture prediction. In the examples described herein, identical reference numbers in different figures indicate an identical component, module, or operation. Depending on context, a given component or module may accept a different type of information as input and/or produce a different type of information as output, or be processed in a different way. More generally, various alternatives to the examples described herein are possible. For example, some of the methods described herein can be altered by changing the ordering of the method acts described, by splitting, repeating, or omitting certain method acts, etc. The various aspects of the disclosed technology can be used in combination or separately. Different embodiments use one or more of the described innovations. Some of the innovations described herein address one or more of the problems noted in the background. Typically, a given technique/tool does not solve all such problems. I. Example Computer Systems. FIG. 1 illustrates a generalized example of a suitable computer system (100) in which several of the described innovations may be implemented. The computer system (100) is not intended to suggest any limitation as to scope of use or functionality, as the innovations may be implemented in diverse general-purpose or special-purpose computer systems. With reference to FIG. 1, the computer system (100) includes one or more processing units (110, 115) and memory (120, 125). The processing units (110, 115) execute computer-executable instructions. 
A processing unit can be a general-purpose central processing unit (“CPU”), processor in an application-specific integrated circuit (“ASIC”) or any other type of processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. For example, FIG. 1 shows a CPU (110) as well as a graphics processing unit or co-processing unit (115). The tangible memory (120, 125) may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s). The memory (120, 125) stores software (180) implementing one or more innovations for intra-picture prediction with non-adjacent reference lines of sample values available, in the form of computer-executable instructions suitable for execution by the processing unit(s). A computer system may have additional features. For example, the computer system (100) includes storage (140), one or more input devices (150), one or more output devices (160), and one or more communication connections (170). An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computer system (100). Typically, operating system software (not shown) provides an operating environment for other software executing in the computer system (100), and coordinates activities of the components of the computer system (100). The tangible storage (140) may be removable or non-removable, and includes magnetic media such as magnetic disks, magnetic tapes or cassettes, optical media such as CD-ROMs or DVDs, or any other medium which can be used to store information and which can be accessed within the computer system (100). The storage (140) stores instructions for the software (180) implementing one or more innovations for intra-picture prediction with non-adjacent reference lines of sample values available. 
The input device(s) (150) may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computer system (100). For video, the input device(s) (150) may be a camera, video card, screen capture module, TV tuner card, or similar device that accepts video input in analog or digital form, or a CD-ROM or CD-RW that reads video input into the computer system (100). The output device(s) (160) may be a display, printer, speaker, CD-writer, or other device that provides output from the computer system (100). The communication connection(s) (170) enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can use an electrical, optical, RF, or other carrier. The innovations can be described in the general context of computer-readable media. Computer-readable media are any available tangible media that can be accessed within a computing environment. By way of example, and not limitation, with the computer system (100), computer-readable media include memory (120, 125), storage (140), and combinations thereof. Thus, the computer-readable media can be, for example, volatile memory, non-volatile memory, optical media, or magnetic media. As used herein, the term computer-readable media does not include transitory signals or propagating carrier waves. The innovations can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computer system on a target real or virtual processor. 
Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computer system. The terms “system” and “device” are used interchangeably herein. Unless the context clearly indicates otherwise, neither term implies any limitation on a type of computer system or computing device. In general, a computer system or computing device can be local or distributed, and can include any combination of special-purpose hardware and/or general-purpose hardware with software implementing the functionality described herein. The disclosed methods can also be implemented using specialized computing hardware configured to perform any of the disclosed methods. For example, the disclosed methods can be implemented by an integrated circuit (e.g., an ASIC such as an ASIC digital signal processor (“DSP”), a graphics processing unit (“GPU”), or a programmable logic device (“PLD”) such as a field programmable gate array (“FPGA”)) specially designed or configured to implement any of the disclosed methods. For the sake of presentation, the detailed description uses terms like “select” and “determine” to describe computer operations in a computer system. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation. II. Example Network Environments. FIGS. 2a and 2b show example network environments (201, 202) that include video encoders (220) and video decoders (270). 
The encoders (220) and decoders (270) are connected over a network (250) using an appropriate communication protocol. The network (250) can include the Internet or another computer network. In the network environment (201) shown in FIG. 2a, each real-time communication (“RTC”) tool (210) includes both an encoder (220) and a decoder (270) for bidirectional communication. A given encoder (220) can produce output compliant with a variation or extension of the H.265/HEVC standard, SMPTE 421M standard, ISO/IEC 14496-10 standard (also known as H.264/AVC), another standard, or a proprietary format such as VP8 or VP9, with a corresponding decoder (270) accepting encoded data from the encoder (220). The bidirectional communication can be part of a video conference, video telephone call, or other two-party or multi-party communication scenario. Although the network environment (201) in FIG. 2a includes two real-time communication tools (210), the network environment (201) can instead include three or more real-time communication tools (210) that participate in multi-party communication. A real-time communication tool (210) manages encoding by an encoder (220). FIG. 3 shows an example encoder system (300) that can be included in the real-time communication tool (210). Alternatively, the real-time communication tool (210) uses another encoder system. A real-time communication tool (210) also manages decoding by a decoder (270). FIG. 5 shows an example decoder system (500) that can be included in the real-time communication tool (210). Alternatively, the real-time communication tool (210) uses another decoder system. In the network environment (202) shown in FIG. 2b, an encoding tool (212) includes an encoder (220) that encodes video for delivery to multiple playback tools (214), which include decoders (270). 
The unidirectional communication can be provided for a video surveillance system, web camera monitoring system, remote desktop conferencing presentation or sharing, wireless screen casting, cloud computing or gaming, or other scenario in which video is encoded and sent from one location to one or more other locations. Although the network environment (202) in FIG. 2b includes two playback tools (214), the network environment (202) can include more or fewer playback tools (214). In general, a playback tool (214) communicates with the encoding tool (212) to determine a stream of video for the playback tool (214) to receive. The playback tool (214) receives the stream, buffers the received encoded data for an appropriate period, and begins decoding and playback. FIG. 3 shows an example encoder system (300) that can be included in the encoding tool (212). Alternatively, the encoding tool (212) uses another encoder system. The encoding tool (212) can also include server-side controller logic for managing connections with one or more playback tools (214). A playback tool (214) can include client-side controller logic for managing connections with the encoding tool (212). FIG. 5 shows an example decoder system (500) that can be included in the playback tool (214). Alternatively, the playback tool (214) uses another decoder system. III. Example Encoder Systems. FIG. 3 shows an example video encoder system (300) in conjunction with which some described embodiments may be implemented. The video encoder system (300) includes a video encoder (340) that uses intra-picture prediction with non-adjacent reference lines available for the intra-picture prediction. The encoder (340) is further detailed in FIGS. 4a and 4b. 
The video encoder system (300) can be a general-purpose encoding tool capable of operating in any of multiple encoding modes such as a low-latency encoding mode for real-time communication, a transcoding mode, and a higher-latency encoding mode for producing media for playback from a file or stream, or it can be a special-purpose encoding tool adapted for one such encoding mode. The video encoder system (300) can be adapted for encoding of a particular type of content. The video encoder system (300) can be implemented as part of an operating system module, as part of an application library, as part of a standalone application, or using special-purpose hardware. Overall, the video encoder system (300) receives a sequence of source video pictures (311) from a video source (310) and produces encoded data as output to a channel (390). The encoded data output to the channel can include content encoded using one or more of the innovations described herein. The video source (310) can be a camera, tuner card, storage media, screen capture module, or other digital video source. The video source (310) produces a sequence of video pictures at a frame rate of, for example, 30 frames per second. As used herein, the term “picture” generally refers to source, coded or reconstructed image data. For progressive-scan video, a picture is a progressive-scan video frame. For interlaced video, an interlaced video frame might be de-interlaced prior to encoding. Alternatively, two complementary interlaced video fields are encoded together as a single video frame or encoded as two separately-encoded fields. Aside from indicating a progressive-scan video frame or interlaced-scan video frame, the term “picture” can indicate a single non-paired video field, a complementary pair of video fields, a video object plane that represents a video object at a given time, or a region of interest in a larger image. 
The video object plane or region can be part of a larger image that includes multiple objects or regions of a scene. An arriving source picture (311) is stored in a source picture temporary memory storage area (320) that includes multiple picture buffer storage areas (321, 322, . . . , 32n). A picture buffer (321, 322, etc.) holds one source picture in the source picture storage area (320). After one or more of the source pictures (311) have been stored in picture buffers (321, 322, etc.), a picture selector (330) selects an individual source picture from the source picture storage area (320) to encode as the current picture (331). The order in which pictures are selected by the picture selector (330) for input to the video encoder (340) may differ from the order in which the pictures are produced by the video source (310), e.g., the encoding of some pictures may be delayed in order, so as to allow some later pictures to be encoded first and to thus facilitate temporally backward prediction. Before the video encoder (340), the video encoder system (300) can include a pre-processor (not shown) that performs pre-processing (e.g., filtering) of the current picture (331) before encoding. The pre-processing can include color space conversion into primary (e.g., luma) and secondary (e.g., chroma differences toward red and toward blue) components and resampling processing (e.g., to reduce the spatial resolution of chroma components) for encoding. In general, a pixel is the set of one or more collocated sample values for a location in a picture, which may be arranged in different ways for different chroma sampling formats. The video encoder (340) encodes the current picture (331) to produce a coded picture (341). As shown in FIGS. 4a and 4b, the video encoder (340) receives the current picture (331) as an input video signal (405) and produces encoded data for the coded picture (341) in a coded video bitstream (495) as output. 
As part of the encoding, the video encoder (340) in some cases uses one or more features of intra-picture prediction as described herein. Generally, the video encoder (340) includes multiple encoding modules that perform encoding tasks such as partitioning into tiles, intra-picture prediction estimation and prediction, motion estimation and compensation, frequency transforms, quantization, and entropy coding. Many of the components of the video encoder (340) are used for both intra-picture coding and inter-picture coding. The exact operations performed by the video encoder (340) can vary depending on compression format and can also vary depending on optional implementation decisions of the encoder. The format of the output encoded data can be a variation or extension of Windows Media Video format, VC-1 format, MPEG-x format (e.g., MPEG-1, MPEG-2, or MPEG-4), H.26x format (e.g., H.261, H.262, H.263, H.264, H.265), or VPx format, or another format.

As shown in FIG. 4a, the video encoder (340) can include a tiling module (410). With the tiling module (410), the video encoder (340) can partition a picture into multiple tiles of the same size or different sizes. For example, the tiling module (410) splits the picture along tile rows and tile columns that, with picture boundaries, define horizontal and vertical boundaries of tiles within the picture, where each tile is a rectangular region. Tiles are often used to provide options for parallel processing. A picture can also be organized as one or more slices, where a slice can be an entire picture or section of the picture. A slice can be decoded independently of other slices in a picture, which improves error resilience. The content of a slice or tile is further partitioned into blocks or other sets of sample values for purposes of encoding and decoding. Blocks may be further sub-divided at different stages, e.g., at the prediction, frequency transform and/or entropy encoding stages. 
For example, a picture can be divided into 64×64 blocks, 32×32 blocks, or 16×16 blocks, which can in turn be divided into smaller blocks of sample values for coding and decoding. For syntax according to the H.264/AVC standard, the video encoder (340) can partition a picture into one or more slices of the same size or different sizes. The video encoder (340) splits the content of a picture (or slice) into 16×16 macroblocks. A macroblock includes luma sample values organized as four 8×8 luma blocks and corresponding chroma sample values organized as 8×8 chroma blocks. Generally, a macroblock has a prediction mode such as inter or intra. A macroblock includes one or more prediction units (e.g., 8×8 blocks, 4×4 blocks, which may be called partitions for inter-picture prediction) for purposes of signaling of prediction information (such as prediction mode details, motion vector (“MV”) information, etc.) and/or prediction processing. A macroblock also has one or more residual data units for purposes of residual coding/decoding. For syntax according to the H.265/HEVC standard, the video encoder (340) splits the content of a picture (or slice or tile) into coding tree units. A coding tree unit (“CTU”) includes luma sample values organized as a luma coding tree block (“CTB”) and corresponding chroma sample values organized as two chroma CTBs. The size of a CTU (and its CTBs) is selected by the video encoder. A luma CTB can contain, for example, 64×64, 32×32, or 16×16 luma sample values. A CTU includes one or more coding units. A coding unit (“CU”) has a luma coding block (“CB”) and two corresponding chroma CBs. For example, according to quadtree syntax, a CTU with a 64×64 luma CTB and two 64×64 chroma CTBs (YUV 4:4:4 format) can be split into four CUs, with each CU including a 32×32 luma CB and two 32×32 chroma CBs, and with each CU possibly being split further into smaller CUs according to quadtree syntax. 
Or, as another example, according to quadtree syntax, a CTU with a 64×64 luma CTB and two 32×32 chroma CTBs (YUV 4:2:0 format) can be split into four CUs, with each CU including a 32×32 luma CB and two 16×16 chroma CBs, and with each CU possibly being split further into smaller CUs according to quadtree syntax. In H.265/HEVC implementations, a CU has a prediction mode such as inter or intra. A CU typically includes one or more prediction units for purposes of signaling of prediction information (such as prediction mode details, displacement values, etc.) and/or prediction processing. A prediction unit (“PU”) has a luma prediction block (“PB”) and two chroma PBs. According to the H.265/HEVC standard, for an intra-picture-predicted CU, the PU has the same size as the CU, unless the CU has the smallest size (e.g., 8×8). In that case, the CU can be split into smaller PUs (e.g., four 4×4 PUs if the smallest CU size is 8×8, for intra-picture prediction) or the PU can have the smallest CU size, as indicated by a syntax element for the CU. For an inter-picture-predicted CU, the CU can have one, two, or four PUs, where splitting into four PUs is allowed only if the CU has the smallest allowable size. In H.265/HEVC implementations, a CU also typically has one or more transform units for purposes of residual coding/decoding, where a transform unit (“TU”) has a luma transform block (“TB”) and two chroma TBs. A CU may contain a single TU (equal in size to the CU) or multiple TUs. According to quadtree syntax, a TU can be split into four smaller TUs, which may in turn be split into smaller TUs according to quadtree syntax. The video encoder decides how to partition video into CTUs (CTBs), CUs (CBs), PUs (PBs) and TUs (TBs). In H.265/HEVC implementations, a slice can include a single slice segment (independent slice segment) or be divided into multiple slice segments (independent slice segment and one or more dependent slice segments). 
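The recursive quadtree splitting described above can be sketched as follows. The `should_split` callback is hypothetical, standing in for the encoder's decision (e.g., based on rate-distortion cost) of how to partition a CTB into CBs; the sketch only illustrates the quadtree structure itself.

```python
def quadtree_partition(x, y, size, min_size, should_split):
    """Recursively partition a square block (e.g., a luma CTB) into
    leaf coding blocks, as in H.265/HEVC quadtree syntax.

    should_split(x, y, size) is a hypothetical encoder decision
    callback; returning True splits the block into four quadrants.
    Returns a list of (x, y, size) leaf blocks.
    """
    if size > min_size and should_split(x, y, size):
        half = size // 2
        blocks = []
        for dy in (0, half):
            for dx in (0, half):
                blocks += quadtree_partition(x + dx, y + dy, half,
                                             min_size, should_split)
        return blocks
    return [(x, y, size)]

# Example: split any block larger than 32x32. A 64x64 CTB is split
# once into four 32x32 leaf blocks.
leaves = quadtree_partition(0, 0, 64, 8, lambda x, y, s: s > 32)
# leaves == [(0, 0, 32), (32, 0, 32), (0, 32, 32), (32, 32, 32)]
```

The same recursion also describes TU splitting, with the minimum size and split decision differing.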
A slice segment is an integer number of CTUs ordered consecutively in a tile scan, contained in a single network abstraction layer (“NAL”) unit. For an independent slice segment, a slice segment header includes values of syntax elements that apply for the independent slice segment. For a dependent slice segment, a truncated slice segment header includes a few values of syntax elements that apply for that dependent slice segment, and the values of the other syntax elements for the dependent slice segment are inferred from the values for the preceding independent slice segment in decoding order. As used herein, the term “block” can indicate a macroblock, residual data unit, CTB, CB, PB or TB, or some other set of sample values, depending on context. The term “unit” can indicate a macroblock, CTU, CU, PU, TU or some other set of blocks, or it can indicate a single block, depending on context. As shown in FIG. 4a, the video encoder (340) includes a general encoding control (420), which receives the input video signal (405) for the current picture (331) as well as feedback (not shown) from various modules of the video encoder (340). Overall, the general encoding control (420) provides control signals (not shown) to other modules, such as the tiling module (410), transformer/scaler/quantizer (430), scaler/inverse transformer (435), intra-picture prediction estimator (440), motion estimator (450), and intra/inter switch, to set and change coding parameters during encoding. The general encoding control (420) can evaluate intermediate results during encoding, typically considering bit rate costs and/or distortion costs for different options. In particular, the general encoding control (420) decides whether to use intra-picture prediction or inter-picture prediction for the units of the current picture (331). 
If inter-picture prediction is used for a unit, in conjunction with the motion estimator (450), the general encoding control (420) decides which reference picture(s) to use for the inter-picture prediction. The general encoding control (420) determines which reference pictures to retain in a decoded picture buffer (“DPB”) or other buffer. The general encoding control (420) produces general control data (422) that indicates decisions made during encoding, so that a corresponding decoder can make consistent decisions. The general control data (422) is provided to the header formatter/entropy coder (490). With reference to FIG. 4b, if a unit of the current picture (331) is predicted using inter-picture prediction, a motion estimator (450) estimates the motion of blocks of sample values of the unit with respect to one or more reference pictures. The current picture (331) can be entirely or partially coded using inter-picture prediction. When multiple reference pictures are used, the multiple reference pictures can be from different temporal directions or the same temporal direction. The motion estimator (450) potentially evaluates candidate MVs in a contextual motion mode as well as other candidate MVs. For contextual motion mode, as candidate MVs for the unit, the motion estimator (450) evaluates one or more MVs that were used in motion compensation for certain neighboring units in a local neighborhood or one or more MVs derived by rules. The candidate MVs for contextual motion mode can include MVs from spatially adjacent units, MVs from temporally adjacent units, and MVs derived by rules. Merge mode in the H.265/HEVC standard is an example of contextual motion mode. In some cases, a contextual motion mode can involve a competition among multiple derived MVs and selection of one of the multiple derived MVs. 
The motion estimator (450) can evaluate different partition patterns for motion compensation for partitions of a given unit of the current picture (331) (e.g., 2N×2N, 2N×N, N×2N, or N×N partitions for PUs of a CU in the H.265/HEVC standard). The DPB (470), which is an example of decoded picture temporary memory storage area (360) as shown in FIG. 3, buffers one or more reconstructed previously coded pictures for use as reference pictures. The motion estimator (450) produces motion data (452) as side information. In particular, the motion data (452) can include information that indicates whether contextual motion mode (e.g., merge mode in the H.265/HEVC standard) is used and, if so, the candidate MV for contextual motion mode (e.g., merge mode index value in the H.265/HEVC standard). More generally, the motion data (452) can include MV data and reference picture selection data. The motion data (452) is provided to the header formatter/entropy coder (490) as well as the motion compensator (455). The motion compensator (455) applies MV(s) for a block to the reconstructed reference picture(s) from the DPB (470) or other buffer. For the block, the motion compensator (455) produces a motion-compensated prediction, which is a region of sample values in the reference picture(s) that are used to generate motion-compensated prediction values for the block. With reference to FIG. 4b, if a unit of the current picture (331) is predicted using intra-picture prediction, an intra-picture prediction estimator (440) determines how to perform intra-picture prediction for blocks of sample values of the unit. The current picture (331) can be entirely or partially coded using intra-picture prediction. 
Using values of a reconstruction (438) of the current picture (331), for intra spatial prediction, the intra-picture prediction estimator (440) determines how to spatially predict sample values of a block of the current picture (331) from previously reconstructed sample values of the current picture (331), e.g., selecting an intra-picture prediction mode and one or more reference lines of sample values. The intra-picture prediction estimator (440) can use, for example, one of the approaches described herein to make encoder-side decisions for intra-picture prediction, e.g., which reference lines to use. The intra-picture prediction estimator (440) can also make other decisions for intra-picture prediction, e.g., whether to use weighted prediction, how to perform filtering of reference sample values and/or predicted sample values. Thus, the intra-picture prediction estimator (440) can use one or more of the features of intra-picture prediction described below, e.g., intra-picture prediction with multiple candidate reference lines available, determining predictors of reference line indices, weighted prediction, residue compensation, mode-dependent padding to replace unavailable sample values, filtering of reference sample values and/or predicted sample values. Or, for intra block copy mode, the intra-picture prediction estimator (440) determines how to predict sample values of a block of the current picture (331) using an offset (sometimes called a block vector) that indicates a previously encoded/decoded portion of the current picture (331). Intra block copy mode can be implemented as a special case of inter-picture prediction in which the reference picture is the current picture (331), and only previously encoded/decoded sample values of the current picture (331) can be used for prediction. 
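One concrete option for filtering reference sample values described in this document is a three-tap symmetric low-pass filter with a normalization factor of 4. A minimal sketch follows; the [1, 2, 1] kernel and the pass-through treatment of the end samples are illustrative assumptions.

```python
def filter_reference_line(line):
    """Smooth a reference line of sample values with a three-tap
    symmetric low-pass filter ([1, 2, 1] kernel, normalization
    factor 4). End samples, which lack one neighbor, are left
    unfiltered in this sketch. Integer arithmetic with a rounding
    offset of 2 keeps the result in the sample value range."""
    if len(line) < 3:
        return list(line)
    out = [line[0]]
    for i in range(1, len(line) - 1):
        out.append((line[i - 1] + 2 * line[i] + line[i + 1] + 2) // 4)
    out.append(line[-1])
    return out

# Example: a step edge in the reference line is softened.
smoothed = filter_reference_line([10, 10, 10, 50, 50, 50])
# smoothed == [10, 10, 20, 40, 50, 50]
```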
As side information, the intra-picture prediction estimator (440) produces intra prediction data (442), such as the prediction mode/direction used, reference line indices that identify reference lines of sample values used, and other decisions. The intra prediction data (442) is provided to the header formatter/entropy coder (490) as well as the intra-picture predictor (445). According to the intra prediction data (442), the intra-picture predictor (445) spatially predicts sample values of a block of the current picture (331) from previously reconstructed sample values of the current picture (331), producing intra-picture predicted sample values for the block. In doing so, the intra-picture predictor (445) can use one or more of the features of intra-picture prediction described below, e.g., intra-picture prediction with multiple candidate reference lines available, weighted prediction, residue compensation, mode-dependent padding to replace unavailable sample values, filtering of reference sample values and/or predicted sample values. Or, the intra-picture predictor (445) predicts sample values of the block using intra block copy prediction, using an offset (block vector) for the block. As shown in FIG. 4b, the intra/inter switch selects whether the predictions (458) for a given unit will be motion-compensated predictions or intra-picture predictions. Intra/inter switch decisions for units of the current picture (331) can be made using various criteria. The video encoder (340) can determine whether or not to encode and transmit the differences (if any) between a block's prediction values (intra or inter) and corresponding original values. The differences (if any) between a block of the prediction (458) and a corresponding part of the original current picture (331) of the input video signal (405) provide values of the residual (418). 
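How a selected reference line feeds directional prediction can be sketched with the vertical prediction mode. The function name and the convention that a reference line index counts rows away from the block (index 0 being the adjacent row) are illustrative assumptions; the point is that an index of 1 or greater selects a non-adjacent reference line.

```python
def predict_vertical(recon, bx, by, size, ref_line_index):
    """Vertical intra-picture prediction for a size x size block at
    (bx, by), using a selected reference line above the block.

    recon holds previously reconstructed sample values of the
    current picture. ref_line_index 0 selects the adjacent row
    (by - 1); larger indices select non-adjacent reference lines
    farther from the block. Each predicted row copies the reference
    sample directly above it.
    """
    ref_row = by - 1 - ref_line_index
    return [[recon[ref_row][bx + i] for i in range(size)]
            for _ in range(size)]

# Example: predict a 2x2 block at (2, 2) from the non-adjacent
# reference line one row beyond the adjacent row.
recon = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
pred = predict_vertical(recon, 2, 2, 2, 1)
# pred == [[3, 4], [3, 4]]  (samples copied from row 0)
```

With index 0 the same call would copy samples 7 and 8 from the adjacent row instead; selecting the non-adjacent line can help when the adjacent line is noisy or occluded, as described above.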
If encoded/transmitted, the values of the residual (418) are encoded using a frequency transform (if the frequency transform is not skipped), quantization, and entropy encoding. In some cases, no residual is calculated for a unit. Instead, residual coding is skipped, and the predicted sample values are used as the reconstructed sample values. With reference to FIG. 4a, when values of the residual (418) are encoded, in the transformer/scaler/quantizer (430), a frequency transformer converts spatial-domain video information into frequency-domain (i.e., spectral, transform) data. For block-based video coding, the frequency transformer applies a discrete cosine transform (“DCT”), an integer approximation thereof, or another type of forward block transform (e.g., a discrete sine transform or an integer approximation thereof) to blocks of values of the residual (418) (or sample value data if the prediction (458) is null), producing blocks of frequency transform coefficients. The transformer/scaler/quantizer (430) can apply a transform with variable block sizes. In this case, the transformer/scaler/quantizer (430) can determine which block sizes of transforms to use for the residual values for a current block. For example, in H.265/HEVC implementations, the transformer/scaler/quantizer (430) can split a TU by quadtree decomposition into four smaller TUs, each of which may in turn be split into four smaller TUs, down to a minimum TU size. TU size can be 32×32, 16×16, 8×8, or 4×4 (referring to the size of the luma TB in the TU). In H.265/HEVC implementations, the frequency transform can be skipped. In this case, values of the residual (418) can be quantized and entropy coded. With reference to FIG. 4a, in the transformer/scaler/quantizer (430), a scaler/quantizer scales and quantizes the transform coefficients. 
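The forward block transform can be sketched with a floating-point separable 2D DCT-II; as noted above, encoders apply an integer approximation, but the structure is the same.

```python
import math

def dct_2d(block):
    """Forward 2D DCT-II (orthonormal) of a square residual block,
    computed separably: 1D transforms of the rows, then of the
    columns of the row-transformed data."""
    n = len(block)

    def dct_1d(v):
        out = []
        for k in range(n):
            s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                    for i in range(n))
            scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
            out.append(scale * s)
        return out

    rows = [dct_1d(r) for r in block]
    cols = [dct_1d([rows[i][j] for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# A flat residual block concentrates all its energy in the DC
# coefficient; every other coefficient is (numerically) zero.
coeffs = dct_2d([[4] * 4 for _ in range(4)])
# coeffs[0][0] == 16.0 (within floating-point tolerance)
```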
For example, the quantizer applies dead-zone scalar quantization to the frequency-domain data with a quantization step size that varies on a picture-by-picture basis, tile-by-tile basis, slice-by-slice basis, block-by-block basis, frequency-specific basis, or other basis. The quantization step size can depend on a quantization parameter (“QP”), whose value is set for a picture, tile, slice, and/or other portion of video. When quantizing transform coefficients, the video encoder (340) can use rate-distortion-optimized quantization (“RDOQ”), which is very time-consuming, or apply simpler quantization rules. The quantized transform coefficient data (432) is provided to the header formatter/entropy coder (490). If the frequency transform is skipped, the scaler/quantizer can scale and quantize the blocks of prediction residual data (or sample value data if the prediction (458) is null), producing quantized values that are provided to the header formatter/entropy coder (490). As shown in FIGS. 4a and 4b, the header formatter/entropy coder (490) formats and/or entropy codes the general control data (422), quantized transform coefficient data (432), intra prediction data (442), motion data (452), and filter control data (462). The entropy coder of the video encoder (340) compresses quantized transform coefficient values as well as certain side information (e.g., MV information, QP values, mode decisions, reference line indices, parameter choices, filter parameters). Typical entropy coding techniques include Exponential-Golomb coding, Golomb-Rice coding, context-adaptive binary arithmetic coding (“CABAC”), differential coding, Huffman coding, run length coding, variable-length-to-variable-length (“V2V”) coding, variable-length-to-fixed-length (“V2F”) coding, Lempel-Ziv (“LZ”) coding, dictionary coding, and combinations of the above. 
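Dead-zone scalar quantization can be sketched as follows. The step size and the rounding offset of 1/6 are illustrative choices; in a real encoder the step size is derived from QP, and the offset may be tuned.

```python
import math

def quantize_dead_zone(coeffs, step, offset=1.0 / 6):
    """Dead-zone scalar quantization of transform coefficients.

    level = floor(|c| / step + offset), with the sign restored.
    An offset below 0.5 widens the zone around zero that quantizes
    to level 0 (the "dead zone") relative to plain rounding.
    """
    levels = []
    for c in coeffs:
        sign = -1 if c < 0 else 1
        levels.append(sign * math.floor(abs(c) / step + offset))
    return levels

def dequantize(levels, step):
    """Inverse quantization (scaling), as a decoder would perform it."""
    return [lv * step for lv in levels]

# Example with step size 10: 37 maps to level 3, but the small
# coefficients 4 and -6 fall into the dead zone and become 0.
levels = quantize_dead_zone([37, 4, -12, -6], 10)
# levels == [3, 0, -1, 0]
```

The widened zero bin is what makes dead-zone quantization cheaper in rate than plain rounding at the cost of slightly larger distortion on small coefficients.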
The entropy coder can use different coding techniques for different kinds of information, can apply multiple techniques in combination (e.g., by applying Exponential-Golomb coding or Golomb-Rice coding as binarization for CABAC), and can choose from among multiple code tables within a particular coding technique. Reference line indices for intra-picture prediction can be predictively encoded using predictors, as described below. The video encoder (340) produces encoded data for the coded picture (341) in an elementary bitstream, such as the coded video bitstream (495) shown in FIG. 4a. In FIG. 4a, the header formatter/entropy coder (490) provides the encoded data in the coded video bitstream (495). The syntax of the elementary bitstream is typically defined in a codec standard or format, or extension or variation thereof. For example, the format of the coded video bitstream (495) can be a variation or extension of Windows Media Video format, VC-1 format, MPEG-x format (e.g., MPEG-1, MPEG-2, or MPEG-4), H.26x format (e.g., H.261, H.262, H.263, H.264, H.265), VPx format, or another format. After output from the video encoder (340), the elementary bitstream is typically packetized or organized in a container format, as explained below. The encoded data in the elementary bitstream includes syntax elements organized as syntax structures. In general, a syntax element can be any element of data, and a syntax structure is zero or more syntax elements in the elementary bitstream in a specified order. In the H.264/AVC standard and H.265/HEVC standard, a network abstraction layer (“NAL”) unit is a syntax structure that contains (1) an indication of the type of data to follow and (2) a series of zero or more bytes of the data. For example, a NAL unit can contain encoded data for a slice (coded slice). The size of the NAL unit (in bytes) is indicated outside the NAL unit. 
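Zero-order Exponential-Golomb coding, one of the techniques named above (and usable as a binarization for CABAC), can be sketched as follows.

```python
def exp_golomb_encode(value):
    """Zero-order Exponential-Golomb code for a non-negative integer:
    write (value + 1) in binary, preceded by a number of zero bits
    equal to its bit length minus one."""
    bits = bin(value + 1)[2:]          # binary representation of value + 1
    return "0" * (len(bits) - 1) + bits

def exp_golomb_decode(code):
    """Inverse: count the leading zeros, then read that many bits
    plus one as a binary number and subtract one."""
    zeros = len(code) - len(code.lstrip("0"))
    return int(code[zeros:zeros + zeros + 1], 2) - 1

# First few codes: 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100'.
codes = [exp_golomb_encode(v) for v in range(4)]
# codes == ['1', '010', '011', '00100']
```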
Coded slice NAL units and certain other defined types of NAL units are termed video coding layer (“VCL”) NAL units. An access unit is a set of one or more NAL units, in consecutive bitstream order, containing the encoded data for the slice(s) of a picture, and possibly containing other associated data such as metadata. For syntax according to the H.264/AVC standard or H.265/HEVC standard, a picture parameter set (“PPS”) is a syntax structure that contains syntax elements that may be associated with a picture. A PPS can be used for a single picture, or a PPS can be reused for multiple pictures in a sequence. A PPS is typically signaled separate from encoded data for a picture (e.g., one NAL unit for a PPS, and one or more other NAL units for encoded data for a picture). Within the encoded data for a picture, a syntax element indicates which PPS to use for the picture. Similarly, for syntax according to the H.264/AVC standard or H.265/HEVC standard, a sequence parameter set (“SPS”) is a syntax structure that contains syntax elements that may be associated with a sequence of pictures. A bitstream can include a single SPS or multiple SPSs. An SPS is typically signaled separate from other data for the sequence, and a syntax element in the other data indicates which SPS to use. As shown in FIG. 3, the video encoder (340) also produces memory management control operation (“MMCO”) signals (342) or reference picture set (“RPS”) information. The RPS is the set of pictures that may be used for reference in motion compensation for a current picture or any subsequent picture. If the current picture (331) is not the first picture that has been encoded, when performing its encoding process, the video encoder (340) may use one or more previously encoded/decoded pictures (369) that have been stored in a decoded picture temporary memory storage area (360). 
Such stored decoded pictures (369) are used as reference pictures for inter-picture prediction of the content of the current picture (331). The MMCO/RPS information (342) indicates to a video decoder which reconstructed pictures may be used as reference pictures, and hence should be stored in a picture storage area such as the DPB (470) in FIGS. 4a and 4b. With reference to FIG. 3, the coded picture (341) and MMCO/RPS information (342) (or information equivalent to the MMCO/RPS information (342), since the dependencies and ordering structures for pictures are already known at the video encoder (340)) are processed by a decoding process emulator (350). The decoding process emulator (350) implements some of the functionality of a video decoder, for example, decoding tasks to reconstruct reference pictures. In a manner consistent with the MMCO/RPS information (342), the decoding process emulator (350) determines whether a given coded picture (341) needs to be reconstructed and stored for use as a reference picture in inter-picture prediction of subsequent pictures to be encoded. If a coded picture (341) needs to be stored (and possibly modified), the decoding process emulator (350) models the decoding process that would be conducted by a video decoder that receives the coded picture (341) and produces a corresponding decoded picture (351). In doing so, when the video encoder (340) has used decoded picture(s) (369) that have been stored in the decoded picture storage area (360), the decoding process emulator (350) also uses the decoded picture(s) (369) from the storage area (360) as part of the decoding process. The decoding process emulator (350) may be implemented as part of the video encoder (340). For example, the decoding process emulator (350) includes certain modules and logic as shown in FIGS. 4a and 4b. 
During reconstruction of the current picture (331), when values of the residual (418) have been encoded/signaled, reconstructed residual values are combined with the prediction (458) to produce an approximate or exact reconstruction (438) of the original content from the video signal (405) for the current picture (331). (In lossy compression, some information is lost from the video signal (405).) With reference to FIG. 4a, to reconstruct residual values, in the scaler/inverse transformer (435), a scaler/inverse quantizer performs inverse scaling and inverse quantization on the quantized transform coefficients. When the transform stage has not been skipped, an inverse frequency transformer performs an inverse frequency transform, producing blocks of reconstructed prediction residual values or sample values. If the transform stage has been skipped, the inverse frequency transform is also skipped. In this case, the scaler/inverse quantizer can perform inverse scaling and inverse quantization on blocks of prediction residual data (or sample value data), producing reconstructed values. When residual values have been encoded/signaled, the video encoder (340) combines reconstructed residual values with values of the prediction (458) (e.g., motion-compensated prediction values, intra-picture prediction values) to form the reconstruction (438). When residual values have not been encoded/signaled, the video encoder (340) uses the values of the prediction (458) as the reconstruction (438). With reference to FIGS. 4a and 4b, for intra-picture prediction, the values of the reconstruction (438) can be fed back to the intra-picture prediction estimator (440) and intra-picture predictor (445). The values of the reconstruction (438) can be used for motion-compensated prediction of subsequent pictures. The values of the reconstruction (438) can be further filtered. 
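The combining step described above amounts to adding reconstructed residual values to prediction values, sample by sample, and clipping the result to the valid sample range (or, when no residual was signaled, using the prediction values directly). A minimal sketch, assuming 8-bit sample values and a list-of-lists block representation; the function name and data layout are illustrative, not taken from the patent:

```python
# Illustrative sketch (not the patent's implementation) of forming the
# reconstruction (438): prediction values plus reconstructed residual
# values, clipped to the valid sample range. Assumes 8-bit samples.

def reconstruct_block(prediction, residual=None, bit_depth=8):
    """Combine a block of prediction values with residual values.

    If residual values were not encoded/signaled (residual is None),
    the prediction values are used directly as the reconstruction.
    """
    max_val = (1 << bit_depth) - 1
    if residual is None:
        return [row[:] for row in prediction]
    return [
        [min(max_val, max(0, p + r)) for p, r in zip(prow, rrow)]
        for prow, rrow in zip(prediction, residual)
    ]

# Example: a 2x2 block; large residual values are clipped to [0, 255].
pred = [[100, 102], [101, 103]]
res = [[-3, 5], [200, -120]]
print(reconstruct_block(pred, res))  # [[97, 107], [255, 0]]
```

In lossy compression the residual values themselves are approximate after quantization, so the reconstruction is approximate; clipping only keeps the result within the representable sample range.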
A filtering control (460) determines how to perform deblock filtering and sample adaptive offset (“SAO”) filtering on values of the reconstruction (438), for the current picture (331). For example, the filtering control (460) can determine how to perform in-loop filtering of reference sample values of non-adjacent reference lines prior to subsequent intra-picture prediction, as described below. The filtering control (460) produces filter control data (462), which is provided to the header formatter/entropy coder (490) and merger/filter(s) (465). In the merger/filter(s) (465), the video encoder (340) merges content from different tiles into a reconstructed version of the current picture. The video encoder (340) selectively performs deblock filtering and SAO filtering according to the filter control data (462) and rules for filter adaptation, so as to adaptively smooth discontinuities across boundaries in the current picture (331). The video encoder (340) can also perform in-loop filtering (e.g., deblock filtering and/or SAO filtering) during intra-picture coding, to filter reference sample values of non-adjacent reference lines prior to subsequent intra-picture prediction for a current block, as described below. Other filtering (such as de-ringing filtering or adaptive loop filtering (“ALF”); not shown) can alternatively or additionally be applied. Tile boundaries can be selectively filtered or not filtered at all, depending on settings of the video encoder (340), and the video encoder (340) may provide syntax elements within the coded bitstream to indicate whether or not such filtering was applied. In FIGS. 4a and 4b, the DPB (470) buffers the reconstructed current picture for use in subsequent motion-compensated prediction. More generally, as shown in FIG. 3, the decoded picture temporary memory storage area (360) includes multiple picture buffer storage areas (361, 362, . . . , 36n). 
In a manner consistent with the MMCO/RPS information (342), the decoding process emulator (350) manages the contents of the storage area (360) in order to identify any picture buffers (361, 362, etc.) with pictures that are no longer needed by the video encoder (340) for use as reference pictures. After modeling the decoding process, the decoding process emulator (350) stores a newly decoded picture (351) in a picture buffer (361, 362, etc.) that has been identified in this manner. As shown in FIG. 3, the coded picture (341) and MMCO/RPS information (342) are buffered in a temporary coded data area (370). The coded data that is aggregated in the coded data area (370) contains, as part of the syntax of the elementary bitstream, encoded data for one or more pictures. The coded data that is aggregated in the coded data area (370) can also include media metadata relating to the coded video data (e.g., as one or more parameters in one or more supplemental enhancement information (“SEI”) messages or video usability information (“VUI”) messages). The aggregated data (371) from the temporary coded data area (370) is processed by a channel encoder (380). The channel encoder (380) can packetize and/or multiplex the aggregated data for transmission or storage as a media stream (e.g., according to a media program stream or transport stream format such as ITU-T H.222.0 ISO/IEC 13818-1 or an Internet real-time transport protocol format such as IETF RFC 3550), in which case the channel encoder (380) can add syntax elements as part of the syntax of the media transmission stream. Or, the channel encoder (380) can organize the aggregated data for storage as a file (e.g., according to a media container format such as ISO/IEC 14496-12), in which case the channel encoder (380) can add syntax elements as part of the syntax of the media storage file. 
Or, more generally, the channel encoder (380) can implement one or more media system multiplexing protocols or transport protocols, in which case the channel encoder (380) can add syntax elements as part of the syntax of the protocol(s). The channel encoder (380) provides output to a channel (390), which represents storage, a communications connection, or another channel for the output. The channel encoder (380) or channel (390) may also include other elements (not shown), e.g., for forward-error correction (“FEC”) encoding and analog signal modulation. Depending on implementation and the type of compression desired, modules of the video encoder system (300) and/or video encoder (340) can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, encoder systems or encoders with different modules and/or other configurations of modules perform one or more of the described techniques. Specific embodiments of encoder systems typically use a variation or supplemented version of the video encoder system (300). Specific embodiments of video encoders typically use a variation or supplemented version of the video encoder (340). The relationships shown between modules within the video encoder system (300) and video encoder (340) indicate general flows of information in the video encoder system (300) and video encoder (340), respectively; other relationships are not shown for the sake of simplicity. In general, a given module of the video encoder system (300) or video encoder (340) can be implemented by software executable on a CPU, by software controlling special-purpose hardware (e.g., graphics hardware for video acceleration), or by special-purpose hardware (e.g., in an ASIC). IV. Example Decoder Systems. FIG. 5 is a block diagram of an example video decoder system (500) in conjunction with which some described embodiments may be implemented. 
The video decoder system (500) includes a video decoder (550), which is further detailed in FIG. 6. The video decoder system (500) can be a general-purpose decoding tool capable of operating in any of multiple decoding modes such as a low-latency decoding mode for real-time communication, a transcoding mode, and a higher-latency decoding mode for media playback from a file or stream, or it can be a special-purpose decoding tool adapted for one such decoding mode. The video decoder system (500) can be implemented as part of an operating system module, as part of an application library, as part of a standalone application or using special-purpose hardware. Overall, the video decoder system (500) receives coded data from a channel (510) and produces reconstructed pictures as output for an output destination (590). The received encoded data can include content encoded using one or more of the innovations described herein. The decoder system (500) includes a channel (510), which can represent storage, a communications connection, or another channel for coded data as input. The channel (510) produces coded data that has been channel coded. A channel decoder (520) can process the coded data. For example, the channel decoder (520) de-packetizes and/or demultiplexes data that has been organized for transmission or storage as a media stream (e.g., according to a media program stream or transport stream format such as ITU-T H.222.0 ISO/IEC 13818-1 or an Internet real-time transport protocol format such as IETF RFC 3550), in which case the channel decoder (520) can parse syntax elements added as part of the syntax of the media transmission stream. Or, the channel decoder (520) separates coded video data that has been organized for storage as a file (e.g., according to a media container format such as ISO/IEC 14496-12), in which case the channel decoder (520) can parse syntax elements added as part of the syntax of the media storage file. 
Or, more generally, the channel decoder (520) can implement one or more media system demultiplexing protocols or transport protocols, in which case the channel decoder (520) can parse syntax elements added as part of the syntax of the protocol(s). The channel (510) or channel decoder (520) may also include other elements (not shown), e.g., for FEC decoding and analog signal demodulation. The coded data (521) that is output from the channel decoder (520) is stored in a temporary coded data area (530) until a sufficient quantity of such data has been received. The coded data (521) includes coded pictures (531) and MMCO/RPS information (532). The coded data (521) in the coded data area (530) contains, as part of the syntax of an elementary coded video bitstream, coded data for one or more pictures. The coded data (521) in the coded data area (530) can also include media metadata relating to the encoded video data (e.g., as one or more parameters in one or more SEI messages or VUI messages). In general, the coded data area (530) temporarily stores coded data (521) until such coded data (521) is used by the video decoder (550). At that point, coded data for a coded picture (531) and MMCO/RPS information (532) are transferred from the coded data area (530) to the video decoder (550). As decoding continues, new coded data is added to the coded data area (530) and the oldest coded data remaining in the coded data area (530) is transferred to the video decoder (550). The video decoder (550) decodes a coded picture (531) to produce a corresponding decoded picture (551). As shown in FIG. 6, the video decoder (550) receives the coded picture (531) as input as part of a coded video bitstream (605), and the video decoder (550) produces the corresponding decoded picture (551) as output as reconstructed video (695). As part of the decoding, the video decoder (550) in some cases uses one or more features of intra-picture prediction as described herein. 
Generally, the video decoder (550) includes multiple decoding modules that perform decoding tasks such as entropy decoding, inverse quantization, inverse frequency transforms, motion compensation, intra-picture prediction, and filtering. Many of the components of the decoder (550) are used for both intra-picture decoding and inter-picture decoding. The exact operations performed by those components can vary depending on the type of information being decompressed. The format of the coded video bitstream (605) can be a variation or extension of Windows Media Video format, VC-1 format, MPEG-x format (e.g., MPEG-1, MPEG-2, or MPEG-4), H.26x format (e.g., H.261, H.262, H.263, H.264, H.265), or VPx format, or another format. A picture can be organized into multiple tiles of the same size or different sizes. A picture can also be organized as one or more slices. The content of a slice or tile can be further organized as blocks or other sets of sample values. Blocks may be further sub-divided at different stages. For example, a picture can be divided into 64×64 blocks, 32×32 blocks or 16×16 blocks, which can in turn be divided into smaller blocks of sample values. In implementations of decoding for the H.264/AVC standard, for example, a picture is divided into macroblocks and blocks. In implementations of decoding for the H.265/HEVC standard, for example, a picture is partitioned into CTUs (CTBs), CUs (CBs), PUs (PBs) and TUs (TBs). With reference to FIG. 6, a buffer receives encoded data in the coded video bitstream (605) and makes the received encoded data available to the parser/entropy decoder (610). The parser/entropy decoder (610) entropy decodes entropy-coded data, typically applying the inverse of entropy coding performed in the encoder (340) (e.g., context-adaptive binary arithmetic decoding with binarization using Exponential-Golomb or Golomb-Rice). 
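As context for the Exponential-Golomb binarization mentioned above, the following is a minimal sketch of unsigned Exponential-Golomb (ue(v)) decoding as standardized for H.264/H.265 syntax elements: count k leading zero bits, skip the terminating 1 bit, read k suffix bits, and return 2^k − 1 plus the suffix. The helper is illustrative, not the patent's parser:

```python
# Illustrative decoder for unsigned Exponential-Golomb ("ue(v)")
# codewords, one of the binarizations named above. A codeword has
# k leading zeros, a 1 bit, then k suffix bits.

def decode_ue(bits, pos=0):
    """Decode one ue(v) value from a list of 0/1 bits starting at pos.

    Returns (value, next_pos).
    """
    k = 0
    while bits[pos] == 0:          # count leading zero bits
        k += 1
        pos += 1
    pos += 1                       # skip the terminating 1 bit
    suffix = 0
    for _ in range(k):             # read k suffix bits
        suffix = (suffix << 1) | bits[pos]
        pos += 1
    return (1 << k) - 1 + suffix, pos

# Codeword "00110" encodes 5: k=2, suffix=0b10=2, (2**2 - 1) + 2 = 5.
value, _ = decode_ue([0, 0, 1, 1, 0])
print(value)  # 5
```

Short codewords map to small values (the codeword "1" alone decodes to 0), which is why ue(v) suits syntax elements whose small values are most common.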
As a result of parsing and entropy decoding, the parser/entropy decoder (610) produces general control data (622), quantized transform coefficient data (632), intra prediction data (642) (e.g., intra-picture prediction modes, reference line indices), motion data (652), and filter control data (662). Reference line indices for intra-picture prediction can be decoded using predictors, as described below. The general decoding control (620) receives the general control data (622). The general decoding control (620) provides control signals (not shown) to other modules (such as the scaler/inverse transformer (635), intra-picture predictor (645), motion compensator (655), and intra/inter switch) to set and change decoding parameters during decoding. With reference to FIG. 5, as appropriate, when performing its decoding process, the video decoder (550) may use one or more previously decoded pictures (569) as reference pictures for inter-picture prediction. The video decoder (550) reads such previously decoded pictures (569) from a decoded picture temporary memory storage area (560), which is, for example, DPB (670). With reference to FIG. 6, if the current picture is predicted using inter-picture prediction, a motion compensator (655) receives the motion data (652), such as MV data, reference picture selection data and merge mode index values. The motion compensator (655) applies MVs to the reconstructed reference picture(s) from the DPB (670). The motion compensator (655) produces motion-compensated predictions for inter-coded blocks of the current picture. In a separate path within the video decoder (550), the intra-picture predictor (645) receives the intra prediction data (642), such as information indicating the prediction mode/direction used and reference line indices. 
For intra spatial prediction, using values of a reconstruction (638) of the current picture, according to the prediction mode/direction and one or more reference line indices, the intra-picture predictor (645) spatially predicts sample values of a current block of the current picture from previously reconstructed sample values of the current picture. In doing so, the intra-picture predictor (645) can use one or more of the features of intra-picture prediction described below, e.g., intra-picture prediction with multiple candidate reference lines available, weighted prediction, residue compensation, mode-dependent padding to replace unavailable sample values, filtering of reference sample values and/or predicted sample values. Or, for intra block copy mode, the intra-picture predictor (645) predicts the sample values of a current block using previously reconstructed sample values of a reference block, which is indicated by an offset (block vector) for the current block. The intra/inter switch selects values of a motion-compensated prediction or intra-picture prediction for use as the prediction (658) for a given block. For example, when H.265/HEVC syntax is followed, the intra/inter switch can be controlled based on a syntax element encoded for a CU of a picture that can contain intra-predicted CUs and inter-predicted CUs. When residual values have been encoded/signaled, the video decoder (550) combines the prediction (658) with reconstructed residual values to produce the reconstruction (638) of the content from the video signal. When residual values have not been encoded/signaled, the video decoder (550) uses the values of the prediction (658) as the reconstruction (638). The video decoder (550) also reconstructs prediction residual values. To reconstruct the residual when residual values have been encoded/signaled, the scaler/inverse transformer (635) receives and processes the quantized transform coefficient data (632). 
In the scaler/inverse transformer (635), a scaler/inverse quantizer performs inverse scaling and inverse quantization on the quantized transform coefficients. The scaler/inverse transformer (635) sets values for QP for a picture, tile, slice and/or other portion of video based on syntax elements in the bitstream. An inverse frequency transformer performs an inverse frequency transform, producing blocks of reconstructed prediction residual values or sample values. For example, the inverse frequency transformer applies an inverse block transform to frequency transform coefficients, producing sample value data or prediction residual data. The inverse frequency transform can be an inverse DCT, an integer approximation thereof, or another type of inverse frequency transform (e.g., an inverse discrete sine transform or an integer approximation thereof). If the frequency transform was skipped during encoding, the inverse frequency transform is also skipped. In this case, the scaler/inverse quantizer can perform inverse scaling and inverse quantization on blocks of prediction residual data (or sample value data), producing reconstructed values. The video decoder (550) combines reconstructed prediction residual values with prediction values of the prediction (658), producing values of the reconstruction (638). For intra-picture prediction, the values of the reconstruction (638) can be fed back to the intra-picture predictor (645). For inter-picture prediction (and, in some cases, intra-picture prediction with non-adjacent reference lines), the values of the reconstruction (638) can be further filtered. In the merger/filter(s) (665), the video decoder (550) merges content from different tiles into a reconstructed version of the picture. The video decoder (550) selectively performs deblock filtering and SAO filtering according to the filter control data (662) and rules for filter adaptation, so as to adaptively smooth discontinuities across boundaries in the pictures. 
For example, the video decoder (550) can perform in-loop filtering of reference sample values of non-adjacent reference lines prior to subsequent intra-picture prediction, as described below. Other filtering (such as de-ringing filtering or ALF; not shown) can alternatively or additionally be applied. Tile boundaries can be selectively filtered or not filtered at all, depending on settings of the video decoder (550) or a syntax element within the encoded bitstream data. The DPB (670) buffers the reconstructed current picture for use as a reference picture in subsequent motion-compensated prediction. The video decoder (550) can also include a post-processing filter. The post-processing filter can include deblock filtering, de-ringing filtering, adaptive Wiener filtering, film-grain reproduction filtering, SAO filtering or another kind of filtering. Whereas “in-loop” filtering is performed on reconstructed sample values of pictures in a motion compensation loop (or, in some cases, an intra-picture prediction loop), and hence affects sample values of reference pictures, the post-processing filter is applied to reconstructed sample values outside of the motion compensation loop and intra-picture prediction loop, before output for display. With reference to FIG. 5, the decoded picture temporary memory storage area (560) includes multiple picture buffer storage areas (561, 562, . . . , 56n). The decoded picture storage area (560) is, for example, the DPB (670). The decoder (550) uses the MMCO/RPS information (532) to identify a picture buffer (561, 562, etc.) in which it can store a decoded picture (551). The decoder (550) stores the decoded picture (551) in that picture buffer. In a manner consistent with the MMCO/RPS information (532), the decoder (550) also determines whether to remove any reference pictures from the multiple picture buffer storage areas (561, 562, . . . , 56n). 
An output sequencer (580) identifies when the next picture to be produced in display order (also called output order) is available in the decoded picture storage area (560). When the next picture (581) to be produced in display order is available in the decoded picture storage area (560), it is read by the output sequencer (580) and output to the output destination (590) (e.g., display). In general, the order in which pictures are output from the decoded picture storage area (560) by the output sequencer (580) (display order) may differ from the order in which the pictures are decoded by the decoder (550) (bitstream order). Depending on implementation and the type of decompression desired, modules of the video decoder system (500) and/or video decoder (550) can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, decoder systems or decoders with different modules and/or other configurations of modules perform one or more of the described techniques. Specific embodiments of decoder systems typically use a variation or supplemented version of the video decoder system (500). Specific embodiments of video decoders typically use a variation or supplemented version of the video decoder (550). The relationships shown between modules within the video decoder system (500) and video decoder (550) indicate general flows of information in the video decoder system (500) and video decoder (550), respectively; other relationships are not shown for the sake of simplicity. In general, a given module of the video decoder system (500) or video decoder (550) can be implemented by software executable on a CPU, by software controlling special-purpose hardware (e.g., graphics hardware for video acceleration), or by special-purpose hardware (e.g., in an ASIC). V. Innovations in Intra-Picture Prediction. This section describes various innovations in intra-picture prediction. 
For example, intra-picture prediction uses a non-adjacent reference line of sample values to predict the sample values of a current block, which can improve the effectiveness of the intra-picture prediction. Innovations described in this section include, but are not limited to: intra-picture prediction with multiple candidate reference lines available; encoding/decoding of reference line indices using prediction; filtering of reference sample values used in intra-picture prediction; residue compensation during intra-picture prediction; weighted prediction during intra-picture prediction; mode-dependent padding to replace unavailable reference sample values for intra-picture prediction; using in-loop-filtered reference sample values in intra-picture prediction; encoder-side decisions for selecting reference lines for intra-picture prediction; and post-filtering of predicted sample values after intra-picture prediction. A. Introduction to Intra-Picture Prediction. Intra-picture prediction exploits correlations between sample values in a region to set up effective compression of the sample values in the region. Intra-picture prediction can provide efficient compression for a region that depicts a uniform object or regular pattern (e.g., stripes or other straight lines). The H.264 standard defines nine intra-picture prediction modes. One of the intra-picture prediction modes (DC prediction mode) predicts the sample values of a current block using an average value of certain reference sample values adjacent to the current block. The other eight intra-picture prediction modes are directional prediction modes, also called angular prediction modes. According to a directional prediction mode in the H.264 standard, selected reference sample values (adjacent to the current block) are projected into the current block in order to predict the sample values of the current block. This provides effective compression when visual content follows the direction of propagation. 
The H.265 standard defines 35 intra-picture prediction modes, including a DC prediction mode, planar prediction mode, and 33 angular prediction modes. By adding a non-directional prediction mode (planar prediction mode) and fine-grained angular prediction modes, intra-picture prediction for the H.265 standard tends to be more effective than intra-picture prediction for the H.264 standard. FIG. 7 shows examples of 33 angular intra-picture prediction modes (700) that can be used for intra-picture prediction. The 33 angular intra-picture modes (700) shown in FIG. 7 are defined in the H.265 standard (for prediction using adjacent reference sample values), but can also be used in conjunction with one or more of the innovations described herein, e.g., intra-picture prediction using sample values of a non-adjacent reference line. Mode 26 has a vertical direction, and mode 10 has a horizontal direction. The other modes have various diagonal directions. FIG. 8 shows an example (800) of operations for a planar prediction mode, which is designed to predict sample values in a region with gradual change. For the planar prediction mode, a predicted sample value at a given position can be calculated as the average of a horizontally interpolated value and a vertically interpolated value. In the example (800) of FIG. 8, a predicted sample value predA is calculated at position A of the current block (810) as the weighted average of the reference sample values at positions B, C, D, and E. The weights depend on the location of position A in the current block (810). FIG. 8 shows reconstructed sample values of reference lines left of the current block (810) and reference lines above the current block (810), including reference sample values refB and refD at positions B and D, respectively. FIG. 8 also shows padded sample values below the current block (810) and right of the current block (810), including padded sample values refC and refE at positions C and E, respectively. 
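The planar computation described above, an average of a horizontal interpolation (between the left reference sample and a padded right-side value) and a vertical interpolation (between the above reference sample and a padded below value), with weights that depend on the position in the block, can be sketched as follows. The parameter names and integer rounding are illustrative, not taken from the patent:

```python
# Illustrative sketch of the planar-mode weighted average described
# above. ref_left/ref_above are reconstructed reference samples in the
# same row/column as the predicted position; ref_right/ref_below stand
# for the padded values on the far sides of the block.

def planar_predict(x, y, ref_left, ref_right, ref_above, ref_below,
                   block_size):
    """Predict the sample at (x, y) of a square block of the given size."""
    horizontal = x * ref_right + (block_size - x) * ref_left
    vertical = y * ref_below + (block_size - y) * ref_above
    return (horizontal + vertical) // (2 * block_size)

# Example: 8x8 block, position (2, 5); the result leans toward the
# left and below references, per the position-dependent weights.
print(planar_predict(2, 5, ref_left=100, ref_right=120,
                     ref_above=90, ref_below=110, block_size=8))  # 103
```

At position (0, 0) the weights reduce to the simple average of the left and above reference samples, matching the intuition that samples near the reconstructed edges track those edges most closely.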
The padded sample value refC is derived by horizontal repeat padding, since the line below the current block (810) has not yet been reconstructed. The padded sample value refE is derived by vertical repeat padding, since the line to the right of the current block (810) has not yet been reconstructed. Position A is at coordinates (x,y) of the current block (810). The predicted sample value predA at position A of the current block (810) is calculated as: predA=[x×refE+(block_size−x)×refB+y×refC+(block_size−y)×refD]/(2×block_size), where block_size is the size of the current block (810), which is square. (The horizontal term interpolates between refB, left of the current block (810), and refE, right of the current block (810); the vertical term interpolates between refD, above the current block (810), and refC, below the current block (810).) Although FIG. 8 shows prediction from reference sample values of an adjacent reference line, the planar prediction mode can instead use reference sample values from a non-adjacent line (row and/or column). (For simplicity, padded sample values can still be calculated for adjacent reference lines.) FIG. 9 shows an example (900) of operations for a DC prediction mode, which is designed to predict sample values of homogeneous regions. For the DC prediction mode, a predicted sample value at any position of a current block is calculated as the average of certain reference sample values in selected reference lines. In FIG. 9, the predicted sample values at positions of the current block (910) are calculated as the average of the indicated reference sample values (920, 930) of adjacent (boundary) reference lines. Although FIG. 9 shows prediction from reference sample values of adjacent reference lines, the DC prediction mode can instead use reference sample values from a non-adjacent line (row and/or column). FIG. 10 shows an example (1000) of operations for an angular prediction mode. The angular prediction mode shown in FIG. 10 corresponds to mode 33 in FIG. 7. In general, for an angular prediction mode, reference sample values in a reference line are propagated along a prediction direction. In FIG. 
10, for a given position A of the current block (1010), a predicted sample value predA is calculated along the prediction direction (1020) for the angular prediction mode, using linear interpolation between reference sample values refB and refC at positions B and C, respectively. Specifically, the predicted sample value predA is calculated as predA=(m×refC+n×refB)/(m+n), where m and n indicate fractional offsets between positions B and C for the predicted sample value predA at position A of the current block (1010). For example, if m=⅚ and n=⅙, then the interpolated value is closer to position C, and predA=((⅚)×refC+(⅙)×refB)/((⅚)+(⅙)). On the other hand, if m=¼ and n=¾, then the interpolated value is closer to position B, and predA=((¼)×refC+(¾)×refB)/((¼)+(¾)). The fractional offsets are distances between a position along the prediction direction (1020), where it crosses the reference line, and the two nearest reference sample values of the reference line (at positions B and C). Although FIG. 10 shows prediction from reference sample values of an adjacent reference line, an angular prediction mode can instead use reference sample values from a non-adjacent line (row or column). Predicted sample values can be filtered after intra-picture prediction, at least for some intra-picture prediction modes. In the H.265 standard, for example, predicted sample values of the topmost row of the current block can be filtered after horizontal prediction if the size of the current block is 16×16 or smaller, and predicted sample values of the leftmost column of the current block can be filtered after vertical prediction if the size of the current block is 16×16 or smaller. FIG. 11 shows an example (1100) of filtering of predicted sample values of a current block (1110) after horizontal intra-picture prediction. The predicted sample value predA is filtered using the reference sample values refB and refC at positions B and C, respectively: predA=predA+(refB−refC)/2. 
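The inverse-distance interpolation and the horizontal-prediction post-filter described above can be sketched as follows. Exact fractions are used so the ⅚/⅙ example from the text comes out exactly; the function names are illustrative, and a real codec would use fixed-point arithmetic:

```python
# Illustrative sketch of the fractional-offset interpolation used by
# angular modes, plus the post-prediction filtering shown for
# horizontal prediction (predA = predA + (refB - refC)/2).
from fractions import Fraction

def angular_interpolate(ref_b, ref_c, m, n):
    """Interpolate between two reference samples, where m is the
    distance from the crossing point to position B and n is the
    distance to position C (inverse-distance weighting)."""
    return (m * ref_c + n * ref_b) / (m + n)

def filter_after_horizontal(pred_a, ref_b, ref_c):
    """Adjust a predicted sample of the topmost row after horizontal
    prediction, using the nearby reference samples."""
    return pred_a + (ref_b - ref_c) / 2

# Crossing point 5/6 of the way from B toward C: the interpolated
# value is nearer refC.
print(angular_interpolate(60, 120, Fraction(5, 6), Fraction(1, 6)))  # 110
```

With m=¼ and n=¾ the same call yields 75 for these reference values, nearer refB, matching the second example in the text.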
After vertical intra-picture prediction, predicted sample values of the leftmost column of the current block (1110) could similarly be filtered using the reference sample value refC at position C and a reference sample value in the same row as the to-be-filtered predicted sample value of the current block (1110). After DC prediction, predicted sample values of the topmost row and leftmost column of the current block (1110) could be filtered using a filter [1, 3]/4 and the nearest reference sample value. Although FIG. 11 shows filtering of predicted sample values using reference sample values of an adjacent reference line, filtering of predicted sample values can also use reference sample values from a non-adjacent reference line, as described below. In FIGS. 8-11, intra-picture prediction uses reference sample values of adjacent reference lines. FIG. 12 shows examples (1200) of reference sample values of adjacent reference lines. In configurations in which reference lines are assigned reference line indices, the nearest reference line can be designated reference line 0. In FIG. 12, the reference sample values of adjacent reference lines include “above” sample values (1220) of reference line (row) 0, “above-right” sample values (1230) of reference line (row) 0, “left” sample values (1240) of reference line (column) 0, “below-left” sample values (1250) of reference line (column) 0, and a “top-left” sample value (1260) of reference line 0. Examples of intra-picture prediction using reference sample values of non-adjacent reference lines are described below. B. Examples of Intra-Picture Prediction with Multiple Candidate Reference Lines Available. In prior video codec standards and formats, intra-picture prediction for a current block uses only reference sample values of an adjacent reference line (e.g., nearest row above the current block and/or nearest column left of the current block). 
This reflects an assumption that correlation between the reference sample values and sample values of the current block is higher when the distance between reference lines and the current block is shorter. According to this assumption, the effectiveness of intra-picture prediction should improve (that is, prediction error should decrease) when the nearest reference sample values are used for intra-picture prediction. While that assumption may hold true in many cases, in other cases the reference sample values of the adjacent reference line may exhibit noise due to capture (i.e., capture noise) or compression (i.e., quantization error, reconstruction noise). In particular, for a given intra-picture-predicted block, reconstruction noise may increase in magnitude further away from the prediction edge (e.g., reconstruction noise may have higher magnitude at the bottom edge or right edge of the given block). In this case, when used as a reference line for intra-picture prediction of sample values of a later block, the far edge of the given block may be less suitable than an earlier line of the given block. Or, as another example, an object depicted in an adjacent reference line may, by coincidence, occlude an object shown in the current block, such that the reference sample values are very different from the sample values of the current block. Whatever the cause, in some cases, intra-picture prediction using reference sample values of an adjacent reference line is ineffective. This section describes variations of intra-picture prediction in which multiple reference lines, including non-adjacent reference lines, are available for use in the intra-picture prediction. These variations can improve the effectiveness of intra-picture prediction, such that the quality level of encoded video is improved for a given bitrate or such that bitrate of encoded video is reduced for a given quality level. FIG. 
13 shows examples (1300) of multiple candidate reference lines of sample values available for intra-picture prediction of a current block (1310). In general, the sample values of the current block (1310) are logically organized as multiple columns and multiple rows. In FIG. 13, reference lines are assigned reference line indices, which increase for reference lines further from the current block (1310) to represent further offsets from the current block (1310). The nearest reference line is designated reference line 0. When the reference line index is greater than 0 for the current block (1310), indicating a non-adjacent reference line, there is an offset between the reference sample values and the current block (1310). FIG. 13 shows N reference lines (rows) above the current block (1310), which are numbered 0 . . . N-1. FIG. 13 shows M reference lines (columns) to the left of the current block (1310), which are numbered 0 . . . M-1. The values of M and N depend on implementation. For example, M=N=8. Or, M=N=4. Or, M=N=16. Or, M and N depend on block size. For example, M and N are equal to the size of the current block or equal to the size of the current block divided by 2. In some example implementations, M and N can also have different values for the current block (1310). The multiple candidate reference lines shown in FIG. 13 include reference lines not adjacent to the current block (1310) (i.e., any reference line having a reference line index >0 in FIG. 13). More generally, a non-adjacent reference line of sample values can be (a) a non-adjacent column of sample values separated, by one or more other columns of sample values outside the current block (1310), from the multiple columns of the current block (1310), or (b) a non-adjacent row of sample values separated, by one or more other rows of sample values outside the current block (1310), from the multiple rows of the current block (1310). FIG.
14 shows a generalized technique (1400) for encoding that includes intra-picture prediction with multiple candidate reference lines available, including a non-adjacent reference line. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (1400). The encoder receives (1410) a picture, encodes (1420) the picture to produce encoded data, and outputs (1430) the encoded data as part of a bitstream. During the encoding (1420), the encoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. The encoding (1420) can include determining a predictor for a reference line index, where the reference line index identifies a reference line of sample values used in the intra-picture prediction for the current block. In this case, the predictor is used to encode the reference line index, and the reference line index is signaled as part of the encoded data (e.g., using a predictor flag, as a differential). Variations of prediction of reference line indices are described below. The encoding (1420) can also incorporate one or more of the intra-picture prediction features described below, including: filtering of reference sample values used in intra-picture prediction; residue compensation during intra-picture prediction; weighted prediction during intra-picture prediction; mode-dependent padding to replace unavailable reference sample values for intra-picture prediction; using in-loop-filtered reference sample values in intra-picture prediction; encoder-side decisions for selecting reference lines for intra-picture prediction; and post-filtering of predicted sample values after intra-picture prediction. The encoder checks (1440) whether to continue with the next picture. If so, the encoder receives (1410) and encodes (1420) the next picture. FIG. 
15 shows a generalized technique (1500) for corresponding decoding that includes intra-picture prediction with multiple candidate reference lines available, including a non-adjacent reference line. A decoder such as the video decoder (550) of FIG. 5, another video decoder, or an image decoder can perform the technique (1500). The decoder receives (1510) encoded data as part of a bitstream, decodes (1520) the encoded data to reconstruct a picture, and outputs (1530) the reconstructed picture. During the decoding (1520), the decoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. The decoding (1520) can include determining a predictor for a reference line index, where the reference line index identifies a reference line of sample values used in the intra-picture prediction for the current block. In this case, the predictor is used to decode the reference line index, and the reference line index is signaled as part of the encoded data (e.g., using a predictor flag, as a differential). Variations of prediction of reference line indices are described below. The decoding (1520) can also incorporate one or more of the intra-picture prediction features described below, including: filtering of reference sample values used in intra-picture prediction; residue compensation during intra-picture prediction; weighted prediction during intra-picture prediction; mode-dependent padding to replace unavailable reference sample values for intra-picture prediction; using in-loop-filtered reference sample values in intra-picture prediction; and post-filtering of predicted sample values after intra-picture prediction. The decoder checks (1540) whether to continue with the next picture. If so, the decoder receives (1510) encoded data for the next picture and decodes (1520) the next picture. 
As part of the intra-picture prediction (during the encoding (1420) or decoding (1520)), the encoder or decoder selects one or more of multiple candidate reference lines of sample values outside the current block, where the multiple candidate reference lines include at least one non-adjacent reference line of sample values. After selecting the reference line(s), the encoder or decoder predicts the sample values of the current block using at least some sample values of the selected reference line(s). The current block of sample values can be a luma block (including luma sample values) or chroma block (including chroma sample values). There are several approaches to performing operations for the intra-picture prediction using a non-adjacent reference line. Values of reference line indices can be signaled in various ways. 1. Examples of Selecting Reference Lines. During encoding (1420), the encoder selects reference lines to use in intra-picture prediction based on evaluation of the results of intra-picture prediction using different reference lines. Examples of approaches to deciding, during encoding, which reference lines to use for intra-picture prediction are presented below. During decoding (1520), the decoder selects reference lines to use in intra-picture prediction based on reference line indices that are signaled as part of encoded data. Examples of approaches to signaling reference line indices are presented below. Candidate reference lines can be rows or columns, and they can be identified with reference line indices. Candidate reference lines can include multiple reference rows above the current block and/or multiple reference columns left of the current block. In some cases, the encoder/decoder selects a single reference line to use in intra-picture prediction for the current block. A reference line index for the selected reference line can be signaled as part of the bitstream.
In other cases, the encoder/decoder selects multiple reference lines to use in intra-picture prediction for the current block, and reference line indices for the selected reference lines, respectively, can be signaled as part of the bitstream. When the encoder/decoder selects a reference row above the current block and a reference column left of the current block, the reference row and reference column can have the same reference line index (e.g., both reference line index 1, or both reference line index 2, with a single reference line index signaled), or they can have different reference line indices (e.g., reference line indices 1 and 2 for two reference lines, or reference line indices 3 and 0 for two reference lines, with multiple reference line indices signaled). 2. First Approach to Performing Intra-Picture Prediction Operations for a Non-Adjacent Reference Line. In a first approach to performing operations of intra-picture prediction, sample values are copied from non-adjacent reference lines to adjacent reference lines. After selecting a non-adjacent reference line of sample values for use in intra-picture prediction for the current block, the encoder/decoder replaces, by logical or physical copying operations, sample values of an adjacent reference line with at least some of the sample values of the non-adjacent reference line. The logical or physical copying operations shift some of the sample values of the non-adjacent reference line vertically or horizontally towards the current block. A sample value at a top-left position outside the current block can be calculated by aggregating multiple sample values (including at least one of the sample values of the non-adjacent reference line). Then, using at least some of the replaced sample values of the adjacent reference line, the encoder/decoder predicts the sample values of the current block. FIGS. 
16a and 16b illustrate examples (1601, 1602) of the first approach to performing operations of intra-picture prediction. In the example (1601) of FIG. 16a, sample values of reference line 3 are shifted towards the current block (1610) and used to replace sample values of reference line 0. The “above” and “above-right” sample values of reference line 0 are replaced by copying sample values of reference line 3 down, and the “left” and “below-left” sample values of reference line 0 are replaced by copying sample values of reference line 3 to the right. The “top-left” sample value of reference line 0 is calculated by aggregating (e.g., averaging, weighted interpolation) sample values in the top-left section of reference line 3. In the example (1601) of FIG. 16a, the selected reference line (reference line index 3) provides a reference row and reference column. In the example (1602) of FIG. 16b, the reference row and reference column come from different reference lines. The “above” and “above-right” sample values of reference line 0 are replaced by copying sample values of reference line 1 down, and the “left” and “below-left” sample values of reference line 0 are replaced by copying sample values of reference line 3 to the right. The “top-left” sample value of reference line 0 is calculated by aggregating (e.g., averaging, weighted interpolation) sample values in the top-left section of reference line (column) 3 and reference line (row) 1. For the sake of illustration, FIGS. 16a and 16b each show a non-adjacent reference column and non-adjacent reference row. Depending on the intra-picture prediction mode that is used, intra-picture prediction operations may use a single reference line (i.e., row or column) or multiple reference lines. Each reference line can be adjacent or non-adjacent to the current block (1610). 
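The logical copying of the first approach might be sketched as follows (plain Python lists stand in for reconstructed reference rows, and boundary details such as the aggregated top-left sample are simplified away):

```python
def replace_adjacent_row(ref_rows, k):
    """First-approach sketch: overwrite the adjacent reference row
    (index 0) with the samples of non-adjacent reference row k, i.e.,
    a logical vertical shift toward the current block. Intra-picture
    prediction then proceeds exactly as it would from an adjacent
    reference row.

    ref_rows[i] holds the reference samples of reference row i, where
    row 0 is the row adjacent to the current block.
    """
    ref_rows[0] = list(ref_rows[k])
    return ref_rows

rows = [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]
replace_adjacent_row(rows, 3)
print(rows[0])  # [4, 4, 4, 4]
```

The same shift applies horizontally for a reference column; the top-left sample would be derived by aggregating (e.g., averaging) nearby samples of the selected line(s), as described above.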
The first approach is relatively simple to harmonize with existing approaches that perform intra-picture prediction operations from adjacent reference lines. The adjacent reference line is, in effect, directly replaced by the non-adjacent reference line. After the replacement, intra-picture prediction operations are the same as for adjacent reference lines. Also, reference sample values can be filtered after the replacement process. On the other hand, using the first approach can harm the efficiency of coding intra-picture prediction mode information. In particular, the angle of prediction can change as non-adjacent reference lines shift towards the current block, which may change the intra-picture prediction mode selected for the current block. In typical intra-picture prediction, the direction of prediction tends to be uniform from block to block. This enables efficient coding of intra-picture prediction mode using the most probable mode. Having the prediction direction change for the current block compared to its neighboring blocks can make coding of intra-picture prediction modes less efficient. Another problem with the first approach is that certain features such as residue compensation from sample values in an offset region cannot be used. 3. Second Approach to Performing Intra-Picture Prediction Operations for a Non-Adjacent Reference Line. In a second approach to performing operations of intra-picture prediction, non-adjacent reference lines are not shifted towards a current block. Instead, after selecting a non-adjacent reference line of sample values for use in intra-picture prediction for a current block, the encoder/decoder predicts the sample values of the current block using at least some of the sample values of the non-adjacent reference line. The intra-picture prediction crosses an offset region between the non-adjacent reference line and the current block. 
According to the second approach, intra-picture prediction can also include predicting at least some sample values of the offset region using sample values of the non-adjacent reference line. The predicted sample values of the offset region can be used for residue compensation, as described below. FIGS. 17a and 17b illustrate examples (1701, 1702) of intra-picture prediction with sample values of non-adjacent reference lines crossing over offset regions. In the example (1701) of FIG. 17a, sample values of reference line 3 are used to predict the sample values of the current block (1710), crossing over the offset region (1721) between reference line 3 and the current block (1710). The offset region (1721) includes reconstructed sample values of reference lines 0, 1, and 2. For residue compensation, predicted sample values of the offset region (1721) can be calculated and compared to corresponding reconstructed sample values. In the example (1701) of FIG. 17a, the selected reference line (reference line index 3) provides a reference row and reference column. In the example (1702) of FIG. 17b, the reference row and reference column come from different reference lines. As such, the offset region (1722) in FIG. 17b has a different shape than the offset region (1721) in FIG. 17a. The offset region (1722) still includes reconstructed sample values, which can be compared to corresponding predicted sample values in the offset region (1722) for residue compensation. Alternatively, the calculation of predicted sample values of an offset region can be skipped (e.g., if residue compensation is not used). In some example implementations, the encoder or decoder predicts sample values of a prediction region that includes the current block as well as the offset region. 
The prediction region can be rectangular (or square), with a width equal to the width of the current block plus the offset (if any) between the current block and reference column, and with a height equal to the height of the current block plus the offset (if any) between the current block and reference row. For example, suppose the current block is a 4×4 block. If reference line 0 (adjacent reference column) is used to the left of the current block, and reference line 0 (adjacent reference row) is used above the current block, the prediction region is a 4×4 block. If reference line 1 is used to the left of the current block, and reference line 1 is used above the current block, the prediction region is a 5×5 block. If reference line 3 is used to the left of the current block, and reference line 1 is used above the current block, the prediction region is a 7×5 block. In any case, the predicted sample values outside the current block can be used in residue compensation. To isolate the predicted sample values of the current block, the encoder or decoder can clip the prediction region. In the second approach, when an area of uniform intra-picture prediction includes multiple blocks, a most probable intra-picture prediction mode remains consistent within the area despite differences in reference lines used for the multiple blocks. This preserves efficient coding of intra-picture prediction mode information, since the intra-picture prediction mode does not change from block-to-block. For the sake of illustration, FIGS. 17a and 17b each show a non-adjacent reference column and non-adjacent reference row. Depending on the intra-picture prediction mode that is used, intra-picture prediction operations may use a single reference line (i.e., row or column) or multiple reference lines. Each reference line can be adjacent or non-adjacent to the current block (1710). 4. Examples of Prediction and Signaling of Reference Line Indices. 
When multiple candidate reference lines are available for intra-picture prediction of a current block, the one or more reference lines that are actually used for intra-picture prediction can be signaled as part of encoded data in the bitstream. The number of reference line indices that are signaled can depend on which intra-picture prediction mode is used, the syntax level at which reference line indices are signaled, and whether other features of intra-picture prediction are used. When different reference line indices are signaled for a reference row and reference column, signaling of a reference line index can be skipped for some intra-picture prediction modes. For example, for angular prediction modes 2-9 in FIG. 7, only a reference column of sample values is used for intra-picture prediction, and signaling of a reference line index for a reference row can be skipped. For angular prediction modes 27-34 in FIG. 7, only a reference row of sample values is used for intra-picture prediction, and signaling of a reference line index for a reference column can be skipped. For non-directional prediction modes such as DC prediction mode and planar prediction mode, however, or for angular prediction modes 10-26 in FIG. 7, a reference row and a reference column of sample values are used for intra-picture prediction, and reference line indices are signaled for both. Alternatively, in implementations in which reference row and reference column necessarily use the same reference line index, that reference line index is signaled regardless of intra-picture prediction mode. Depending on implementation, a reference line index can be signaled for a coding unit, prediction unit, transform unit, or other unit. In that case, the reference line index applies for all blocks of the unit. 
For example, a reference line index is signaled for a unit that includes a luma block and chroma blocks, and the reference line index applies for all blocks of the unit, with chroma blocks reusing the reference line index of their collocated luma block. In other implementations, separate reference line indices can be signaled for different blocks of a given unit (e.g., luma and chroma blocks each having their own reference line index). In this case, a reference line index can be signaled for a coding block, prediction block, transform block, or other block of a given unit. Alternatively, a reference line index is only signaled for a luma block, and corresponding chroma blocks use a default reference line (e.g., reference line index 0, indicating adjacent reference sample values). When weighted prediction is used for intra-picture prediction with multiple reference lines, predicted sample values generated using different reference lines are blended to compute the predicted sample values of a current block. In this case, reference line indices can be signaled for multiple reference lines. Or, use of one or more of the multiple reference lines (e.g., reference line 0) in the weighted prediction can be assumed, such that signaling of reference line indices for the assumed reference line(s) is skipped. When a reference line index is signaled as part of encoded data, the reference line index can be entropy coded by the encoder and entropy decoded by the decoder. For example, the reference line index can be encoded using CABAC after binarization, with corresponding entropy decoding performed during decoding. Or, the reference line index can be entropy coded/decoded in some other way. A reference line index can be directly, explicitly signaled as part of encoded data. For example, the syntax element used to represent a reference line index can be coded like a reference picture index in the H.264 standard or H.265 standard. 
The binary sequence 0 can be used to indicate reference line 0, the binary sequence 10 can be used to indicate reference line 1, the binary sequence 110 can be used to indicate reference line 2, and so on. Generally, for reference index x, the binary sequence includes x ones followed by a zero. Alternatively, the binary sequence includes x zeros followed by a one. Such binary sequences can be entropy coded using CABAC or another form of entropy coding, with corresponding entropy decoding performed by a decoder. Or, when signaling a reference line index as part of encoded data, the reference line index can be predicted. In this case, the encoder determines a predictor for the reference line index. The predictor can be the reference line index of a neighboring block or unit (e.g., left, above, above-left). For example, the reference column index of a current block/unit is predicted using the reference column index of the above block/unit, and the reference row index of the current block/unit is predicted using the reference row index of the left block/unit. Alternatively, the encoder can calculate a given predictor based on multiple reference line indices for multiple neighboring blocks or units, respectively. For example, the predictor is the median of reference line indices of multiple neighboring blocks/units (e.g., left, above, above-right or above-left). Alternatively, the predictor is a default reference line index (e.g., reference line index of 0). Alternatively, the predictor is determined in some other way. During decoding, the corresponding decoder determines the predictor in the same way as the encoder. However the predictor is determined, the reference line index can be signaled in the encoded data using a predictor flag that indicates whether the reference line index equals the predictor. The syntax element for the predictor flag can be entropy coded/decoded, e.g., using CABAC and corresponding decoding. 
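The unary-style binarization just described (x ones followed by a terminating zero, for reference line index x) can be sketched as:

```python
def binarize_ref_line_index(x):
    """Unary binarization: x ones followed by a terminating zero."""
    return "1" * x + "0"

def parse_ref_line_index(bits):
    """Inverse: the index is the count of ones before the first zero."""
    return bits.index("0")

print(binarize_ref_line_index(2))   # '110'
print(parse_ref_line_index("110"))  # 2
```

In a real bitstream these bins would then be entropy coded (e.g., with CABAC) rather than written literally.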
During encoding, the encoder sets the value of the predictor flag depending on whether the predictor is used for intra-picture prediction. During decoding, if the predictor flag indicates the reference line index equals the predictor, the decoder uses the predictor for intra-picture prediction. If the predictor is not used for intra-picture prediction, another value is signaled as part of the encoded data to indicate the reference line index. The other value can be entropy coded/decoded. The other value can be a direct, explicit indication of the reference line index, or it can be a differential value. However the predictor is determined, the reference line index can be signaled in the encoded data using a differential value that indicates a difference between the reference line index and the predictor. The syntax element for the differential value can be entropy coded/decoded, e.g., using CABAC after binarization and corresponding decoding. During encoding, the encoder sets the differential value based on the difference between the reference line index and the predictor. During decoding, the decoder recovers the reference line index by combining the differential value and the predictor. As an alternative to separate signaling of a syntax element (e.g., direct value, predictor flag, differential value) for an individual reference line index, a syntax element can jointly indicate the reference line index and other information (e.g., an intra-picture prediction mode, a weight for weighted prediction, a filter decision). In particular, when weighted prediction is used, a non-zero weight for a reference line can implicitly indicate that the reference line is used for intra-picture prediction. Or, a single syntax element can jointly indicate multiple reference line indices (e.g., a combination of two or three reference lines selected from among the multiple candidate reference lines that are available). 
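One way the predictor flag and differential value could fit together (the symbol layout here is illustrative, not a normative syntax):

```python
def encode_ref_line_index(ref_idx, predictor):
    """If the reference line index equals the predictor, only a flag is
    signaled; otherwise the flag is followed by a differential value."""
    if ref_idx == predictor:
        return (1,)
    return (0, ref_idx - predictor)

def decode_ref_line_index(symbols, predictor):
    """Recover the reference line index from the flag (and differential),
    determining the predictor the same way the encoder did."""
    if symbols[0] == 1:
        return predictor
    return predictor + symbols[1]

# Round trip, with a predictor of 2 taken from a neighboring block/unit.
assert decode_ref_line_index(encode_ref_line_index(3, 2), 2) == 3
assert decode_ref_line_index(encode_ref_line_index(2, 2), 2) == 2
```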
In some example implementations, a table includes a weight for each of multiple candidate reference lines. For a given candidate reference line, a weight of 0 implicitly indicates the reference line is not used for intra-picture prediction. A non-zero weight implicitly indicates the reference line is used for intra-picture prediction. The table of weights can be explicitly signaled as part of encoded data, and the table can be entropy coded/decoded. Or, when multiple tables are defined at the encoder and decoder (e.g., predefined or defined through earlier operations or signaling), a table index can be explicitly signaled as part of encoded data. The table index can be entropy coded/decoded. Prediction can make signaling of table indices more effective (e.g., determining a predictor for table index, and using a predictor flag or differential value for the table index). C. Reference Line Filtering. This section describes examples of filtering of sample values of reference lines used for intra-picture prediction. Filtering of reference sample values can improve the effectiveness of intra-picture prediction for a current block by smoothing outlier values, which may be caused by capture noise or reconstruction noise. Filtering of reference sample values can be used in combination with one or more other innovations described herein. For example, reference sample values can be filtered as described in this section before weighted prediction and/or residue compensation. As another example, reference sample values filtered as described in this section may have been subjected to previous in-loop filtering. 1. Examples of Filters. In general, an encoder or decoder filters sample values of a reference line before intra-picture prediction using the sample values of the reference line. The shape of the filter depends on implementation. FIGS. 18a and 18b show examples (1801, 1802) of filters for sample values of reference lines. In the example (1801) of FIG. 
18a, sample values of a non-adjacent reference line (reference line 1) are filtered using a symmetric one-dimensional (“1D”) filter (1820). The 1D filter (1820) is a low-pass filter having three taps 1, 2, and 1, and a normalization factor of 4. The filtered reference sample values are then used in intra-picture prediction for the current block (1810). The 1D filter (1820) is aligned with the reference line, changes sample values within the reference line, and uses only sample values within the reference line. For filtering of a reference column, the kernel of the 1D filter (1820) is vertically oriented, in alignment with the reference column. Alternatively, another 1D filter (such as [1 2 10 2 1]/16) can be used. Also, instead of aligning with a reference line to be filtered, a 1D filter can intersect the reference line to be filtered, using reference sample values of nearby reference lines. In the example (1802) of FIG. 18b, sample values of a non-adjacent reference line (reference line 1) are filtered using a symmetric two-dimensional (“2D”) filter (1830). The filtered reference sample values are then used in intra-picture prediction for the current block (1810). The 2D filter (1830) has five taps. For example, the 2D filter (1830) is a low-pass filter having a center tap of 4, four other taps of 1, and a normalization factor of 8. Alternatively, the taps and normalization factor have other values. Or, the 2D filter has another shape of kernel. For example, the 2D filter has a 3×3 kernel with nine taps (e.g., center tap of 8, other taps of 1, and normalization factor of 16; or, center tap of 4, corner taps of 1, side taps of 2, and normalization factor of 16). The 2D filter (1830) is aligned with the reference line and only changes sample values within the reference line, but the 2D filter (1830) uses reference sample values outside the reference line. In FIG. 18b, the 2D filter (1830) uses reference sample values in the two nearby reference lines.
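The two filters of FIGS. 18a and 18b might be sketched as follows (integer arithmetic with a half-unit rounding offset, which is an assumption; end samples are left unfiltered here, since boundary handling is an implementation choice):

```python
def filter_line_1d(samples):
    """Symmetric [1, 2, 1] low-pass filter with normalization factor 4,
    applied along one reference line, as in FIG. 18a."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) // 4
    return out

def filter_line_2d_cross(above, line, below):
    """Cross-shaped 2D filter as in FIG. 18b: center tap 4, four neighbor
    taps 1, normalization factor 8. Only `line` is changed; the two nearby
    reference lines contribute samples but keep their original values."""
    out = list(line)
    for i in range(1, len(line) - 1):
        out[i] = (4 * line[i] + line[i - 1] + line[i + 1]
                  + above[i] + below[i] + 4) // 8
    return out

# The outlier value 30 is smoothed toward its neighbors.
print(filter_line_1d([10, 10, 30, 10, 10]))  # [10, 15, 20, 15, 10]
```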
More generally, the filter used to filter reference sample values can have a 1D kernel or 2D kernel. The shape of the 2D kernel can be a cross, a 3×3 window, or other shape. Typically, the center tap has the highest weight value. The filter can have filter taps that make it asymmetric or symmetric. For example, outside of the center tap, an asymmetric filter has higher filter taps closer to the current block. Or, filter taps can vary depending on which reference line is filtered (e.g., filtering some reference lines more aggressively than other reference lines). Or, filter taps can vary depending on information signaled in encoded data. 2. Examples of Filtering Rules. In some example implementations, filtering of reference sample values is performed automatically for all reference sample values. Alternatively, filtering of reference sample values is applied selectively by rule. For example, filtering of reference sample values is applied for some prediction directions, but not applied for other prediction directions. Or, filtering of reference sample values is applied for some block sizes, but not applied for other block sizes. Or, filtering of reference sample values can depend on a combination of factors including prediction direction and block size. Alternatively, filtering of reference sample values is applied selectively in a manner consistent with decision information signaled as part of encoded data. For example, an encoder evaluates results when sample values of a reference line are filtered and evaluates results when the sample values of the reference line are not filtered. The encoder signals decision information indicating whether the decoder should filter the reference line. For example, the decision information can be signaled jointly with a reference line index or signaled with a separate syntax element. A corresponding decoder performs filtering of reference sample values, or skips such filtering, as indicated by the decision information.
In addition to signaling an on/off decision for filtering, the encoder can signal parameters specifying the filter to use (e.g., kernel shape, filter taps). Depending on implementation, the filtered reference line can replace the unfiltered reference line or provide an alternative to the unfiltered reference line. If it provides an alternative, the filtered reference line can be identified with a different reference line index than the unfiltered reference line. Or, the filtered reference line can be designated with a flag, signaled in the bitstream, that indicates whether a given reference line is filtered.

3. Examples of Filtering During Encoding or Decoding.

FIG. 19 illustrates a generalized technique (1900) for filtering sample values of a reference line during encoding or decoding for a current block. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (1900), for example, during the encoding (1420) described with reference to FIG. 14. Or, a decoder such as the video decoder (550) of FIG. 5, another video decoder, or an image decoder can perform the technique (1900), for example, during the decoding (1520) described with reference to FIG. 15. The encoder or decoder selects (1910) one of multiple candidate reference lines of sample values outside a current block. The multiple candidate reference lines include at least one non-adjacent reference line of sample values. The selected reference line can be an adjacent reference line or non-adjacent reference line. The encoder or decoder filters (1920) the selected reference line of sample values. The filtering can use a 1D filter having a kernel that covers multiple sample values within the selected reference line or a 1D filter having a kernel that crosses the selected reference line. The 1D filter can have three taps, five taps, or some other number of taps.
Or, the filtering can use a 2D filter having a kernel that covers multiple sample values of the selected reference line and one or more sample values of each of one or more reference lines next to the selected reference line. The 2D filter can have a cross-shaped kernel, square kernel, or some other shape. The 2D filter can have five taps, nine taps, or some other number of taps. If any positions covered by the kernel are part of the current block, the filtering can ignore such positions and change the normalization factor accordingly. The encoder or decoder can repeat the technique (1900) for any other reference lines to be used in intra-picture prediction for the current block. Encoded data in the bitstream can include a filtering flag that indicates whether or not to perform the filtering (e.g., for the current block, for a unit that includes the current block, for a given reference line). The encoder can set the filtering flag. The decoder, based at least in part on the filtering flag, can decide whether or not to perform the filtering of the selected reference line. The filtering flag can be signaled in the bitstream at the syntax level of a sequence (e.g., in an SPS), picture (e.g., in a PPS), slice (e.g., in a slice header), coding unit, coding block, prediction unit, prediction block, transform unit, or transform block, or at some other syntax level. The filtering flag can be signaled with a separate syntax element, or it can be jointly signaled with other information (e.g., a reference line index). The encoder and decoder can also decide to perform filtering based at least in part on one or more factors. The factor(s) can include intra-picture prediction mode for the current block and block size of the current block. For example, reference sample values may be filtered if the current block is below a certain size (e.g., 16×16 or smaller) and the prediction direction is vertical or horizontal.
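The normalization adjustment for kernel positions that fall inside the current block (or outside the picture) can be sketched as follows for the five-tap cross filter with center tap 4. The `in_current_block` predicate, the 2D-list representation, and the rounding are assumptions of this sketch.

```python
def cross_filter(samples, r, c, in_current_block):
    """Cross-shaped 2D filter: center tap 4, four neighbor taps 1,
    normalization factor 8. Neighbor taps that fall inside the current
    block, or outside the sample array, are ignored and the
    normalization factor is reduced to match. `samples` is a 2D list
    and `in_current_block(r, c)` is a predicate (assumptions of this
    sketch)."""
    acc, norm = 4 * samples[r][c], 4
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if (0 <= rr < len(samples) and 0 <= cc < len(samples[0])
                and not in_current_block(rr, cc)):
            acc += samples[rr][cc]
            norm += 1
    # Round to nearest by adding half the (possibly reduced) normalization.
    return (acc + norm // 2) // norm
```

When two of the four neighbor taps are excluded, the normalization drops from 8 to 6, so the remaining taps are not under-weighted.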
In some prior codec standards and formats, sample values of an adjacent reference line can be filtered. For example, for some prediction directions and block sizes, a low-pass filter [1 2 1]/4 can be applied to sample values of an adjacent reference line. In contrast, according to the preceding approaches described in this section, sample values of non-adjacent reference lines can be filtered. Also, filtering of reference sample values, even in an adjacent reference line, can use non-adjacent reference sample values.

D. Residue Compensation.

This section describes examples of residue compensation performed in conjunction with intra-picture prediction. As part of residue compensation, for a predicted sample value at a given position of a current block, the encoder or decoder calculates a residual value and uses the residual value to adjust the predicted sample value at the given position in the current block. The residual value is based on a difference between a reconstructed sample value and predicted sample value at a given position in an offset region, outside the current block. Residue compensation can improve the effectiveness of intra-picture prediction for the current block by adjusting the predicted sample values of the current block based on results of intra-picture prediction in the adjacent offset region. Residue compensation can be used in combination with one or more other innovations described herein. For example, weighted prediction can be performed after residue compensation, or weighted prediction can be performed before residue compensation. Residue compensation can use filtered reference sample values. Residue compensation can use reference sample values that have been in-loop filtered. The reference sample values used for residue compensation may include padded sample values derived using mode-dependent padding. Post-filtering may be applied to predicted sample values after residue compensation.

1. Examples of Residue Compensation Operations.
FIGS. 17a and 17b show examples (1701, 1702) of intra-picture prediction operations in which sample values can be predicted in an offset region (1721, 1722) between a non-adjacent reference line and a current block (1710). The offset region (1721, 1722) also includes previously reconstructed sample values of other reference lines, assuming the current block (1710) is not at the boundary of the picture. The differences between reconstructed sample values and corresponding predicted sample values, along a prediction direction, in the offset region tend to indicate appropriate corrections to the predicted sample values of the current block along that prediction direction. Residue compensation uses the differences, or residual values, from the offset region to adjust the predicted sample values of the current block. FIGS. 20a-20l illustrate examples of residue compensation during intra-picture prediction. In addition to predicted sample values of a current block (2010) and reference sample values of one or more reference lines, FIGS. 20a-20l show residual values between reference sample values and corresponding predicted sample values of an offset region. In the examples of FIGS. 20a-20l, a predicted sample value at position B of the current block (2010) is calculated as: predB=predB+weight×(refA−predA), where the term weight×(refA−predA) is due to residue compensation. The residual value (“difference”) at position A in the offset region represents the difference between the reconstructed reference sample value refA at position A and the predicted sample value predA at position A. The weight applied to the residual value is different in different approaches, but generally depends on factors including the location of position B in the current block (2010), the intra-picture prediction direction, and the block size of the current block (2010).
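The basic update predB + weight×(refA−predA) can be sketched as follows. Fractional weights are used directly for clarity; a real codec would typically use scaled integer arithmetic (an assumption of this sketch).

```python
def residue_compensate(pred_b, ref_a, pred_a, weight):
    """Adjust the predicted sample value predB at a position in the
    current block using the residual (refA - predA) observed at the
    corresponding position A in the offset region."""
    return pred_b + weight * (ref_a - pred_a)

# Example: horizontal residue compensation across one row of a 4x4
# block, with first-approach weights 3/4, 1/2, 1/4, 0 and a residual of
# refA - predA = +8 at position A in the offset region.
row = [100, 102, 104, 106]
adjusted = [residue_compensate(p, 108, 100, w)
            for p, w in zip(row, (0.75, 0.5, 0.25, 0.0))]
```

The adjustment is strongest near the reference line and tapers to zero at the far side of the block.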
In general, the weight is larger for positions of the current block (2010) that are closer to the reference line used for residue compensation, and the weight is smaller for positions of the current block (2010) that are farther from the reference line used for residue compensation. Also, the number of positions adjusted by residue compensation is larger for larger blocks. In some example implementations, weights are stronger for prediction directions closer to pure horizontal prediction or pure vertical prediction, and weaker for prediction directions closer to pure diagonal prediction (e.g., 45 degrees).

a. First Approach to Residue Compensation.

FIG. 20a shows an example (2001) of horizontal residue compensation when vertical intra-picture prediction is performed for the predicted sample values of the current block (2010), according to a first approach. As FIG. 20a shows, the direction of intra-picture prediction (vertical) is orthogonal to the direction of residue compensation (horizontal). The weight applied for residue compensation in the example (2001) of FIG. 20a depends on position in the current block (2010) and size of the current block (2010). Typically, the weights decrease from the left of the current block (2010) to the right of the current block (2010). For example, for a 4×4 block, the weight for positions of the left column of the current block (2010) is ¾, the weight for positions of the second column is ½, the weight for positions of the third column is ¼, and the weight for positions of the right column of the current block (2010) is 0. Alternatively, the weights can have other values (e.g., 1, ¾, ½, and ¼ for the respective columns). For a larger block (e.g., 8×8, 16×16), the weights can similarly decrease column-by-column in uniform increments (e.g., ⅛ for an 8×8 block, 1/16 for a 16×16 block). Similarly, FIG.
20b shows an example (2002) of vertical residue compensation when horizontal intra-picture prediction is performed for the predicted sample values of the current block (2010), according to the first approach. As FIG. 20b shows, the direction of intra-picture prediction (horizontal) is orthogonal to the direction of residue compensation (vertical). The weight applied for residue compensation in the example (2002) of FIG. 20b depends on position in the current block (2010) and size of the current block (2010). Typically, the weights decrease from the top of the current block (2010) to the bottom of the current block (2010). For example, for a 4×4 block, the weight for positions of the top row of the current block (2010) is ¾, the weight for positions of the second row is ½, the weight for positions of the third row is ¼, and the weight for the positions of the bottom row of the current block (2010) is 0. Alternatively, the weights can have other values (e.g., 1, ¾, ½, and ¼ for the respective rows). For a larger block (e.g., 8×8, 16×16), the weights can similarly decrease row-by-row in uniform increments (e.g., ⅛ for an 8×8 block, 1/16 for a 16×16 block). FIG. 20c shows an example (2003) of residue compensation when DC or planar intra-picture prediction is performed for the predicted sample values of the current block (2010), according to the first approach. As shown in FIG. 20c, horizontal residue compensation is performed for positions below a diagonal line through the current block (2010), running from the top-left corner to the bottom-right corner. For a position B below the diagonal line of the current block (2010), the predicted sample value predB is calculated as: predB=predB+weight×(refA−predA). Vertical residue compensation is performed for positions above the diagonal line through the current block (2010).
For a position B′ above the diagonal line of the current block (2010), the predicted sample value predB′ is calculated as: predB′=predB′+weight×(refA−predA). For all positions of the current block (2010), the weight is weight=(num_filtered−k)/num_lines, where k represents the column index of the position in the current block (2010) for horizontal residue compensation or row index of the position in the current block (2010) for vertical residue compensation, num_filtered is the number of lines of the current block (2010) that are adjusted by residue compensation, and num_lines is the number of lines of the current block (2010). For example, for a 4×4 block, num_filtered is 3, and num_lines is 4. Or, for an 8×8 block, num_filtered is 7, and num_lines is 8. The value of k ranges from 0 to num_filtered. Thus, the weight is highest for the leftmost column (column 0, when k=0) or topmost row (row 0, when k=0). The weight decreases as k increases, moving positions to the right or down in the current block (2010). Alternatively, the weights can have other values. In the first approach (FIGS. 20a-20c), residue compensation is not used for other prediction modes.

b. Second Approach to Residue Compensation.

FIG. 20d shows an example (2004) of residue compensation when DC or planar intra-picture prediction is performed for the predicted sample values of the current block (2010), according to a second approach. The example (2004) of FIG. 20d is the same as the example (2003) of FIG. 20c except for positions at the top-left of the current block (2010). As shown in FIG. 20d, horizontal residue compensation and vertical residue compensation are performed for positions in the top-left quarter of the current block (2010). For a position B″ in the top-left quarter of the current block (2010), the predicted sample value predB″ is: predB″=predB″+weightH×(refA″−predA″)+weightW×(refA′″−predA′″).
The weights weightH and weightW can be the same as the weights defined in the example (2003) of FIG. 20c, or they can be half of the weights defined in the example (2003) of FIG. 20c. Alternatively, the weights can have other values. FIG. 20e shows an example (2005) of residue compensation when angular intra-picture prediction for one of modes 2-17 (from FIG. 7) is performed for the predicted sample values of the current block (2010), according to the second approach. The prediction directions for modes 2-17 are, at least generally, horizontal. The residue compensation is, at least generally, vertical. As FIG. 20e shows, the direction of intra-picture prediction is orthogonal to the direction of residue compensation. The weight applied for residue compensation in the example (2005) of FIG. 20e depends on position in the current block (2010) and size of the current block (2010). In particular, the weight is calculated as: weight=(13−abs(dir_mode−HOR_IDX))×(num_filtered−k)/64, where k represents the row index of the position in the current block (2010), and num_filtered is the number of lines of the current block (2010) that are adjusted by residue compensation. For example, the value of num_filtered is 2 or 3 for a 4×4 block. The value of num_filtered can be larger if the current block (2010) is larger, with proportional increases to the denominator (64). The value of k ranges from 0 to num_filtered. The value dir_mode indicates the mode of intra-picture prediction (in the range of 2 . . . 17 for FIG. 20e), and the value HOR_IDX is the mode of horizontal prediction (10 in FIG. 7). The factor (13−abs(dir_mode−HOR_IDX)) makes the weight higher for horizontal prediction and lower for prediction directions further from horizontal prediction. Alternatively, the weights can have other values. The position A can be located at an integer-sample offset, but may also be located at a fractional sample offset horizontally. 
In this case, the predicted sample value and reconstructed reference sample value at position A are each interpolated. For example, the reference sample value at position A is interpolated using bilinear filtering between the two nearest sample values of a reference row that includes position A, and the predicted sample value at position A is interpolated using bilinear filtering between appropriate sample values of the reference column used for the intra-picture prediction (that is, between the two reference sample values nearest to where a projection of the prediction direction through position A intersects the reference column). (In some cases, position A may be “behind” the reference column used for intra-picture prediction, but a predicted sample value and reference sample value can still be determined for position A.) FIG. 20f shows an example (2006) of residue compensation when angular intra-picture prediction for mode 18 (from FIG. 7) is performed for the predicted sample values of the current block (2010), according to the second approach. In the example (2006) of FIG. 20f, for positions of the current block (2010) that are above a diagonal line through the current block (2010), running from the top-left corner to the bottom-right corner, vertical residue compensation is performed as in the example (2005) of FIG. 20e. For positions of the current block (2010) that are below the diagonal line through the current block (2010), horizontal residue compensation is performed as in the example (2007) of FIG. 20g. FIG. 20g shows an example (2007) of residue compensation when angular intra-picture prediction for one of modes 19-34 (from FIG. 7) is performed for the predicted sample values of the current block (2010), according to the second approach. The prediction directions for modes 19-34 are, at least generally, vertical. The residue compensation is, at least generally, horizontal. As FIG. 
20g shows, the direction of intra-picture prediction is orthogonal to the direction of residue compensation. The weight applied for residue compensation in the example (2007) of FIG. 20g depends on position in the current block (2010) and size of the current block (2010). In particular, the weight is calculated as: weight=(13−abs(dir_mode−VER_IDX))×(num_filtered−k)/64, where k represents the column index of the position in the current block (2010), and num_filtered is the number of lines of the current block (2010) that are adjusted by residue compensation. For example, the value of num_filtered is 2 or 3 for a 4×4 block. The value of num_filtered can be larger if the current block (2010) is larger, with proportional increases to the denominator (64). The value of k ranges from 0 to num_filtered. The value dir_mode indicates the mode of intra-picture prediction (in the range of 19 . . . 34 for FIG. 20g), and the value VER_IDX is the mode of vertical prediction (26 in FIG. 7). The factor (13−abs(dir_mode−VER_IDX)) makes the weight higher for vertical prediction and lower for prediction directions further from vertical prediction. Alternatively, the weights can have other values. The position A can be located at an integer-sample offset, but may also be located at a fractional sample offset vertically. In this case, the predicted sample value and reconstructed reference sample value at position A are each interpolated. For example, the reference sample value at position A is interpolated using bilinear filtering between the two nearest sample values of a reference column that includes position A, and the predicted sample value at position A is interpolated using bilinear filtering between appropriate sample values of the reference row used for the intra-picture prediction (that is, between the two reference sample values nearest to where a projection of the prediction direction through position A intersects the reference row). 
(In some cases, position A may be above the reference row used for intra-picture prediction, but a predicted sample value and reference sample value can still be determined for position A.)

c. Third Approach to Residue Compensation.

FIGS. 20h-20l show residue compensation according to a third approach. In the third approach, residue compensation for DC prediction mode or planar prediction mode is the same as in the second approach. For many intra-picture prediction modes, however, the direction of residue compensation is no longer orthogonal to the prediction direction. Instead, the direction of residue compensation is pure horizontal or vertical, or is parallel to the prediction direction. FIG. 20h shows an example (2008) of residue compensation when angular intra-picture prediction for one of modes 2-6 (from FIG. 7) is performed for the predicted sample values of the current block (2010), according to the third approach. As FIG. 20h shows, the direction of intra-picture prediction is parallel to the direction of residue compensation, but in the opposite direction (inverse). The weight applied for residue compensation in the example (2008) of FIG. 20h depends on position in the current block (2010) and size of the current block (2010). In particular, the weight is calculated as: weight=(8−dir_mode)×(num_filtered−k)/32, where k represents the row index of the position in the current block (2010), and num_filtered is the number of lines of the current block (2010) that are adjusted by residue compensation. For example, the value of num_filtered is 2 or 3 for a 4×4 block. The value of num_filtered can be larger if the current block (2010) is larger, with proportional increases to the denominator (32). The value of k ranges from 0 to num_filtered. The value dir_mode indicates the mode of intra-picture prediction (in the range of 2 . . . 6 for FIG. 20h).
The factor (8−dir_mode) makes the weight higher for prediction that is closer to horizontal prediction and lower for prediction that is further from horizontal prediction. Alternatively, the weights can have other values. The position A can be located at an integer-sample offset, but may also be located at a fractional sample offset horizontally. In this case, the predicted sample value and reconstructed reference sample value at position A are each interpolated. For example, the reference sample value at position A is interpolated using bilinear filtering between the two nearest sample values of a reference row that includes position A, and the predicted sample value at position A is interpolated using bilinear filtering between appropriate sample values of the reference column used for the intra-picture prediction (that is, between the two reference sample values nearest to where a projection of the prediction direction through position A intersects the reference column). FIG. 20i shows an example (2009) of residue compensation when angular intra-picture prediction for one of modes 7-13 (from FIG. 7) is performed for the predicted sample values of the current block (2010), according to the third approach. As FIG. 20i shows, the direction of intra-picture prediction is, at least generally, horizontal, and the direction of residue compensation “snaps” to pure vertical prediction. The weight applied for residue compensation in the example (2009) of FIG. 20i is the same as the weight applied in the example (2005) of FIG. 20e. FIG. 20j shows an example (2010) of residue compensation when angular intra-picture prediction for one of modes 14-22 (from FIG. 7) is performed for the predicted sample values of the current block (2010), according to the third approach. As FIG. 20j shows, the direction of intra-picture prediction is parallel to the direction of residue compensation, and in the same direction. 
The weight applied for residue compensation in the example (2010) of FIG. 20j depends on position in the current block (2010) and size of the current block (2010). In particular, the weight is calculated as: weight=(num_filtered−k)/4, where k represents the lower index between the column index and row index of the position in the current block (2010), and num_filtered is the number of lines of the current block (2010) that are adjusted by residue compensation. For example, the value of num_filtered is 2 or 3 for a 4×4 block. The value of num_filtered can be larger if the current block (2010) is larger, with proportional increases to the denominator (4). The value of k ranges from 0 to num_filtered. Alternatively, the weights can have other values. The position A can be located at an integer-sample offset, but may also be located at a fractional sample offset vertically (or horizontally). In this case, the predicted sample value and reconstructed reference sample value at position A are each interpolated. For example, the reference sample value at position A is interpolated using bilinear filtering between the two nearest sample values of a reference column that includes position A, and the predicted sample value at position A is interpolated using bilinear filtering between appropriate sample values of the reference column used for the intra-picture prediction (that is, between the two reference sample values nearest to where the prediction direction intersects the reference column).
Or, the reference sample value at position A is interpolated using bilinear filtering between the two nearest sample values of a reference row that includes position A, and the predicted sample value at position A is interpolated using bilinear filtering between appropriate sample values of the reference row used for the intra-picture prediction (that is, between the two reference sample values nearest to where a projection of the prediction direction through position A intersects the reference row). FIG. 20k shows an example (2011) of residue compensation when angular intra-picture prediction for one of modes 23-29 (from FIG. 7) is performed for the predicted sample values of the current block (2010), according to the third approach. As FIG. 20k shows, the direction of intra-picture prediction is, at least generally, vertical, and the direction of residue compensation “snaps” to pure horizontal prediction. The weight applied for residue compensation in the example (2011) of FIG. 20k is the same as the weight applied in the example (2007) of FIG. 20g. FIG. 20l shows an example (2012) of residue compensation when angular intra-picture prediction for one of modes 30-34 (from FIG. 7) is performed for the predicted sample values of the current block (2010), according to the third approach. As FIG. 20l shows, the direction of intra-picture prediction is parallel to the direction of residue compensation, but in the opposite direction (inverse). The weight applied for residue compensation in the example (2012) of FIG. 20l depends on position in the current block (2010) and size of the current block (2010). In particular, the weight is calculated as: weight=(dir_mode−28)×(num_filtered−k)/32, where k represents the column index of the position in the current block (2010), and num_filtered is the number of lines of the current block (2010) that are adjusted by residue compensation. For example, the value of num_filtered is 2 or 3 for a 4×4 block. 
The value of num_filtered can be larger if the current block (2010) is larger, with proportional increases to the denominator (32). The value of k ranges from 0 to num_filtered. The value dir_mode indicates the mode of intra-picture prediction (in the range of 30 . . . 34 for FIG. 20l). The factor (dir_mode−28) makes the weight higher for prediction that is closer to vertical prediction and lower for prediction that is further from vertical prediction. Alternatively, the weights can have other values. The position A can be located at an integer-sample offset, but may also be located at a fractional sample offset vertically. In this case, the predicted sample value and reconstructed reference sample value at position A are each interpolated. For example, the reference sample value at position A is interpolated using bilinear filtering between the two nearest sample values of a reference column that includes position A, and the predicted sample value at position A is interpolated using bilinear filtering between appropriate sample values of the reference row used for the intra-picture prediction (that is, between the two reference sample values nearest to where a projection of the prediction direction through position A intersects the reference row). In the examples of FIGS. 20a-20l, the position A (or A′, A″, or A′″) for the residual value is in an adjacent reference line (reference line 0). Alternatively, the position A (or A′, A″, or A′″) for the residual value can be in a non-adjacent reference line (but still be along the appropriate direction of residue compensation). For example, the position A (or A′, A″, or A′″) for the residual value can be in the reference line that is closest to the reference line used for intra-picture prediction. Or, the position A (or A′, A″, or A′″) for the residual value can be in the reference line that is closest to halfway between the current block (2010) and the reference line used for intra-picture prediction. 
Alternatively, position A (or A′, A″, or A′″) can be located somewhere else along the direction of residue compensation. As needed, a reference sample value and interpolated sample value at position A (or A′, A″, or A′″) can be calculated using bilinear interpolation horizontally and/or vertically between adjacent reference sample values at integer-sample offsets. Also, residue compensation can use multiple residual values at different positions in the offset region, with different residual values getting different weights or identical weights.

2. Examples of Residue Compensation During Encoding or Decoding.

FIG. 21 shows a generalized technique (2100) for encoding that includes intra-picture prediction with residue compensation. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (2100). The encoder receives (2110) a picture, encodes (2120) the picture to produce encoded data, and outputs (2130) the encoded data as part of a bitstream. As part of the encoding (2120), the encoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. When it performs intra-picture prediction for the current block, the encoder performs residue compensation. The encoder checks (2140) whether to continue with the next picture. If so, the encoder receives (2110) and encodes (2120) the next picture. FIG. 22 shows a generalized technique (2200) for decoding that includes intra-picture prediction with residue compensation. A decoder such as the video decoder (550) of FIG. 5, another video decoder, or an image decoder can perform the technique (2200). The decoder receives (2210) encoded data as part of a bitstream, decodes (2220) the encoded data to reconstruct a picture, and outputs (2230) the reconstructed picture.
As part of the decoding, the decoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. When it performs intra-picture prediction for the current block, the decoder performs residue compensation. The decoder checks (2240) whether to continue with the next picture. If so, the decoder receives (2210) encoded data for the next picture and decodes (2220) the encoded data for the next picture. The residue compensation performed during the encoding (2120) or decoding (2220) can include various operations. For example, for a predicted sample value at a given position in the current block, the encoder or decoder calculates a residual value at a given position in an offset region (outside the current block). The residual value is based on a difference between a predicted sample value at the given position in the offset region and a reconstructed sample value at the given position in the offset region. Examples of positions in offset regions are shown in FIGS. 20a-20l. The given position in the offset region can have an integer-sample offset horizontally and integer-sample offset vertically. In this case, the predicted sample value and reconstructed sample value at the given position in the offset region are previous sample values. Or, the given position in the offset region can have a fractional-sample offset horizontally and/or vertically. In this case, the predicted sample value and reconstructed sample value at the given position in the offset region are interpolated values. The encoder/decoder uses the residual value to adjust the predicted sample value at the given position in the current block. The encoder/decoder can apply a weight to the residual value, producing a weighted residual value that is used to adjust the predicted sample value at the given position in the current block. 
The weight that is applied can depend on factors such as the given position in the current block, the number of lines of sample values of the current block that are processed with the residue compensation, and prediction direction of the intra-picture prediction for the current block, as explained with reference to the examples of FIGS. 20a-20l. Alternatively, the weight depends on other and/or additional factors. For a directional prediction mode, intra-picture prediction for a predicted sample value at a given position in the current block follows a prediction direction. Residue compensation follows a residue compensation direction from the given position in the offset region to the given position in the current block. In some cases, the residue compensation direction is orthogonal to the prediction direction (see, e.g., FIGS. 20a, 20b, 20e-20g). In other cases, the residue compensation direction is opposite the prediction direction (see, e.g., FIGS. 20h, 20l). In still other cases, the residue compensation direction is aligned with the prediction direction (see, e.g., FIG. 20j), or the residue compensation direction “snaps” to a pure horizontal or pure vertical direction that is close to orthogonal to the prediction direction (see, e.g., FIGS. 20i, 20k). For a non-directional prediction mode, the residue compensation can include different operations. For example, for a predicted sample value at a given position of a current block, the encoder/decoder calculates a first residual value at a first position in an offset region (outside the current block). The first residual value is based on a difference between a predicted sample value and reconstructed sample value at the first position in the offset region. The encoder/decoder also calculates a second residual value at a second position in the offset region. 
The second residual value is based on a difference between a predicted sample value and reconstructed sample value at the second position in the offset region. The encoder/decoder uses the first and second residual values to adjust the predicted sample value at the given position in the current block (see, e.g., FIG. 20d). D. Weighted Prediction. This section describes examples of weighted prediction during intra-picture prediction with multiple reference lines. Weighted prediction can improve the effectiveness of intra-picture prediction for a current block by blending predicted sample values generated using different reference lines. Weighted prediction can be used in combination with one or more other innovations described herein. For example, weighted prediction can be performed after residue compensation (e.g., residue compensation performed for intermediate predicted sample values), or weighted prediction can be performed before residue compensation (e.g., residue compensation performed for final weighted sample values). Weighted prediction can use filtered reference sample values. Weighted prediction can use reference sample values that have been in-loop filtered. The reference sample values used for weighted prediction may include padded sample values derived using mode-dependent padding. Post-filtering may be applied to intermediate predicted sample values, to weighted sample values (before combination), and/or to final weighted sample values. 1. Examples of Weighted Prediction. FIG. 23 shows an example (2300) of weighted prediction during intra-picture prediction with multiple reference lines. In FIG. 23, reference sample values from the adjacent reference column and a non-adjacent reference column are used to predict the sample values of a current block (2310). Intra-picture prediction using the reference sample values of the non-adjacent column (reference line 3) generates a first set of intermediate predicted sample values (2311). 
Intra-picture prediction using the reference sample values of the adjacent column (reference line 0) generates a second set of intermediate predicted sample values (2312). The intra-picture prediction from the different reference lines can have the same prediction mode (e.g., same prediction direction, or same non-directional prediction mode) or different prediction modes (e.g., different prediction directions, or different non-directional prediction modes, or a mix of different directional and non-directional prediction modes). A weight (2321) is applied to the first set of intermediate predicted sample values (2311), which produces a first set of weighted sample values (2331). Another weight (2322) is applied to the second set of intermediate predicted sample values (2312), which produces a second set of weighted sample values (2332). Weights for the respective reference lines can be specified in various ways, as explained below. The different sets of weighted sample values are combined to produce final predicted sample values (2340) for the current block (2310). For example, the weighted sample values are added on a position-by-position basis for the respective positions in the current block (2310). Although FIG. 23 shows two reference lines used for weighted prediction, weighted prediction can use more reference lines (e.g., 3, 4, 5). Also, although FIG. 23 shows weighted prediction that uses multiple reference columns, weighted prediction can instead use multiple reference rows or a mix of reference columns and rows. For some intra-prediction modes, reference sample values from a reference column and reference sample values from a reference row are used together to generate a single set of intermediate predicted sample values. Weighted prediction, in contrast, involves weighting and combining different sets of intermediate predicted sample values. 2. Examples of Signaling of Weights. 
The different reference lines used in weighted prediction can be identified with reference line indices signaled as part of encoded data in the bitstream. Alternatively, one or more of the reference lines used in weighted prediction can be pre-defined. For example, weighted prediction can use two reference lines: a non-adjacent reference line (identified by a reference line index signaled in the bitstream) and the adjacent reference line (which is always used, and hence not identified with a reference line index). Alternatively, weighted prediction can use some other pre-defined reference line in addition to one or more signaled reference lines, or weighted prediction can use multiple pre-defined reference lines. In one approach to specifying weights for weighted prediction, the weights used in the weighted prediction are fixed (and hence not signaled as part of encoded data in the bitstream). For example, the weighted prediction always uses two reference lines with fixed weights: a non-adjacent reference line with a weight of ¾ and the adjacent reference line with a weight of ¼. Alternatively, the weighted prediction uses different, pre-defined weights for the two reference lines or uses more reference lines with pre-defined weights. The pre-defined weights can depend on a rule (e.g., distance between a non-adjacent reference line and the current block). In another approach to specifying weights for weighted prediction, the weights used in the weighted prediction are signaled as part of encoded data in the bitstream. This provides flexibility for the encoder to set different weights in different situations (e.g., different non-adjacent reference lines used with the adjacent reference line, different combinations of non-adjacent reference lines). For example, one weight is signaled per reference line used in weighted prediction. Different weights can be signaled at block level or unit level, wherever reference line indices are signaled. 
When n weights add up to some defined amount (e.g., 1), the n weights can be signaled with n−1 values, since the last weight is the defined amount minus the sum of the other n−1 weights. In some example implementations, weights for weighted prediction are specified using a table-based approach. A weight table is a set of weights for weighted prediction. A weight table includes weights for the reference lines used in weighted prediction (e.g., a non-zero weight per reference line). The weight table can be expressly signaled as part of encoded data in the bitstream (e.g., for a block or unit). Or, multiple weight tables can be stored at the encoder or decoder (e.g., pre-defined weight tables, or weight tables signaled at a higher syntax level such as sequence level in an SPS, picture level in a PPS, or slice level in a slice header or associated syntax structure). In this case, a table index signaled at block level or unit level identifies one of the weight tables for use in weighted prediction. A weight table can include a set of weights associated with a specific, defined combination of reference lines, thereby implicitly identifying the reference lines. Or, a weight table can include a set of n weights associated with any combination of n reference lines, which are identified separately. When n weights add up to some defined amount (e.g., 1), the n weights in a table can be specified with n−1 values. For example, the encoder and decoder store four different weight tables for a combination of three reference lines. The four weight tables are: {¾, 3/16, 1/16}, {¼, ⅝, ⅛}, {3/16, ⅛, 11/16}, and {⅜, 5/16, 5/16}. A table index of 0, 1, 2, or 3 indicates which of the weight tables to use (e.g., for a block or unit). The table index can be entropy coded/decoded. In the preceding examples of weight tables, a weight table includes only non-zero weights (that is, a non-zero weight per reference line used in weighted prediction). 
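The table-based approach can be sketched as follows, using the four example weight tables above and exact fractions so that each table verifiably sums to the defined amount of 1. The names are illustrative, not from any codec implementation.

```python
from fractions import Fraction as F

# The four example weight tables for a combination of three reference
# lines; a table index of 0, 1, 2, or 3 selects one of them.
WEIGHT_TABLES = [
    (F(3, 4),  F(3, 16), F(1, 16)),
    (F(1, 4),  F(5, 8),  F(1, 8)),
    (F(3, 16), F(1, 8),  F(11, 16)),
    (F(3, 8),  F(5, 16), F(5, 16)),
]

def weights_for(table_index):
    """Return the weights selected by a (decoded) table index."""
    return WEIGHT_TABLES[table_index]

# Since the n weights sum to 1, only n-1 of them need to be signaled;
# the last weight is implied as the remainder.
def implied_last_weight(signaled_weights):
    return 1 - sum(signaled_weights)
```

Because the last weight is implied, each three-weight table could be signaled with two values, mirroring the n−1 signaling shortcut described above.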
Alternatively, a weight table can include a weight per candidate reference line, including a zero weight for a candidate reference line if the candidate reference line is not used for weighted prediction. In the preceding examples of weighted prediction, weights are specified for the reference lines of a current block. For a given reference line, the same weight is applied to all predicted sample values generated using that reference line. Alternatively, weighted prediction uses more fine-grained weights. For example, for a given reference line, different weights can be specified for different positions of the current block. The position-specific weights can be pre-defined (fixed), or signaled as part of encoded data in the bitstream (flexible or table-based). Or, the position-specific weights can be determined by rule starting from weights determined at a higher level (e.g., block level, unit level), or they can be indicated by pattern information signaled as part of encoded data in the bitstream for the current block or unit (e.g., the left half of the current block uses predicted sample values from a first reference line, and the right half of the current block uses predicted sample values from a second reference line). This flexibility allows, for example, sample values at some positions of the current block to be predicted from one reference line (or mostly one reference line), while sample values at other positions of the current block are predicted from another reference line (or mostly the other reference line). 3. Examples of Weighted Prediction During Encoding and Decoding. FIG. 24 shows a generalized technique (2400) for encoding that includes intra-picture prediction with weighted prediction. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (2400). 
The encoder receives (2410) a picture, encodes (2420) the picture to produce encoded data, and outputs (2430) the encoded data as part of a bitstream. As part of the encoding (2420), the encoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. When it performs intra-picture prediction for the current block, the encoder performs weighted prediction of the sample values of the current block using multiple reference lines of sample values. The multiple reference lines include at least one non-adjacent reference line of sample values. The encoder checks (2440) whether to continue with the next picture. If so, the encoder receives (2410) and encodes (2420) the next picture. FIG. 25 shows a generalized technique (2500) for decoding that includes intra-picture prediction with weighted prediction. A decoder such as the video decoder (550) of FIG. 5, another video decoder, or an image decoder can perform the technique (2500). The decoder receives (2510) encoded data as part of a bitstream, decodes (2520) the encoded data to reconstruct a picture, and outputs (2530) the reconstructed picture. As part of the decoding, the decoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. When it performs intra-picture prediction for the current block, the decoder performs weighted prediction of the sample values of the current block using multiple reference lines of sample values. The multiple reference lines include at least one non-adjacent reference line of sample values. The decoder checks (2540) whether to continue with the next picture. If so, the decoder receives (2510) encoded data for the next picture and decodes (2520) the encoded data for the next picture. 
The weighted prediction performed during the encoding (2420) or decoding (2520) can include various operations. For example, for a given position in the current block, the encoder or decoder performs operations for each of multiple reference lines. Specifically, the encoder/decoder generates an intermediate predicted sample value at the given position using at least one sample value of that reference line, and applies a weight to the intermediate predicted sample value to produce a weighted sample value at the given position. The encoder/decoder can generate intermediate predicted sample values and apply weights on a reference line-by-reference line basis for all sample values of the current block (see FIG. 26). The encoder/decoder combines the weighted sample values at the given position to produce a final predicted sample value at the given position in the current block. FIG. 26 shows an example technique (2600) for weighted prediction during intra-picture prediction with multiple reference lines for a current block. The example technique can be performed by an encoder during the encoding (2420) of FIG. 24 or by a decoder during the decoding (2520) of FIG. 25. The encoder or decoder selects (2610) one of multiple candidate reference lines of sample values outside a current block. The multiple candidate reference lines include at least one non-adjacent reference line of sample values. The encoder or decoder generates (2620) intermediate predicted sample values for the current block using the selected reference line, and applies (2630) a weight for the reference line to the intermediate predicted sample values. This produces weighted sample values. The encoder or decoder checks (2640) whether to continue with another one of the multiple candidate reference lines. If so, the encoder or decoder selects (2610) the next one of the multiple candidate reference lines. 
After producing weighted sample values for each of the multiple candidate reference lines to be used in intra-picture prediction, the encoder or decoder combines (2650) the weighted sample values for the respective reference lines used in intra-picture prediction. The encoder or decoder can repeat the technique (2600) for other blocks of the picture. In the weighted prediction, each of the multiple reference lines used in intra-picture prediction can have a weight associated with that reference line. The weights for the respective reference lines can be signaled as part of encoded data in the bitstream. Or, the weights for the multiple lines can be pre-defined (and hence not signaled as part of encoded data in the bitstream). For example, if the multiple reference lines used in intra-picture prediction include a non-adjacent reference line (identified by reference line index) and adjacent reference line (assumed), pre-defined weights for weighted prediction can be used (e.g., ¾ and ¼, or ½ and ½). The weights for the multiple reference lines used in the intra-picture prediction can be defined at block level for the current block. Alternatively, the weights for the multiple reference lines used in the intra-picture prediction can be defined for individual sample values of the current block. Alternatively, a weight table can include weights for the multiple reference lines used in intra-picture prediction, or weights for all of the multiple candidate reference lines whether used in intra-picture prediction or not. For example, the weight table includes a non-zero weight for each of the multiple reference lines used in the intra-picture prediction. The weight table can also include a zero weight for any other reference line of the multiple candidate reference lines. The weight table can be signaled as part of encoded data in the bitstream. 
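The per-reference-line loop of the technique (2600) of FIG. 26 (generate intermediate predicted sample values for each reference line, apply that line's weight, then combine) can be sketched as follows. The arrays and weights are hypothetical; intermediate predicted sample values are assumed to have already been generated by intra-picture prediction from each reference line.

```python
def weighted_prediction(intermediate_preds, weights):
    """Blend per-reference-line intermediate predictions into final
    predicted sample values, position by position.

    intermediate_preds -- one list of intermediate predicted sample values
                          per reference line (all the same length)
    weights            -- one weight per reference line (summing to 1)
    """
    num_positions = len(intermediate_preds[0])
    final = []
    for pos in range(num_positions):
        acc = 0
        for preds, w in zip(intermediate_preds, weights):
            acc += w * preds[pos]   # apply the line's weight, then accumulate
        final.append(acc)
    return final

# Two reference lines with the pre-defined weights 3/4 and 1/4 mentioned
# above; the sample values are illustrative.
final = weighted_prediction([[100, 104], [120, 96]], [0.75, 0.25])
```

The combination is position-by-position addition of the weighted sample values, matching the combining stage (2650) of the technique.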
Or, an identifier of the weight table can be signaled as part of the encoded data, referencing one of multiple weight tables available at the encoder and decoder. The intra-picture prediction for the respective reference lines can use the same intra-picture prediction mode (e.g., specified for the current block at block level or unit level). Alternatively, a separate intra-picture prediction mode can be specified for each of the multiple reference lines used in the intra-picture prediction. In this case, intra-picture prediction for different reference lines into the current block can use different intra-picture prediction modes. E. Mode-Dependent Padding to Replace Unavailable Reference Sample Values. When performing intra-picture prediction, reference sample values may be unavailable because they lie outside a picture boundary or slice boundary. Or, reference sample values may be unavailable because they are part of a block that has not yet been encoded/decoded/reconstructed. Or, in some example implementations, sample values in a neighboring inter-coded block may be unavailable for use as reference sample values (e.g., if intra-picture prediction is constrained to only use reference sample values from intra-coded blocks—so-called “constrained intra-picture prediction”). In some previous codec standards, unavailable reference sample values in a reference line can be replaced by repeating the last available reference sample value in that reference line. This can be called horizontal repeat padding or vertical repeat padding. This section describes examples of mode-dependent padding to replace unavailable reference sample values for intra-picture prediction. Compared to simple padding of sample values within a reference line, mode-dependent padding can yield padded sample values that provide more effective intra-picture prediction. 
Mode-dependent padding to replace unavailable reference sample values can be used in combination with one or more other innovations described herein. For example, mode-dependent padding as described in this section can be performed before residue compensation and/or weighted prediction. 1. Examples of Mode-Dependent Padding. With mode-dependent padding, unavailable sample values in a reference line can be replaced with available, actual reference sample values along an intra-picture prediction direction. In general, if the sample value at a given position of a reference line is unavailable (e.g., because it is outside a picture boundary or slice boundary, or it has not yet been encoded/decoded/reconstructed, or it cannot be used for intra-picture prediction due to constrained intra-picture prediction or some other constraint), one or more reference sample values are evaluated in one or more other reference lines. The reference sample value(s) of the other reference line(s) are evaluated along a padding direction. For example, the padding direction is the prediction direction for an intra-picture prediction mode being used. Or, for a non-directional prediction mode, the padding direction is horizontal or vertical. Along the padding direction, a projection passes through the given position (with the unavailable sample value) and through the position(s) of the other reference line(s). The reference sample value(s) on the projection can be on either side of the given position with the unavailable sample value. When a reference sample value on the projection has a fractional-sample offset, the reference sample value is determined by interpolating between the two nearest reference sample values at integer-sample offsets in the same reference line. For example, the interpolation uses bilinear filtering. If one of the two nearest reference sample values at integer-sample offsets is not available, the reference sample value on the projection is deemed to be not available. 
The reference sample value(s) on the projection are checked sequentially, starting at the position closest to the given position (with the unavailable sample value), and continuing outward, until a position is found with an available, actual reference sample value or the last position is evaluated. If two positions are equidistant from the given position with the unavailable sample value, the position closer to the current block is checked first. If an available, actual sample value is found, that sample value is used in place of the unavailable sample value at the given position. If no available, actual sample value is found along the padding direction, the closest available, actual sample value in the same reference line as the given position is used (e.g., as in horizontal repeat padding or vertical repeat padding). FIG. 27 shows a first example (2700) of mode-dependent padding to replace unavailable sample values for intra-picture prediction of sample values of a current block (2710). In FIG. 27, the sample value at position A is unavailable. The reference sample values are at integer-sample offsets in the picture. The unavailable sample value at position A is replaced with a reconstructed reference sample value at integer-sample offsets. Along the prediction direction for the intra-picture prediction, a projection passes through position A. This projection also passes through positions B, C, D, and E of other reference lines. Positions B, C, D, and E are checked sequentially, starting with the position closest to position A, and continuing outward from position A on either side, until a position is found that has an available, actual sample value. If an available, actual sample value is found at position B, C, D, or E, that sample value is used in place of the unavailable sample value at position A. 
If no available, actual sample value is found at positions B, C, D, and E, the closest available, actual sample value in the same reference line is used (e.g., at position F, G, and so on). FIG. 28 shows a second example (2800) of mode-dependent padding to replace unavailable sample values for intra-picture prediction of sample values of a current block (2810). In FIG. 28, the sample value at position A is unavailable. The unavailable sample value at position A may be replaced with an interpolated sample value at a fractional-sample offset. Along the prediction direction for the intra-picture prediction, a projection passes through position A. This projection also passes through other reference lines at fractional-sample offsets. For example, the projection passes through a reference line at a position between positions B′ and B″. An interpolated sample value is calculated on the projection between B′ and B″. Interpolated sample values are similarly calculated between positions C′ and C″, between positions D′ and D″, and between positions E′ and E″. The interpolated sample values (between B′ and B″, between C′ and C″, between D′ and D″, and between E′ and E″) are checked sequentially, starting with the position closest to position A, and continuing outward from position A on either side, until a position is found that has an available sample value. If an available sample value is found, that sample value is used in place of the unavailable sample value at position A. Otherwise, the closest available, actual sample value in the same reference line is used (e.g., at position F, G, and so on). In FIGS. 27 and 28, the intra-picture prediction mode is an angular prediction mode having a prediction direction. 
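The sequential checking in FIGS. 27 and 28 reduces to a simple ordered search, sketched below. Candidate ordering and availability are assumed to have been computed already (interpolation at fractional-sample offsets is omitted), and the function name is illustrative.

```python
def mode_dependent_pad(candidates, same_line_fallbacks):
    """Replace an unavailable reference sample value.

    candidates          -- sample values (or None if unavailable) at projection
                           positions in other reference lines, ordered by
                           increasing distance from the unavailable position
                           (e.g., B, C, D, E in FIG. 27)
    same_line_fallbacks -- available values in the same reference line,
                           nearest first (e.g., F, G, ...), used when no
                           candidate along the projection is available
    """
    for value in candidates:
        if value is not None:   # first available, actual sample value wins
            return value
    # Fall back to repeat padding within the same reference line.
    for value in same_line_fallbacks:
        if value is not None:
            return value
    return None  # nothing available at all

# B is unavailable and C is available, so C's value (87, an illustrative
# sample value) replaces the unavailable sample value at position A.
padded = mode_dependent_pad([None, 87, 90, None], [85])
```

The fallback branch corresponds to the horizontal or vertical repeat padding used when no sample value is available along the padding direction.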
For DC prediction mode and planar prediction mode, an unavailable sample value at a position that is above the current block or above-right of the current block can be replaced with an actual sample value using vertical repeat padding. An unavailable sample value at a position that is left of the current block or below-left of the current block can be replaced with an actual sample value using horizontal repeat padding. An unavailable sample value at a top-left position relative to the current block can be replaced by aggregating nearby reference sample values as shown in FIG. 16a or 16b. 2. Examples of Mode-Dependent Padding During Encoding or Decoding. FIG. 29 shows a generalized technique (2900) for encoding that includes intra-picture prediction with mode-dependent padding to replace one or more unavailable reference sample values. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (2900). The encoder receives (2910) a picture, encodes (2920) the picture to produce encoded data, and outputs (2930) the encoded data as part of a bitstream. As part of the encoding (2920), the encoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. When it performs intra-picture prediction for the current block, the encoder performs mode-dependent padding to replace one or more unavailable reference sample values. The encoder checks (2940) whether to continue with the next picture. If so, the encoder receives (2910) and encodes (2920) the next picture. FIG. 30 shows a generalized technique (3000) for decoding that includes intra-picture prediction with mode-dependent padding to replace one or more unavailable reference sample values. A decoder such as the video decoder (550) of FIG. 5, another video decoder, or an image decoder can perform the technique (3000). 
The decoder receives (3010) encoded data as part of a bitstream, decodes (3020) the encoded data to reconstruct a picture, and outputs (3030) the reconstructed picture. As part of the decoding, the decoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. When it performs intra-picture prediction for the current block, the decoder performs mode-dependent padding to replace one or more unavailable reference sample values. The decoder checks (3040) whether to continue with the next picture. If so, the decoder receives (3010) encoded data for the next picture and decodes (3020) the encoded data for the next picture. FIG. 31 shows an example technique (3100) for mode-dependent padding to replace an unavailable sample value during intra-picture prediction. The example technique can be performed by an encoder during the encoding (2920) of FIG. 29 or by a decoder during the decoding (3020) of FIG. 30. To start, the encoder or decoder selects (3110) one of multiple candidate reference lines of sample values outside the current block. The multiple candidate reference lines include at least one non-adjacent reference line of sample values. The encoder or decoder determines (3120) that a sample value of the selected reference line is unavailable at a given position of the selected reference line. For example, the given position is outside a picture boundary, outside a slice boundary, not reconstructed yet, constrained to not be used for reference in intra-picture prediction, or otherwise unavailable for use in intra-picture prediction. The encoder or decoder identifies (3130) a padding direction of an intra-picture prediction mode. For example, the padding direction is a prediction direction for an angular prediction mode having the prediction direction. 
Or, for a non-directional prediction mode such as DC prediction mode or planar prediction mode, the padding direction is vertical or horizontal. The encoder or decoder determines (3140) a sample value, if any, available on another reference line on a projection, in the padding direction, through the given position of the selected reference line. For example, the encoder or decoder evaluates one or more positions along the projection in one or more candidate reference lines, in order of increasing distance away from the given position of the selected reference line, until an available sample value is determined in one of the candidate reference line(s) along the projection in the padding direction. The available sample value can be available, without padding, at an integer-sample offset horizontally and vertically in one of the candidate reference line(s). Or, the available sample value can be derived by interpolation from sample values available, without padding, at integer-sample offsets horizontally and vertically in one of the candidate reference line(s). In this case, determining the sample value can include interpolating the sample value at a fractional-sample offset horizontally and/or vertically in another reference line. The encoder or decoder checks (3150) if an available sample value has been determined in another reference line on the projection. If so, the encoder or decoder uses (3170) the determined available sample value in place of the unavailable sample value at the given position of the selected reference line. Otherwise (no sample value of another reference line is determined to be available), the encoder or decoder determines (3160) a sample value that is available at another position within the selected reference line (e.g., by horizontal or vertical repeat padding). Then, the encoder or decoder uses (3170) that determined available sample value in place of the unavailable sample value at the given position of the selected reference line. 
The encoder or decoder can repeat the technique (3100) for unavailable sample values at other positions. F. Intra-Picture Prediction with in-Loop-Filtered Reference Sample Values. Many conventional codec standards and formats use in-loop filtering on reconstructed sample values for use in subsequent inter-picture prediction. For example, the in-loop filtering can include deblocking and/or sample adaptive offset (“SAO”) filtering. Generally, in-loop filtering makes the filtered sample values more effective for subsequent inter-picture prediction that uses the filtered sample values. Conventionally, in-loop filtering is not performed on reference sample values before those reference sample values are used in intra-picture prediction. A typical deblocking filter or SAO filter relies on sample values on both sides of an edge that is to be filtered. For example, for the edge offset mode of an SAO filter, sample classification may be based on comparisons between current sample values and neighboring sample values. For intra-picture prediction of a current block, the current block has not yet been encoded and reconstructed, so the sample values of the current block are not available for use by the deblocking filter or SAO filter. As such, the reference sample values of an adjacent reference line (column or row) are not in-loop filtered prior to intra-picture prediction. This section describes examples of intra-picture prediction using in-loop-filtered reference sample values. In particular, reference sample values of non-adjacent reference lines can, in some cases, be in-loop filtered before those reference sample values are used for intra-picture prediction. In-loop filtering of the reference sample values can improve the effectiveness of subsequent intra-picture prediction that uses the reference sample values. In-loop filtering of reference sample values for intra-picture prediction can be used in combination with one or more other innovations described herein. 
For example, in-loop-filtered reference sample values in a non-adjacent reference line can be used in intra-picture prediction with residue compensation or weighted prediction, or can be further filtered or used in mode-dependent padding.

1. Examples of In-Loop-Filtered Reference Sample Values.

FIG. 32 shows an example (3200) of in-loop-filtered reference sample values used during intra-picture prediction. Reference sample values in multiple candidate reference lines (in FIG. 32, columns) are available for intra-picture prediction of sample values of a current block (3210). The reference sample values include reference sample values (3230) that may have been in-loop filtered. The possibly in-loop-filtered reference sample values (3230) have no dependencies on the current block (3210) or any other block that has not yet been decoded. For the possibly in-loop-filtered reference sample values (3230), in-loop filtering can be completed without waiting for reconstructed sample values of the current block (3210). In some example implementations, to improve the effectiveness of intra-picture prediction of the current block (3210), reference sample values are accessed for the intra-picture prediction of the current block (3210) after in-loop filtering is performed on those reference sample values if the in-loop filtering can be performed without using reconstructed sample values of the current block (3210). This condition is satisfied for non-adjacent reference lines farther away from the current block (3210), outside the range of in-loop filtering of the edge between the adjacent reference line and current block (3210). In FIG. 32, the reference sample values also include reference sample values (3220) that cannot be in-loop filtered prior to the intra-picture prediction for the current block (3210). For the right-most reference columns, closest to the current block (3210), in-loop filtering depends on reconstructed sample values of the current block (3210).
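The dependency condition above can be sketched minimally: assuming a hypothetical in-loop filter whose support spans a fixed number of columns on each side of the column being filtered, a reference column may be used in in-loop-filtered form only if that support cannot reach the not-yet-reconstructed current block. The radius value and indexing convention are assumptions for illustration.

```python
# Hypothetical eligibility check. FILTER_RADIUS is an assumed property
# of the in-loop filter (e.g., deblocking/SAO support), not a value
# from the text. Reference line index 0 is the column adjacent to the
# current block; larger indices are farther away.

FILTER_RADIUS = 2  # filter reads up to this many columns on each side

def can_use_in_loop_filtered(ref_line_index):
    # Filtering the column at distance d from the block edge touches
    # columns d - FILTER_RADIUS .. d + FILTER_RADIUS; it depends on the
    # current block only if that span crosses the block boundary.
    return ref_line_index - FILTER_RADIUS >= 0
```

With this assumed radius, the adjacent reference column and its neighbor would be used unfiltered, while farther candidate columns could be used after in-loop filtering.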
Similarly, for the bottom-most reference rows, which are not shown in FIG. 32, in-loop filtering depends on reconstructed sample values of the current block (3210).

2. Examples of Intra-Picture Prediction with In-Loop-Filtered Reference Sample Values During Encoding and Decoding.

FIG. 33 shows a generalized technique (3300) for encoding that includes intra-picture prediction that uses in-loop-filtered sample values of a non-adjacent reference line. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (3300). The encoder receives (3310) a picture, encodes (3320) the picture to produce encoded data, and outputs (3330) the encoded data as part of a bitstream. As part of the encoding (3320), the encoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. When it performs intra-picture prediction for the current block, the encoder selects the non-adjacent reference line of sample values for use in the intra-picture prediction for the current block. At least some of the sample values of the selected non-adjacent reference line have been modified by in-loop filtering prior to use in the intra-picture prediction for the current block. For the in-loop filtering, none of the modified sample values of the selected reference line is dependent on any of the sample values of the current block. The encoder checks (3340) whether to continue with the next picture. If so, the encoder receives (3310) and encodes (3320) the next picture. FIG. 34 shows a generalized technique (3400) for decoding that includes intra-picture prediction that uses in-loop-filtered sample values of a non-adjacent reference line. A decoder such as the video decoder (550) of FIG. 5, another video decoder, or an image decoder can perform the technique (3400).
The decoder receives (3410) encoded data as part of a bitstream, decodes (3420) the encoded data to reconstruct a picture, and outputs (3430) the reconstructed picture. As part of the decoding, the decoder performs intra-picture prediction for a current block of sample values in the picture. A non-adjacent reference line of sample values is available for the intra-picture prediction. When it performs intra-picture prediction for the current block, the decoder selects the non-adjacent reference line of sample values for use in the intra-picture prediction for the current block. At least some of the sample values of the selected non-adjacent reference line have been modified by in-loop filtering prior to use in the intra-picture prediction for the current block. For the in-loop filtering, none of the modified sample values of the selected reference line is dependent on any of the sample values of the current block. The decoder checks (3440) whether to continue with the next picture. If so, the decoder receives (3410) encoded data for the next picture and decodes (3420) the encoded data for the next picture. Instead of or in addition to selecting the non-adjacent reference line, the encoder or decoder can select another reference line of sample values for use in the intra-picture prediction for the current block. The selected other reference line can be between the selected non-adjacent reference line and the current block. For example, the selected other reference line is the adjacent reference line for the current block. In-loop filtering of the sample values of the selected other reference line may be dependent on at least some of the sample values of the current block. In this case, none of the sample values of the selected other reference line are modified by in-loop filtering prior to the intra-picture prediction for the current block using the selected other reference line.

G. Encoder-Side Decisions to Select Reference Lines.
This section describes examples of encoder-side decision-making approaches for selecting reference lines for intra-picture prediction. With these approaches, an encoder can identify appropriate reference lines to use for intra-picture prediction in a computationally efficient manner. In general, the encoder receives a picture, encodes the picture to produce encoded data, and outputs the encoded data as part of a bitstream. As part of the encoding, the encoder performs intra-picture prediction for a current block of sample values in the picture. To select one or more reference lines for intra-picture prediction, the encoder follows one of the decision-making approaches described herein. The approaches to making encoder-side decisions can be used in combination with one or more other innovations described herein when implemented during encoding.

1. First Approach to Making Encoder-Side Decisions.

FIG. 35 shows a first example technique (3500) for selecting, during encoding, which reference lines to use for intra-picture prediction. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (3500). In the first approach, during encoding of a picture, the encoder selects a combination of one or more reference lines and one or more intra-picture prediction modes for a unit (e.g., a coding unit (“CU”) or prediction unit (“PU”)) of the picture. For example, for a CU, the encoder selects one or more reference line indices or a table index indicating which reference line(s) to use in intra-picture prediction for the CU, with one or more intra-picture prediction modes also signaled for the CU or its one or more PUs. In this case, the PU(s) and transform unit(s) (“TU(s)”) of the CU all use the same reference line index, indices, or table index.
Or, as another example, for a PU, the encoder selects one or more reference line indices or a table index indicating which reference line(s) to use in intra-picture prediction for the PU, with one or more intra-picture prediction modes also signaled for the PU. In this case, the TU(s) associated with the PU all use the same reference line index, indices or table index. To start, the encoder selects (3510) a combination of one or more reference lines (“reference line combination”). For example, the selected reference line combination is a single candidate reference line. Or, the selected reference line combination is multiple candidate reference lines. With a rough mode decision (“RMD”) process, the encoder selects (3520) one or more intra-picture prediction modes for the selected reference line combination. In general, the RMD process provides a coarse-grained decision about intra-picture prediction modes for the reference line combination under evaluation. The RMD process typically uses a cost measure that is not especially accurate but is simple to compute (e.g., sum of absolute transform differences (“SATD”)). Alternatively, the encoder uses another type of cost measure in the RMD process. The RMD process may simplify evaluation by skipping evaluation of some intra-picture prediction modes. For example, the encoder evaluates only a subset of possible intra-picture prediction modes (e.g., every other mode, every third mode, or every fourth mode) in the RMD process. Next, with a mode refinement process, the encoder selects (3530) one or more final intra-picture prediction modes for the selected reference line combination. In doing so, the encoder uses the intra-picture prediction mode(s) from the RMD process. In general, the mode refinement process provides a fine-grained decision about intra-picture prediction modes for the reference line combination under evaluation. 
The mode refinement process typically uses a cost measure that is accurate but complex to compute (e.g., rate-distortion cost). Alternatively, the encoder uses another type of cost measure in the mode refinement process. The mode refinement process may skip evaluation of some intra-picture prediction modes (e.g., if they are far from any of the mode(s) from the RMD process) but typically evaluates modes near the mode(s) from the RMD process. In some example implementations that permit variable-size TUs, TU size is kept fixed during the RMD process and mode refinement process. Next, when TU size is variable, the encoder selects (3540) one or more TU sizes for the unit. In this stage, the encoder uses the final intra-picture prediction mode(s) selected in the mode refinement process for the selected reference line combination. Alternatively, the encoder selects TU size in a separate process. The encoder checks (3550) whether to continue the decision-making process with another reference line combination. If so, the encoder selects (3510) another reference line combination for evaluation. In this way, the encoder iteratively evaluates different reference line combinations for the unit. After evaluating all of the reference line combinations that it will evaluate, the encoder selects (3560) a best reference line combination among the multiple reference line combinations that were evaluated. For example, the encoder selects the reference line combination with lowest rate-distortion cost (considering the best intra-picture prediction mode(s) and TU sizes found when evaluating the respective reference line combinations).

2. Second Approach to Making Encoder-Side Decisions.

FIG. 36 shows a second example technique (3600) for selecting, during encoding, which reference lines to use for intra-picture prediction. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (3600).
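The rough-then-refine loop of the first approach (FIG. 35) might be sketched as follows. Here satd_cost and rd_cost are hypothetical callables standing in for the encoder's actual cost computations, evaluating every other mode in the rough stage follows the example in the text, and keeping the three best rough modes for refinement is an illustrative choice, not a value from the text.

```python
# Sketch of the first decision-making approach: iterate over reference
# line combinations; for each, do a cheap rough mode decision (RMD),
# then refine with an accurate cost, and keep the overall best.

def select_reference_lines(combinations, all_modes, satd_cost, rd_cost):
    best = None
    for combo in combinations:                      # step (3510)
        # Rough mode decision: cheap SATD-style cost over a subset of
        # modes (e.g., every other mode), keeping a few survivors.
        rough = sorted(all_modes[::2],
                       key=lambda m: satd_cost(combo, m))[:3]
        # Mode refinement: accurate but expensive cost, evaluated only
        # for modes surviving the rough decision.
        mode = min(rough, key=lambda m: rd_cost(combo, m))
        cost = rd_cost(combo, mode)
        if best is None or cost < best[0]:
            best = (cost, combo, mode)              # step (3560)
    return best[1], best[2]   # best reference line combination and mode
```

The second approach differs mainly in that reference lines are folded into the rough decision itself rather than iterated in an outer loop.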
Like the first approach, in the second approach, during encoding of a picture, the encoder selects a combination of one or more reference lines and one or more intra-picture prediction modes for a unit (e.g., CU or PU) of the picture. Unlike the first approach, different reference line combinations are not iteratively evaluated. With a rough mode decision (“RMD”) process, the encoder selects (3610) one or more combinations of intra-picture prediction modes and reference lines. Each of the one or more combinations from the RMD process includes a different pair of (a) one or more intra-picture prediction modes and (b) one or more reference lines. In general, the RMD process provides a coarse-grained decision about the combination under evaluation. The RMD process typically uses a cost measure that is not especially accurate but is simple to compute (e.g., SATD). Alternatively, the encoder uses another type of cost measure in the RMD process. The RMD process may simplify evaluation by skipping evaluation of some intra-picture prediction modes and/or some reference lines. For example, the encoder evaluates only a subset of possible intra-picture prediction modes (e.g., every other mode, every third mode, or every fourth mode) in the RMD process. Or, as another example, the encoder evaluates only a subset of possible reference lines (e.g., every other reference line) in the RMD process. Next, with a mode refinement process, the encoder selects (3620) a final combination. In doing so, the encoder uses the combination(s) from the RMD process. In general, the mode refinement process provides a fine-grained decision about the combination(s) under evaluation. The mode refinement process typically uses a cost measure that is accurate but complex to compute (e.g., rate-distortion cost). Alternatively, the encoder uses another type of cost measure in the mode refinement process. For example, the mode refinement process selects a combination with lowest rate-distortion cost.
The mode refinement process may skip evaluation of some intra-picture prediction modes and/or some reference lines (e.g., if they are far from any of the combination(s) from the RMD process), but typically evaluates modes and reference lines near the combination(s) from the RMD process. In some example implementations that permit variable-size TUs, TU size is kept fixed during the RMD process and mode refinement process. Next, when TU size is variable, the encoder selects (3630) one or more TU sizes for the unit. In this stage, the encoder uses the final combination selected in the mode refinement process. Alternatively, the encoder selects TU size in a separate process. In some example implementations, the second approach can be used for a CU whose partition size is 2N×2N (single PU for the CU). For a CU with another partition size, the encoder uses the first approach.

3. Third Approach to Making Encoder-Side Decisions.

FIG. 37 shows a third example technique (3700) for selecting, during encoding, which reference lines to use for intra-picture prediction. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (3700). In the third approach, during encoding of a picture, the encoder selects a combination of one or more reference lines for a TU. For example, for a TU, the encoder selects one or more reference line indices or a table index indicating which reference lines to use in intra-picture prediction for the TU. (One or more intra-picture prediction modes are signaled for a PU associated with the TU.) To start, the encoder selects (3710) a combination of one or more reference lines (“reference line combination”). For example, the selected reference line combination is a single candidate reference line. Or, the selected reference line combination is multiple candidate reference lines.
The encoder evaluates (3720) the selected reference line combination in conjunction with one or more intra-picture prediction modes that were previously selected (e.g., for a PU). For example, the encoder calculates a cost measure such as a rate-distortion cost when the selected reference line combination is used. Alternatively, the encoder uses another type of cost measure. The encoder checks (3730) whether to continue the decision-making process with another reference line combination. If so, the encoder selects (3710) another reference line combination for evaluation. In this way, the encoder iteratively evaluates different reference line combinations for the TU. After evaluating all of the reference line combinations that it will evaluate, the encoder selects (3740) a best reference line combination among the multiple reference line combinations that were evaluated. For example, the encoder selects the reference line combination with lowest rate-distortion cost. Or, as another example, the encoder selects the reference line combination with the lowest SATD. Or, the encoder uses another cost measure to select the best reference line combination.

H. Post-Filtering of Predicted Sample Values.

In some prior codec standards, an encoder and decoder can filter the predicted sample values of a current block using reference sample values of an adjacent reference line. Such post-filtering can be performed for certain intra-picture prediction modes (i.e., DC prediction mode, horizontal prediction mode, or vertical prediction mode). This section describes examples of post-filtering of predicted sample values after intra-picture prediction. In particular, an encoder or decoder filters certain predicted sample values of a current block, using at least some sample values that are outside the current block and outside an adjacent reference line.
By using reference sample values outside the adjacent reference line, in some cases, the filtering yields predicted sample values that are closer to the original sample values for the current block. Post-filtering of predicted sample values can be used in combination with one or more other innovations described herein. For example, post-filtering of predicted sample values as described in this section can follow weighted prediction, residue compensation, filtering of reference sample values, mode-dependent padding, and/or use of in-loop-filtered reference sample values. The post-filtering described in this section can be used when intra-picture prediction uses a non-adjacent reference line. The post-filtering described in this section can also be used when intra-picture prediction uses an adjacent reference line (e.g., in an encoder or decoder in which intra-picture prediction can only use reference sample values of an adjacent reference line).

1. Examples of Post-Filtering of Predicted Sample Values.

FIG. 38 shows an example (3800) of post-filtering of predicted sample values of a current block (3810) after horizontal intra-picture prediction. In the example (3800) of FIG. 38, the first K rows of the current block (3810) are post-filtered. The value of K depends on implementation. For example, K is 1, so the predicted sample values of the top row are post-filtered. Or, K is 2, and the predicted sample values of the top two rows are post-filtered. More generally, the value of K can depend on block size, with K increasing for larger blocks (e.g., K=1 or 2 for a 4×4 block; K=2 or 3 for an 8×8 block). The value of K can be pre-defined, in which case it is not signaled as part of encoded data in the bitstream. Alternatively, the value of K can change. In this case, the value of K can be signaled as part of encoded data in the bitstream (e.g., in an SPS, PPS, slice header or associated structure). FIG. 38 shows post-filtering of a predicted sample value predA at position A of the current block (3810), using reference sample values refB, refC, refD, refE, and refF at positions B, C, D, E, and F, respectively. Positions B and F are in the same column as position A. Positions B, C and D are in the adjacent reference row, while positions E and F are in a non-adjacent reference row. Positions C and E are in the adjacent reference column, while position D is in a non-adjacent reference column. The values M and N control the distances between the positions of reference sample values used. The values of M and N depend on implementation. For example, M=N=1, as shown in FIG. 38. Alternatively, M and/or N have higher values. For example, the values of M and N can depend on block size, with M and/or N increasing for larger blocks. Instead of indicating distances between the positions of reference sample values used, M and N can indicate a number of gradients considered in post-filtering operations, with more gradients considered for higher values of M and N. The values of M and N can be pre-defined, in which case they are not signaled as part of encoded data in the bitstream. Alternatively, the values of M and N can change. In this case, the values of M and N can be signaled as part of encoded data in the bitstream (e.g., in an SPS, PPS, slice header or associated structure). The predicted sample value at position A of the current block (3810) is calculated as: predA = predA + (refB−refC)/x + (refB−refD)/y + (refF−refE)/z, where x, y, and z are weights that depend on implementation. For example, x, y, and z are 2, 3, and 4. Alternatively, x, y, and z have other values. The values of x, y, and z can change depending on the block size of the current block (3810), e.g., as K increases. The weights x, y, and z can also change depending on position of sample value within the current block.
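As a numeric illustration of the calculation above, with M=N=1 and the example weights x, y, z = 2, 3, and 4 from the text: floating-point division is used for clarity; the text does not specify rounding, and a real codec would likely use integer arithmetic.

```python
# Sketch of the gradient-based post-filter after horizontal prediction:
# predA = predA + (refB-refC)/x + (refB-refD)/y + (refF-refE)/z.

def post_filter_horizontal(pred_a, ref_b, ref_c, ref_d, ref_e, ref_f,
                           x=2, y=3, z=4):
    return (pred_a
            + (ref_b - ref_c) / x   # gradient within the adjacent reference row
            + (ref_b - ref_d) / y   # ref_d lies in a non-adjacent reference column
            + (ref_f - ref_e) / z)  # gradient within a non-adjacent reference row
```

For instance, with predA=100, refB=110, refC=100, refD=104, refE=98, refF=106, the adjusted value is 100 + 5 + 2 + 2 = 109.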
For example, the weights are smaller (and hence weighting is more aggressive, since weights are only in the denominators) for the first row of post-filtered positions, and successively larger for later rows of post-filtered positions. More generally, the weights x, y, and z are smaller (and hence the weighting is more aggressive) if the sample values used to perform the post-filtering are closer to the positions being filtered in the current block (3810). The values of x, y, and z can be pre-defined, in which case they are not signaled as part of encoded data in the bitstream. Alternatively, the values of x, y, and z can change. In this case, the values of x, y, and z can be signaled as part of encoded data in the bitstream (e.g., in an SPS, PPS, slice header or associated structure). The differences refB−refC, refB−refD, and refF−refE indicate gradients between reference sample values outside the current block (3810). The gradients are parallel to the prediction direction of intra-picture prediction. Two of the gradients use sample values at positions (D, E, F) not adjacent to the current block (3810). Thus, even if intra-picture prediction uses only reference sample values of an adjacent reference line, sample values of one or more non-adjacent reference lines can be considered for the post-filtering operations. Post-filtering of predicted sample values of a current block can also be performed after vertical intra-picture prediction. In this case, the first K columns of the current block are post-filtered, with K being defined as described with reference to FIG. 38. The gradients shown in FIG. 38 are transposed if the prediction direction is vertical, with corresponding changes to positions B, C, D, E, and F of reference sample values. Values for M, N, x, y, and z are defined as described with reference to FIG. 38.

FIG. 39 shows an example (3900) of post-filtering of predicted sample values of a current block (3910) after intra-picture prediction with a non-directional prediction mode such as DC prediction mode or planar prediction mode. In the example (3900) of FIG. 39, the first K rows of the current block (3910) are post-filtered, and the first K columns of the current block (3910) are post-filtered. The value of K depends on implementation. For example, K is 1, so the predicted sample values of the top row and left column are post-filtered. Or, K is 2, and the predicted sample values of the top two rows and left two columns are post-filtered. More generally, the value of K can depend on block size, with K increasing for larger blocks (e.g., K=1 or 2 for a 4×4 block; K=2 or 3 for an 8×8 block). The value of K can be pre-defined, in which case it is not signaled as part of encoded data in the bitstream. Alternatively, the value of K can change. In this case, the value of K can be signaled as part of encoded data in the bitstream (e.g., in an SPS, PPS, slice header or associated structure). FIG. 39 shows post-filtering of a predicted sample value predA at position A of the current block (3910), using reference sample values refB and refC at positions B and C, respectively. Positions B and C are in the same column as position A. Position B is in the adjacent reference row, while position C is in a non-adjacent reference row. The value N controls the distance between the positions of reference sample values used in post-filtering. The value of N depends on implementation. For example, N=1, as shown in FIG. 39. Alternatively, N has a higher value. For example, the value of N can depend on block size, with N increasing for larger blocks.
Instead of indicating distances between the positions of reference sample values used in post-filtering, the value of N can indicate a number of reference sample values considered in post-filtering operations, with more reference sample values considered for higher values of N. The value of N can be pre-defined, in which case it is not signaled as part of encoded data in the bitstream. Alternatively, the value of N can change. In this case, the value of N can be signaled as part of encoded data in the bitstream (e.g., in an SPS, PPS, slice header or associated structure). The predicted sample value at position A of the current block (3910) is calculated as: predA = (x×predA + y×refB + z×refC)/(x+y+z), where x, y, and z are weights that depend on implementation. For example, x, y, and z are 5, 2, and 1. Alternatively, x, y, and z have other values. In general, the weights are larger (and hence weighting is more aggressive, since weights are in the numerator) for the position being filtered (position A) and for the reference sample value closer to the current block (3910). The values of x, y, and z can change depending on the block size of the current block (3910), e.g., as K increases. The weights x, y, and z can also change depending on position of sample value within the current block. For example, the weights are larger (and hence weighting is more aggressive) for the first row of post-filtered positions, and successively smaller for later rows of post-filtered positions. The values of x, y, and z can be pre-defined, in which case they are not signaled as part of encoded data in the bitstream. Alternatively, the values of x, y, and z can change. In this case, the values of x, y, and z can be signaled as part of encoded data in the bitstream (e.g., in an SPS, PPS, slice header or associated structure). For post-filtering of predicted sample values in one of the first K columns of the current block (3910), the positions of reference sample values shown in FIG. 39 are transposed. For a position of the current block (3910) that is in one of the first K columns of the current block (3910) and also in one of the first K rows of the current block (3910), either set of reference sample values (to the left or above) can be used for post-filtering, depending on implementation.

2. Examples of Post-Filtering of Predicted Sample Values During Encoding or Decoding.

FIG. 40 shows a generalized technique (4000) for post-filtering of predicted sample values during encoding or decoding for a current block. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (4000), for example, during the encoding (1420) described with reference to FIG. 14. Or, a decoder such as the video decoder (550) of FIG. 5, another video decoder, or an image decoder can perform the technique (4000), for example, during the decoding (1520) described with reference to FIG. 15. For the post-filtering shown in FIG. 40, the encoder or decoder performs intra-picture prediction for a current block of sample values in the picture. An adjacent reference line of sample values is available for the intra-picture prediction. The encoder or decoder selects (4010) one or more reference lines of sample values outside the current block and predicts (4020) sample values of the current block using at least some sample values of the selected reference line(s). The encoder or decoder filters (4030) at least some of the predicted sample values of the current block. The filtering uses at least some sample values outside the current block and outside the adjacent reference line. The filtering can include various operations. For example, for a given predicted sample value of the predicted sample values of the current block, the filtering includes computing one or more gradients between sample values outside the current block, and adjusting the given predicted sample value based on the gradient(s).
The sample values that are used to compute the gradient(s) can depend on block size of the current block. The gradient(s) can be parallel to the prediction direction of the intra-picture prediction. A weight can be applied to each of the gradient(s), with the weighted gradient(s) being used in the adjusting. The weight(s) applied to the gradient(s) can depend on block size of the current block as well as position of the given predicted sample value in the current block. In some example implementations, such post-filtering operations are performed if the prediction direction is horizontal or vertical (see, e.g., FIG. 38) but not for other prediction directions. As another example, for a given predicted sample value of the predicted sample values of the current block, the filtering includes adjusting the given predicted sample value based on one or more sample values outside the current block and outside the adjacent reference line. The sample value(s) that are used to adjust the given predicted sample value can depend on block size of the current block. A weight can be applied to each of the sample value(s) from outside the current block, with the weighted sample value(s) being used in the adjusting. The weight(s) applied to the sample value(s) can depend on block size of the current block as well as position of the given predicted sample value in the current block. In some example implementations, such post-filtering operations are performed if the intra-picture prediction mode is a non-directional prediction mode such as DC prediction mode or planar prediction mode (see, e.g., FIG. 39). Returning to FIG. 40, the encoder or decoder can repeat the technique (4000) for other blocks of the picture.

I. Adaptive, Direction-Dependent Filtering of Reference Sample Values.

This section describes examples of adaptive, direction-dependent filtering of reference sample values used for intra-picture prediction.
In some cases, such filtering yields reference sample values that provide more effective intra-picture prediction. Adaptive, direction-dependent filtering of reference sample values can be used in combination with one or more other innovations described herein. For example, reference sample values can be filtered as described in this section before weighted prediction and/or residue compensation. The filtering described in this section can be used when intra-picture prediction uses a non-adjacent reference line. The filtering described in this section can also be used when intra-picture prediction uses an adjacent reference line (e.g., in an encoder or decoder in which intra-picture prediction can only use reference sample values of an adjacent reference line).

1. Examples of Adaptive, Direction-Dependent Filtering.

FIG. 41 shows an example (4100) of adaptive, direction-dependent filtering of reference sample values. Sample values of a current block (4110) are predicted using an intra-picture prediction mode having a prediction direction. The intra-picture prediction uses reference sample values in an adjacent reference column (reference line 0), including position D. In FIG. 41, the reference sample value at position D is filtered. The filtering is adaptive, depending on the differences between reference sample values along the prediction direction. In particular, the filtering of the reference sample value at position D depends on the pattern of reference sample values at positions A, B, C, and D. For example, if there is no significant difference among reference sample values at positions A, B, and C, the reference sample value at position D is filtered by weighting the reference sample values at positions A, B, and C. The weights depend on implementation. For example, the weights applied at positions A, B, C, and D are 1, 2, 4, and 8, respectively. Alternatively, the weights have other values.
In general, the weight is highest for a position to be filtered (position D in FIG. 41), and weights decrease for positions further away from the position to be filtered. As another example, if the reference sample values at positions A, B, C, and D vary monotonically (e.g., increasing along the prediction direction, or decreasing along the prediction direction), the reference value at position D is filtered as the average of the reference sample values at positions A, B, C, and D. Again, the weights depend on implementation. In general, when sample values vary monotonically, the weight for the position to be filtered is increased. Alternatively, the filtering of the reference sample value at position D follows another rule. In FIG. 41, the reference sample values along the prediction direction are at integer-sample offsets horizontally and vertically. FIG. 42 shows another example (4200) of adaptive, direction-dependent filtering of reference sample values, in which some of the values along the prediction direction are at fractional-sample offsets. In FIG. 42, sample values of a current block (4210) are predicted using an intra-picture prediction mode having a prediction direction. The intra-picture prediction uses reference sample values in an adjacent reference column (reference line 0) that includes position D. In FIG. 42, the reference sample value at position D is filtered. The filtering is adaptive, depending on the differences between reference sample values along the prediction direction. Some of the reference sample values along the prediction direction are at positions with fractional-sample offsets. A value, along the prediction direction, between positions A′ and A″ is calculated by interpolating (e.g., with bilinear interpolation) between the reference sample values at positions A′ and A″. Another value is similarly interpolated between the reference sample values at positions C′ and C″. 
The filtering of the reference sample value at position D depends on the pattern of values at the position between A′ and A″, position B, the position between C′ and C″, and position D. With respect to the pattern of values along the prediction direction, the filtering of the reference sample value at position D follows the same rules as the example (4100) of FIG. 41. Adaptive, direction-dependent filtering can be performed for other intra-picture prediction modes having different prediction directions. The reference sample values filtered using adaptive, direction-dependent filtering can be part of an adjacent reference line or non-adjacent reference line. 2. Examples of Adaptive, Direction-Dependent Filtering During Encoding and Decoding. FIG. 43 shows a generalized technique (4300) for adaptive, direction-dependent filtering of reference sample values during encoding or decoding for a current block. An encoder such as the video encoder (340) of FIG. 3, another video encoder, or an image encoder can perform the technique (4300), for example, during the encoding (1420) described with reference to FIG. 14. Or, a decoder such as the video decoder (550) of FIG. 5, another video decoder, or an image decoder can perform the technique (4300), for example, during the decoding (1520) described with reference to FIG. 15. After the filtering shown in FIG. 43, the encoder or decoder performs intra-picture prediction for a current block of sample values in the picture. The encoder or decoder selects (4310) a reference line of sample values outside the current block. The encoder or decoder filters (4320) the selected reference line of sample values. The filtering adapts to differences in a set of sample values along a prediction direction for the intra-picture prediction. At least some of the set of sample values is outside the current block and outside an adjacent reference line. 
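The adaptive rule illustrated in FIGS. 41 and 42 can be sketched in Python. The weights 1, 2, 4, 8 for the "no significant difference" case and the plain average for the monotonic case come from the example above; the flatness threshold, the check order, and the rounding offsets are illustrative assumptions.

```python
def filter_reference_sample(a, b, c, d, threshold=8):
    """Adaptive, direction-dependent filtering of the reference sample at
    position D, using samples A, B, C along the prediction direction
    (a is farthest from d). Thresholds and rounding are assumptions."""
    if max(a, b, c) - min(a, b, c) < threshold:
        # No significant difference among A, B, C: weight positions
        # A, B, C, D by 1, 2, 4, 8 (normalized by the weight sum, 15).
        return (a + 2 * b + 4 * c + 8 * d + 7) // 15
    if a <= b <= c <= d or a >= b >= c >= d:
        # Values vary monotonically along the prediction direction:
        # filter as the average of the four samples.
        return (a + b + c + d + 2) // 4
    # Otherwise leave the reference sample unfiltered.
    return d
```

For fractional-sample positions as in FIG. 42, `a` and `c` would first be produced by (e.g., bilinear) interpolation between the neighboring reference samples before calling this function.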
The set of sample values along the prediction direction can include reference sample values at integer-sample offsets horizontally and vertically (see, e.g., FIG. 41). The set of sample values along the prediction direction can also include interpolated values at fractional-sample offsets horizontally or vertically (see, e.g., FIG. 42). Returning to FIG. 43, the encoder or decoder can repeat the technique (4300) for other blocks of the picture. VI. Features. Different embodiments may include one or more of the inventive features shown in the following table of features. # Feature A. Residue Compensation in Intra-Picture Prediction A1 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction for the current block includes performing residue compensation; and outputting the encoded data as part of a bitstream. A2 In a computer system that implements a video decoder or image decoder, a method comprising: receiving encoded data as part of a bitstream; decoding the encoded data to reconstruct a picture, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction for the current block includes performing residue compensation; and outputting the reconstructed picture.
A3 The method of A1 or A2, wherein the residue compensation includes, for a predicted sample value at a given position in the current block: calculating, at a given position in an offset region outside the current block, a residual value based on a difference between a reconstructed sample value at the given position in the offset region and a predicted sample value at the given position in the offset region; and using the residual value to adjust the predicted sample value at the given position in the current block. A4 The method of A3, wherein the residue compensation further includes, for the predicted sample value at the given position in the current block: applying a weight to the residual value to produce a weighted residual value, wherein the weighted residual value is used to adjust the predicted sample value at the given position in the current block. A5 The method of A4, wherein the weight depends on the given position in the current block. A6 The method of A5, wherein the weight further depends on one or more of: number of lines of sample values of the current block that are processed with the residue compensation; and prediction direction of the intra-picture prediction for the current block. A7 The method of A3, wherein the given position in the offset region is at a fractional-sample offset horizontally and/or vertically, and wherein the predicted sample value at the given position in the offset region and the reconstructed sample value at the given position in the offset region are interpolated values.
A8 The method of A3, wherein the intra-picture prediction for the predicted sample value at the given position in the current block follows a prediction direction, wherein the residue compensation follows a residue compensation direction from the given position in the offset region to the given position in the current block, and wherein the residue compensation direction is orthogonal to the prediction direction. A9 The method of A3, wherein the intra-picture prediction for the predicted sample value at the given position in the current block follows a prediction direction, wherein the residue compensation follows a residue compensation direction from the given position in the offset region to the given position in the current block, and wherein the residue compensation direction is opposite the prediction direction. A10 The method of A3, wherein the intra-picture prediction for the predicted sample value at the given position in the current block follows a prediction direction, wherein the residue compensation follows a residue compensation direction from the given position in the offset region to the given position in the current block, and wherein the residue compensation direction is aligned with the prediction direction. 
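The core of features A3-A6 can be sketched for a single row of predicted samples. The one-dimensional layout, the per-position weights, and the division by 16 as normalization are illustrative assumptions, not the patent's specification.

```python
def residue_compensate(pred_row, recon_offset, pred_offset, weights):
    """Residue compensation sketch: for each position, the residual value
    in the offset region (reconstructed sample minus predicted sample
    there) is weighted and used to adjust the corresponding predicted
    sample value in the current block."""
    out = []
    for x, p in enumerate(pred_row):
        residual = recon_offset[x] - pred_offset[x]    # residual in the offset region
        out.append(p + (weights[x] * residual) // 16)  # position-dependent weight (assumed /16)
    return out
```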
A11 The method of A1 or A2, wherein the residue compensation includes, for a predicted sample value at a given position in the current block: calculating, at a first position in an offset region outside the current block, a first residual value based on a difference between a reconstructed sample value at the first position in the offset region and a predicted sample value at the first position in the offset region; calculating, at a second position in the offset region outside the current block, a second residual value based on a difference between a reconstructed sample value at the second position in the offset region and a predicted sample value at the second position in the offset region; and using the first residual value and the second residual value to adjust the predicted sample value at the given position in the current block. A12 One or more computer-readable media storing computer-executable instructions for causing a computer system, when programmed thereby, to perform operations of the method of any one of A1 to A11. A13 A computer system configured to perform operations of the method of any one of A1 and A3 to A11, the computer system comprising: a picture buffer configured to store the picture; a video encoder or image encoder configured to perform the encoding; and a coded data buffer configured to store the encoded data for output. A14 A computer system configured to perform operations of the method of any one of A2 to A11, the computer system comprising: a coded data buffer configured to store the encoded data; a video decoder or image decoder configured to perform the decoding; and a picture buffer configured to store the reconstructed picture for output. B.
Filtering of Reference Lines in Intra-Picture Prediction B1 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction includes: selecting one of multiple candidate reference lines of sample values outside the current block, the multiple candidate reference lines including the non-adjacent reference line of sample values; and filtering the selected reference line of sample values; and outputting the encoded data as part of a bitstream. B2 In a computer system that implements a video decoder or image decoder, a method comprising: receiving encoded data as part of a bitstream; decoding the encoded data to reconstruct a picture, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction includes: selecting one of multiple candidate reference lines of sample values outside the current block, the multiple candidate reference lines including the non-adjacent reference line of sample values; and filtering the selected reference line of sample values; and outputting the reconstructed picture. B3 The method of B1 or B2, wherein the filtering uses a filter having a one-dimensional kernel that covers multiple sample values within the selected reference line. B4 The method of B3, wherein the one-dimensional kernel has three taps or five taps.
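The three-tap case of feature B4, with the symmetric low-pass kernel and normalization factor of 4 described for claim 4 (i.e., taps [1, 2, 1] / 4), can be sketched as follows; leaving the end samples unfiltered and the +2 rounding offset are assumptions.

```python
def smooth_reference_line(line):
    """Apply a 3-tap symmetric low-pass filter [1, 2, 1] / 4 along a
    reference line of sample values; end samples are left unfiltered."""
    out = list(line)
    for i in range(1, len(line) - 1):
        # Normalization factor of 4 implemented as a right shift by 2,
        # with +2 for rounding to nearest.
        out[i] = (line[i - 1] + 2 * line[i] + line[i + 1] + 2) >> 2
    return out
```

Note that a linearly varying reference line passes through unchanged, which is the desired behavior of a smoothing filter on a ramp.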
B5 The method of B1 or B2, wherein the filtering uses a filter having a two-dimensional kernel that covers multiple sample values of the selected reference line and one or more sample values of each of one or more reference lines next to the selected reference line. B6 The method of B5, wherein the two-dimensional kernel has five taps in a cross pattern. B7 The method of B1 or B2, wherein the bitstream includes a filtering flag that indicates whether or not to perform the filtering, and wherein the performing intra-picture prediction further includes: deciding to perform the filtering based at least in part on the filtering flag. B8 The method of B7, wherein the filtering flag is in the bitstream at syntax level of a sequence, picture, slice, coding unit, coding block, prediction unit, prediction block, transform unit, or transform block. B9 The method of B7, wherein the filtering flag is jointly signaled with a reference line index. B10 The method of B1 or B2, wherein the performing intra-picture prediction further includes: deciding to perform the filtering based at least in part on one or more factors, the one or more factors including one or more of intra-picture prediction mode for the current block and block size of the current block. B11 One or more computer-readable media storing computer-executable instructions for causing a computer system, when programmed thereby, to perform operations of the method of any one of B1 to B10. B12 A computer system configured to perform operations of the method of any one of B1 and B3 to B10, the computer system comprising: a picture buffer configured to store the picture; a video encoder or image encoder configured to perform the encoding; and a coded data buffer configured to store the encoded data for output.
B13 A computer system configured to perform operations of the method of any one of B2 to B10, the computer system comprising: a coded data buffer configured to store the encoded data; a video decoder or image decoder configured to perform the decoding; and a picture buffer configured to store the reconstructed picture for output. C. Weighted Prediction in Intra-Picture Prediction C1 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction includes performing weighted prediction of the sample values of the current block using multiple reference lines of sample values, the multiple reference lines including the non-adjacent reference line of sample values; and outputting the encoded data as part of a bitstream. C2 In a computer system that implements a video decoder or image decoder, a method comprising: receiving encoded data as part of a bitstream; decoding the encoded data to reconstruct a picture, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction includes performing weighted prediction of the sample values of the current block using multiple reference lines of sample values, the multiple reference lines including the non-adjacent reference line of sample values; and outputting the reconstructed picture.
C3 The method of C1 or C2, wherein the weighted prediction includes, for a given position in the current block: for each of the multiple reference lines: generating an intermediate predicted sample value at the given position using at least one sample value of that reference line; and applying a weight to the intermediate predicted sample value at the given position to produce a weighted sample value at the given position; and combining the weighted sample values at the given position to produce a final predicted sample value at the given position in the current block. C4 The method of C3, wherein the generating and the applying are performed on a reference line-by-reference line basis for all sample values of the current block. C5 The method of C1 or C2, wherein each of the multiple reference lines has a weight associated with that reference line. C6 The method of C5, wherein the multiple reference lines further include an adjacent reference line of sample values, and wherein the weights for the non-adjacent reference line and adjacent reference line are pre-defined and not signaled in the bitstream. C7 The method of C5, wherein the weights for the multiple reference lines are signaled in the bitstream. C8 The method of C1 or C2, wherein multiple candidate reference lines of sample values include the multiple reference lines used for the weighted prediction, and wherein a weight table includes a non-zero weight for each of the multiple reference lines used for the intra-picture prediction and includes a zero weight for any other reference line of the multiple candidate reference lines. C9 The method of C8, wherein the weight table is signaled as part of the encoded data. C10 The method of C8, wherein an identifier of the weight table is signaled as part of the encoded data. C11 The method of C1 or C2, wherein weights for the multiple reference lines are defined at block level for the current block.
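The combining step of feature C3 can be sketched with fixed-point weights. The 6-bit normalization shift and the convention that the weights sum to 1 << shift are assumptions for illustration.

```python
def combine_weighted_predictions(intermediate, weights, shift=6):
    """Weighted prediction sketch (feature C3): apply a weight to each
    reference line's intermediate predicted sample value, then combine
    the weighted sample values into the final predicted sample value.
    Weights are fixed-point values assumed to sum to 1 << shift."""
    acc = sum(w * p for p, w in zip(intermediate, weights))
    return (acc + (1 << (shift - 1))) >> shift  # round to nearest, then normalize
```

With a zero weight for a candidate reference line (as in feature C8's weight table), that line simply contributes nothing to the sum.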
C12 The method of C1 or C2, wherein weights for the multiple reference lines are defined for individual sample values of the current block. C13 The method of C1 or C2, wherein, for the weighted prediction of the sample values of the current block, an intra-picture prediction mode is specified for the current block. C14 The method of C1 or C2, wherein, for the weighted prediction of the sample values of the current block, an intra-picture prediction mode is specified for each of the multiple reference lines. C15 One or more computer-readable media storing computer-executable instructions for causing a computer system, when programmed thereby, to perform operations of the method of any one of C1 to C14. C16 A computer system configured to perform operations of the method of any one of C1 and C3 to C14, the computer system comprising: a picture buffer configured to store the picture; a video encoder or image encoder configured to perform the encoding; and a coded data buffer configured to store the encoded data for output. C17 A computer system configured to perform operations of the method of any one of C2 to C14, the computer system comprising: a coded data buffer configured to store the encoded data; a video decoder or image decoder configured to perform the decoding; and a picture buffer configured to store the reconstructed picture for output. D. Mode-Dependent Padding in Intra-Picture Prediction D1 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, wherein the performing intra-picture prediction includes performing mode-dependent padding to replace one or more unavailable reference sample values; and outputting the encoded data as part of a bitstream.
D2 In a computer system that implements a video decoder or image decoder, a method comprising: receiving encoded data as part of a bitstream; decoding the encoded data to reconstruct a picture, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction includes performing mode-dependent padding to replace one or more unavailable reference sample values; and outputting the reconstructed picture. D3 The method of D1 or D2, wherein the mode-dependent padding includes: selecting one of multiple candidate reference lines of sample values outside the current block, the multiple candidate reference lines including the non-adjacent reference line of sample values; determining that a sample value of the selected reference line is unavailable at a given position of the selected reference line; identifying a padding direction of an intra-picture prediction mode; determining a sample value available on another reference line on a projection through the given position of the selected reference line in the padding direction; and using the determined sample value as the unavailable sample value at the given position of the selected reference line. D4 The method of D3, wherein the determining the sample value of another reference line includes evaluating positions of one or more candidate reference lines, in order of increasing distance away from the given position of the selected reference line, until an available sample value is determined in one of the multiple candidate reference lines along the projection in the padding direction.
D5 The method of D4, wherein the available sample value is: available, without padding, at an integer-sample offset horizontally and vertically in one of the multiple candidate reference lines; or derived by interpolation from sample values available, without padding, at integer-sample offsets horizontally and vertically in one of the multiple candidate reference lines. D6 The method of D3, wherein the determining the sample value of another reference line includes interpolating a sample value at a fractional-sample offset horizontally and/or vertically in the other reference line. D7 The method of D3, wherein: if the intra-picture prediction mode is an angular mode having a prediction direction, the padding direction is the prediction direction of the intra-picture prediction mode; and otherwise, if the intra-picture prediction mode is DC mode or planar mode, the padding direction is horizontal or vertical. D8 The method of D1 or D2, wherein the mode-dependent padding includes: determining that a sample value of the non-adjacent reference line is unavailable at a given position of the non-adjacent reference line; identifying a padding direction of an intra-picture prediction mode; determining a sample value, if any, of another reference line on a projection through the given position of the non-adjacent reference line in the padding direction; if no sample value of another reference line is determined, determining a sample value that is available at another position of the non-adjacent reference line; and using the determined sample value as the unavailable sample value at the given position of the non-adjacent reference line. D9 One or more computer-readable media storing computer-executable instructions for causing a computer system, when programmed thereby, to perform operations of the method of any one of D1 to D8.
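The search of features D3-D4 can be sketched on a small 2-D grid of reference samples, where `None` marks an unavailable position. The grid representation and integer step direction are simplifying assumptions (a real implementation would handle fractional-sample projections per D5-D6).

```python
def pad_unavailable_sample(samples, row, col, direction):
    """Mode-dependent padding sketch (features D3-D4): from the unavailable
    position (row, col), step along the padding direction through other
    reference lines, in order of increasing distance, and return the first
    available sample found. samples[r][c] is None where unavailable."""
    dr, dc = direction
    r, c = row + dr, col + dc
    while 0 <= r < len(samples) and 0 <= c < len(samples[0]):
        if samples[r][c] is not None:
            return samples[r][c]  # first available sample along the projection
        r, c = r + dr, c + dc
    return None  # nothing available along the projection (fall back per D8)
```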
D10 A computer system configured to perform operations of the method of any one of D1 and D3 to D8, the computer system comprising: a picture buffer configured to store the picture; a video encoder or image encoder configured to perform the encoding; and a coded data buffer configured to store the encoded data for output. D11 A computer system configured to perform operations of the method of any one of D2 to D8, the computer system comprising: a coded data buffer configured to store the encoded data; a video decoder or image decoder configured to perform the decoding; and a picture buffer configured to store the reconstructed picture for output. E. Intra-Picture Prediction with In-Loop-Filtered Reference Sample Values E1 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, wherein the performing intra-picture prediction includes selecting the non-adjacent reference line of sample values for use in the intra-picture prediction for the current block, and wherein at least some of the sample values of the selected non-adjacent reference line have been modified by in-loop filtering prior to use in the intra-picture prediction for the current block; and outputting the encoded data as part of a bitstream.
E2 In a computer system that implements a video decoder or image decoder, a method comprising: receiving encoded data as part of a bitstream; decoding the encoded data to reconstruct a picture, including performing intra-picture prediction for a current block of sample values in the picture, wherein a non-adjacent reference line of sample values is available for the intra-picture prediction, wherein the performing intra-picture prediction includes selecting the non-adjacent reference line of sample values for use in the intra-picture prediction for the current block, and wherein at least some of the sample values of the selected non-adjacent reference line have been modified by in-loop filtering prior to use in the intra-picture prediction for the current block; and outputting the reconstructed picture. E3 The method of E1 or E2, wherein, for the in-loop filtering, none of the modified sample values of the selected non-adjacent reference line is dependent on any of the sample values of the current block. E4 The method of E1 or E2, wherein the performing intra-picture prediction further includes: selecting another reference line of sample values for use in the intra-picture prediction for the current block, wherein none of the sample values of the selected other reference line have been modified by in-loop filtering prior to the intra-picture prediction for the current block using the selected other reference line. E5 The method of E4, wherein the selected other reference line is between the selected non-adjacent reference line and the current block. E6 The method of E4, wherein in-loop filtering of the sample values of the selected other reference line is dependent on at least some of the sample values of the current block. E7 One or more computer-readable media storing computer-executable instructions for causing a computer system, when programmed thereby, to perform operations of the method of any one of E1 to E6.
E8 A computer system configured to perform operations of the method of any one of E1 and E3 to E6, the computer system comprising: a picture buffer configured to store the picture; a video encoder or image encoder configured to perform the encoding; and a coded data buffer configured to store the encoded data for output. E9 A computer system configured to perform operations of the method of any one of E2 to E6, the computer system comprising: a coded data buffer configured to store the encoded data; a video decoder or image decoder configured to perform the decoding; and a picture buffer configured to store the reconstructed picture for output. F. Encoder-Side Decisions to Select Reference Line(s) for Intra-Picture Prediction F1 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein the encoding includes, for a unit of the picture: for each of multiple reference line combinations of one or more reference lines: selecting, with a rough mode decision (RMD) process, one or more intra-picture prediction modes for the reference line combination; and selecting, with a mode refinement decision process, one or more final intra-picture prediction modes for the reference line combination using the one or more intra-picture prediction modes from the RMD process; and selecting a best reference line combination among the multiple reference line combinations; and outputting the encoded data as part of a bitstream.
F2 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein the encoding includes, for a unit of the picture: selecting, with a rough mode decision (RMD) process, one or more combinations of modes and reference lines, each of the one or more combinations from the RMD process including a different pair of one or more intra-picture prediction modes and one or more reference lines; and selecting, with a mode refinement decision process, a final combination using the one or more combinations from the RMD process; and outputting the encoded data as part of a bitstream. F3 The method of F1 or F2, wherein the encoding further includes: selecting one or more transform sizes for the unit. F4 The method of F1 or F2, wherein the unit is a coding unit or prediction unit. F5 The method of F1 or F2, wherein the RMD process uses sum of absolute transform differences as a cost measure and includes evaluation of a subset of possible intra-picture prediction modes. F6 The method of F1 or F2, wherein the mode refinement decision process uses rate-distortion cost as a cost measure. F7 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein the encoding includes, for a transform unit of the picture: for each of multiple reference line combinations of one or more reference lines, evaluating the reference line combination for a previously selected intra-picture prediction mode; and selecting a best reference line combination among the multiple reference line combinations; and outputting the encoded data as part of a bitstream.
F8 One or more computer-readable media storing computer-executable instructions for causing a computer system, when programmed thereby, to perform operations of the method of any one of F1 to F7. F9 A computer system configured to perform operations of the method of any one of F1 to F7, the computer system comprising: a picture buffer configured to store the picture; a video encoder or image encoder configured to perform the encoding; and a coded data buffer configured to store the encoded data for output. G. Post-Filtering of Predicted Sample Values in Intra-Picture Prediction G1 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein an adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction includes: selecting one or more reference lines of sample values outside the current block; predicting the sample values of the current block using at least some sample values of the one or more selected reference lines; and filtering at least some of the predicted sample values of the current block, wherein the filtering uses at least some sample values outside the current block and outside the adjacent reference line; and outputting the encoded data as part of a bitstream.
G2 In a computer system that implements a video decoder or image decoder, a method comprising: receiving encoded data as part of a bitstream; decoding the encoded data to reconstruct a picture, including performing intra-picture prediction for a current block of sample values in the picture, wherein an adjacent reference line of sample values is available for the intra-picture prediction, and wherein the performing intra-picture prediction includes: selecting one or more reference lines of sample values outside the current block; predicting the sample values of the current block using at least some sample values of the one or more selected reference lines; and filtering at least some of the predicted sample values of the current block, wherein the filtering uses at least some sample values outside the current block and outside the adjacent reference line; and outputting the reconstructed picture. G3 The method of G1 or G2, wherein the filtering includes, for a given predicted sample value of the predicted sample values of the current block: computing one or more gradients between sample values outside the current block; and adjusting the given predicted sample value based on the one or more gradients. G4 The method of G3, wherein the filtering further includes, for the given predicted sample value: applying a weight to each of the one or more gradients, wherein the one or more weighted gradients are used in the adjusting. G5 The method of G4, wherein the one or more weights depend on block size of the current block and position of the given predicted sample value in the current block. G6 The method of G3, wherein the sample values that are used to compute the one or more gradients depend on block size of the current block. G7 The method of G3, wherein the intra-picture prediction has a prediction direction, and wherein the one or more gradients are parallel to the prediction direction.
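For a vertical prediction direction, the gradient-based post-filtering of features G3-G7 can be sketched as below. The fixed >> 1 weight is an assumption (HEVC's intra boundary smoothing applies a comparable rule), and clipping the result to the valid sample range is omitted for brevity.

```python
def post_filter_first_column(pred_col, left_ref, top_left):
    """Post-filtering sketch (features G3-G7): adjust each predicted sample
    in the block's first column by a weighted gradient computed between
    reference samples outside the current block (the left reference sample
    on the same row versus the top-left reference sample)."""
    return [p + ((l - top_left) >> 1) for p, l in zip(pred_col, left_ref)]
```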
G8 The method of G1 or G2, wherein the filtering includes, for a given predicted sample value of the predicted sample values of the current block: adjusting the given predicted sample value based on one or more sample values outside the current block and outside the adjacent reference line. G9 The method of G8, wherein the filtering further includes, for the given predicted sample value: applying a weight to each of the one or more sample values used in the adjusting. G10 The method of G9, wherein the one or more weights depend on block size of the current block and position of the given predicted sample value in the current block. G11 The method of G9, wherein the one or more sample values used in the adjusting depend on block size of the current block. G12 One or more computer-readable media storing computer-executable instructions for causing a computer system, when programmed thereby, to perform operations of the method of any one of G1 to G11. G13 A computer system configured to perform operations of the method of any one of G1 and G3 to G11, the computer system comprising: a picture buffer configured to store the picture; a video encoder or image encoder configured to perform the encoding; and a coded data buffer configured to store the encoded data for output. G14 A computer system configured to perform operations of the method of any one of G2 to G11, the computer system comprising: a coded data buffer configured to store the encoded data; a video decoder or image decoder configured to perform the decoding; and a picture buffer configured to store the reconstructed picture for output. H. 
Adaptive, Direction-Dependent Filtering of Reference Sample Values H1 In a computer system that implements a video encoder or image encoder, a method comprising: receiving a picture; encoding the picture to produce encoded data, including performing intra-picture prediction for a current block of sample values in the picture, wherein the performing intra-picture prediction includes: selecting a reference line of sample values outside the current block; and filtering the selected reference line of sample values, wherein the filtering adapts to differences in a set of sample values along a prediction direction for the intra-picture prediction, at least some of the set of sample values being outside the current block and outside an adjacent reference line; and outputting the encoded data as part of a bitstream. H2 In a computer system that implements a video decoder or image decoder, a method comprising: receiving encoded data as part of a bitstream; decoding the encoded data to reconstruct a picture, including performing intra-picture prediction for a current block of sample values in the picture, wherein the performing intra-picture prediction includes: selecting a reference line of sample values outside the current block; and filtering the selected reference line of sample values, wherein the filtering adapts to differences in a set of sample values along a prediction direction for the intra-picture prediction, at least some of the set of sample values being outside the current block and outside an adjacent reference line; and outputting the reconstructed picture. H3 The method of H1 or H2, wherein the set of sample values along the prediction direction are at integer-sample offsets horizontally and vertically. H4 The method of H1 or H2, wherein at least some of the set of sample values along the prediction direction are interpolated values at fractional-sample offsets horizontally or vertically. 
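The adaptive, direction-dependent filtering of H1 to H4 can be sketched as follows. This is a minimal sketch under stated assumptions: the 3-tap [1, 2, 1]/4 low-pass kernel, the fixed difference threshold, and the representation of the direction samples as pairs are illustrative choices, not taken from the claims.

```python
def adaptive_filter_reference(ref_line, direction_samples, threshold=10):
    """Filter a reference line of sample values with a 3-tap [1, 2, 1]/4
    low-pass kernel, but only at positions where sample values along the
    prediction direction differ by less than a threshold, so that edges
    along the direction are preserved (illustrative sketch).

    ref_line          -- list of reference sample values to filter
    direction_samples -- for each position, a pair of sample values taken
                         along the prediction direction (at least some from
                         outside the adjacent reference line, per H1/H2)
    """
    out = ref_line[:]
    for i in range(1, len(ref_line) - 1):
        a, b = direction_samples[i]
        if abs(a - b) < threshold:  # smooth region along the direction: filter
            out[i] = (ref_line[i - 1] + 2 * ref_line[i] + ref_line[i + 1] + 2) // 4
    return out
```

Where the samples along the direction differ strongly (an edge), the reference value is left unfiltered; elsewhere it is smoothed.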
H5 One or more computer-readable media storing computer-executable instructions for causing a computer system, when programmed thereby, to perform operations of the method of any one of H1 to H4. H6 A computer system configured to perform operations of the method of any one of H1 and H3 to H4, the computer system comprising: a picture buffer configured to store the picture; a video encoder or image encoder configured to perform the encoding; and a coded data buffer configured to store the encoded data for output. H7 A computer system configured to perform operations of the method of any one of H2 to H4, the computer system comprising: a coded data buffer configured to store the encoded data; a video decoder or image decoder configured to perform the decoding; and a picture buffer configured to store the reconstructed picture for output. In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims. 16942628 microsoft technology licensing, llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:00AM Apr 27th, 2022 09:00AM Technology Software & Computer Services Information Technology
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Aug 25th, 2018 12:00AM https://www.uspto.gov?id=US11314408-20220426 Computationally efficient human-computer interface for collaborative modification of content Technologies are disclosed that enable a computing system to collect and process user preferences regarding content that is shared in a collaborative workspace. By the use of an input gesture, individual users of a multi-user sharing session can provide a vote for a portion of content indicating that they favor (“up-vote”) or disfavor (“down-vote”) the content. The system can collect and analyze the votes from each user. The system can then modify the content based on the votes. Modifications to the content can include, but are not limited to, rearranging selected portions of the content, generating annotations, generating one or more UI elements that bring focus to portions of the content, identifying high-priority content or low-priority content, or deleting portions of the content. 11314408 1. 
A computer-implemented method for modifying content in a multi-user sharing session, the method comprising: causing the content to be displayed on a plurality of computing devices associated with a plurality of users in the multi-user sharing session; receiving an input from individual users of the plurality of users indicating a modification to the content displayed on the plurality of devices, wherein the modification of the content by each user indicates a vote for the modification, wherein the input is from a subset of computing devices of a subset of users indicating a deletion of a portion of the content; analyzing a number of the votes to determine if the number of the votes meets one or more criteria, wherein the number of the votes based on one or more modifications to the content is analyzed for determining a level of brightness associated with the content; and in response to determining that two or more votes of the number of the votes meets the one or more criteria, modifying the display of the content with the level of brightness that indicates the number of the votes, wherein the one or more modifications to the content includes deleting the portion of the content, and after the modifying the display, in response to the one or more inputs from the subset of users exceeding a threshold, deleting the portion of the content that is displayed on computing devices of each of the plurality of users. 2. The computer-implemented method of claim 1, wherein modifying the display of the content comprises adding a user interface (UI) element to the content to bring focus to the portion of the content if the portion of the content has a priority exceeding a threshold. 3. The computer-implemented method of claim 1, further comprising weighting the votes received from the plurality of users prior to determining the priority for the portion of the content, the weighting based at least in part on a context associated with each of the plurality of users. 4. 
The computer-implemented method of claim 1, wherein the level of brightness is reduced in response to determining that the number of votes is below the threshold. 5. The computer-implemented method of claim 1, wherein the one or more criteria comprises a plurality of thresholds for the votes, wherein the level of brightness is modified to a first level of brightness when the number of votes exceeds a first threshold of the plurality of thresholds, and wherein the level of brightness is modified to a second level of brightness when the number of votes exceeds a second threshold of the plurality of thresholds. 6. The computer-implemented method of claim 1, wherein the number of votes is based on a number of modifications to the content. 7. The computer-implemented method of claim 1, wherein the one or more inputs from the subset of computing devices cause a generation of a graphical element indicating a selection of a portion of the content, wherein a threshold number of graphical elements positioned over the portion of the content causes the modification to the display of the content, wherein modifying the display of the content includes deleting the portion of the content from the content that is displayed on computing devices of each of the plurality of users. 8. 
A system, comprising: one or more processing units; and a computer-readable storage medium having computer-executable instructions encoded thereon to cause the one or more processing units to cause a display of a digital whiteboard to a plurality of users, the digital whiteboard comprising a plurality of whiteboard objects, receive an input from individual users of the plurality of users indicating a modification to the content displayed on the plurality of devices, wherein the modification of the content by each user indicates a vote for the modification, wherein the input is from a subset of computing devices of a subset of users indicating a deletion of a portion of the content; analyze a number of the votes to determine if the number of the votes meets one or more criteria, wherein the number of the votes based on one or more modifications to the content is analyzed for determining a level of brightness associated with the content; and in response to determining that two or more votes of the number of the votes meets the one or more criteria, modify the display of the digital whiteboard with the level of brightness that indicates the number of the votes, wherein the one or more modifications to the content includes deleting the portion of the content, and after the modifying the display, in response to the one or more inputs from the subset of users exceeding a threshold, deleting the portion of the content that is displayed on computing devices of each of the plurality of users. 9. The system of claim 8, wherein the modification of the display of the digital whiteboard comprises deleting one or more whiteboard objects having a priority that does not meet the threshold. 10. The system of claim 8, wherein the modification of the display of the digital whiteboard comprises generating an annotation indicating preferences of individual whiteboard objects. 11. 
A computer-readable storage medium having computer-executable instructions encoded thereupon which, when executed, cause one or more processing units to: cause content to be displayed on a plurality of computing devices associated with a plurality of users in a multi-user sharing session; receive an input from individual users of the plurality of users indicating a modification to the content displayed on the plurality of devices, wherein the modification of the content by each user indicates a vote for the modification, wherein the input is from a subset of computing devices of a subset of users indicating a deletion of a portion of the content; analyze a number of the votes to determine if the number of the votes meets one or more criteria, wherein the number of the votes based on one or more modifications to the content is analyzed for determining a level of brightness associated with the content; and in response to determining that two or more votes of the number of the votes meets the one or more criteria, modify the display of the content by rearranging the content, deleting the portion of the content if the portion of the content has a priority not exceeding a threshold, generating an annotation indicating the preference for the portion of the content, or modify the display of the content with the level of brightness that indicates the number of the votes, wherein the one or more modifications to the content includes deleting the portion of the content, and after the modifying the display, in response to the one or more inputs from the subset of users exceeding a threshold, deleting the portion of the content that is displayed on computing devices of each of the plurality of users. 12. The computer-readable storage medium of claim 11, wherein the multi-user sharing session comprises a three-dimensional (3D) collaborative workspace and wherein the portion of the content comprises virtual objects displayed in a mixed reality computing environment. 13. 
The computer-readable storage medium of claim 11, having further computer-executable instructions encoded thereupon to generate a report indicating the preferences for the portion of the content. 14. The computer-readable storage medium of claim 11, having further computer-executable instructions encoded thereupon to weight the votes received from the plurality of users prior to determining the priority for the portion of the content, the weighting based at least in part on a context associated with each of the plurality of users. 15. The computer-readable storage medium of claim 11, wherein the computer-readable storage medium has further computer-executable instructions encoded thereon to receive at least one of the votes from a machine learning component. 15 BACKGROUND Many productivity applications provide specialized tools for displaying and manipulating the contents of a file. Some productivity applications also allow multiple users to collaborate within a shared workspace, an environment where multiple users can simultaneously view and edit the contents of a file. For example, some environments provide a digital whiteboard for multiple users to manipulate whiteboard objects, such as digital ink expressions, etc. Although existing productivity applications can provide specialized functions for manipulating content, existing productivity applications do not provide a satisfactory user experience when a workflow requires a group of users to come to a consensus regarding content. In a multi-user collaborative workspace, for example, when group consensus is needed, users usually contribute individually by providing manual edits to the content. Such methods may be uncoordinated as some efforts can conflict with one another. Some users might also coordinate by the use of a shared communications session, but such efforts are still inefficient with respect to computing resources, e.g., multiple channels of communication may be needed. 
Moreover, when consensus is needed, existing systems can be inefficient because users are still required to manually edit the contents of a file even after a group makes a decision regarding the content. SUMMARY The technologies disclosed herein provide a computationally efficient human-computer interface for collaborative modification of content. Among other technical benefits, the technologies disclosed herein can reduce utilization of computing resources by simplifying the collaborative process for modifying content in a multi-user collaborative workspace. For example, when using the disclosed technologies, individual users of a multi-user sharing session can provide a vote for a portion of content indicating that they favor (“up-vote”) or disfavor (“down-vote”) the content. Votes can then be collected from each user, analyzed, and the content can be modified based on the votes. By modifying the content based upon the votes, the need for users to manually edit content and to coordinate editing efforts can be reduced or eliminated. This can reduce the utilization of computing resources like processor cycles, memory, network bandwidth, and power. In order to provide the technical benefits described above, and potentially others, a system is provided that enables users to participate in a multi-user sharing session. The multi-user sharing session might, for example, be implemented as a digital whiteboard presenting whiteboard objects such as, but not limited to, handwriting or hand-drawn images such as digital ink created using a digital pen or touchscreen. The multi-user sharing session can also be implemented as a three-dimensional (“3D”) collaborative workspace presenting virtual objects displayed in a mixed reality computing environment in other configurations. The technologies disclosed herein can be implemented with other types of multi-user sharing sessions in other configurations. 
Users can access the multi-user sharing session utilizing client computing devices configured with an appropriate client application. The client application can present content to the users such as, for example, on display devices connected to the client computing devices. The content might be, for example, a digital whiteboard, virtual objects, a word processing document, a spreadsheet document, a presentation document, an image or video, or another type of content. The content can be stored in a file on a server computer and distributed to the client computing devices participating in the multi-user sharing session. Users participating in a multi-user sharing session can provide input gestures to the client application executing on the client devices in order to vote on portions of the displayed content. The input gestures made by the users indicate a preference for a portion of the displayed content. For example, and without limitation, a user might provide an input gesture indicating that they favor (“up-vote”) or disfavor (“down-vote”) a portion of the displayed content. A server computer, or other computing device operating as a part of the disclosed system, can collect the votes from the users in the multi-user sharing session. The server computer can then analyze the votes to determine a priority for a portion of the content. The server computer can then modify the content based on the priority for the portion of the content by rearranging the content, deleting the portion of the content if a priority for the portion of the content does not exceed a threshold, generating an annotation indicating the preference for the portion of the content, or adding a user interface (“UI”) element to the content to bring focus to the portion of the content if a priority for the portion of the content exceeds a threshold. An audio output can also be used to bring focus to a portion of the content. 
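The collect-analyze-modify flow described above can be sketched as follows. The action names, the representation of votes as +1 (up-vote) and -1 (down-vote), and the two-threshold scheme are illustrative assumptions, not taken from the disclosure.

```python
def apply_vote_actions(portion_ids, votes, low_threshold, high_threshold):
    """Given a mapping of content-portion id -> list of votes (+1 up, -1 down),
    decide an action for each portion based on its aggregate priority:
    delete low-priority portions, bring focus to high-priority portions,
    and annotate the rest (illustrative sketch).
    """
    actions = {}
    for pid in portion_ids:
        priority = sum(votes.get(pid, []))
        if priority < low_threshold:
            actions[pid] = "delete"      # priority below threshold: remove
        elif priority > high_threshold:
            actions[pid] = "highlight"   # add a UI element to bring focus
        else:
            actions[pid] = "annotate"    # record the preference only
    return actions
```

A server collecting the votes 114 could run such a decision step once per portion of the content 110 after voting closes.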
For example, a voice instruction can be generated to describe a portion of content and one or more results related to the determined priority or the votes. The server computer can modify the content in other ways based upon the priority associated with portions of the content in other configurations. The server computer can also generate a report that indicates the user preferences for portions of the content and that provides other information regarding the voting. In some configurations, the server computer applies weights to the votes received from the users in a multi-user sharing session prior to determining the priority for a portion of the content. The weighting can be based on contextual data (a “context”) associated with each of the users such as, but not limited to, a user's role in an organization or a user's past voting history. In some configurations, a machine learning component can vote on portions of the content in addition to user votes. The machine learning component can be trained to vote on content based upon users' previous votes, the type of content being voted on, data relating to the content being voted on, the results of previous votes on particular types of content, and/or other types of data. In addition to those technical benefits discussed above, implementations of the disclosed technologies can result in improved human-computer interaction during a multi-user sharing session and editing of content. This can reduce the likelihood of inadvertent user input and thus save computing resources, such as memory resources, processing resources, and networking resources. The reduction of inadvertent inputs can also reduce a user's time interacting with a computer, reduce the need for redundant editing of content, redundant entries for selecting content to be edited, and redundant entries for pasting and transferring edited content to other users. 
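The context-based weighting described above can be sketched in a few lines. The use of a role-to-weight table as the "context" and the specific weight values are assumptions for illustration; the disclosure also mentions past voting history as a possible context.

```python
def weighted_vote_total(votes, role_weights, default_weight=1.0):
    """Weight each user's vote by a factor derived from that user's context
    (here, a role -> weight table) before summing, so that some users'
    votes count more toward the priority (illustrative sketch).

    votes -- list of (user_role, vote) pairs; vote is +1 (up) or -1 (down)
    """
    return sum(role_weights.get(role, default_weight) * v for role, v in votes)
```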
Other technical benefits not specifically mentioned herein can also be realized through implementations of the disclosed subject matter. This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The term “techniques,” for instance, may refer to system(s), method(s), computer-readable instructions, module(s), algorithms, hardware logic, and/or operation(s) as permitted by the context described above and throughout the document. BRIEF DESCRIPTION OF THE DRAWINGS The Detailed Description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items. References made to individual items of a plurality of items can use a reference number with a letter of a sequence of letters to refer to each individual item. Generic references to the items may use the specific reference number without the sequence of letters. FIG. 1A is a block diagram of a system for providing a computationally efficient human-computer interface for collaborative modification of content. FIG. 1B shows how the computing devices of the system can provide a user interface through which users can vote on portions of content. FIG. 1C shows how the computing devices of the system can modify content based upon the votes provided by the users. FIG. 2A illustrates how the system can be utilized to modify other types of content based upon votes provided by users. FIG. 2B shows how the computing devices of the system can provide a user interface through which users can vote on portions of content. FIG. 
2C shows other examples of how the computing devices of the system can modify content based upon the votes provided by the users. FIG. 2D shows other examples of how the computing devices of the system can modify content based upon the votes provided by the users. FIG. 3A shows a multi-user sharing session in a three-dimensional (3D) collaborative workspace through which users can vote on content that includes virtual and real objects displayed in a mixed reality computing environment. FIG. 3B shows how virtual content can be modified based upon votes provided by the users of a 3D collaborative workspace. FIG. 3C shows how virtual content can be modified based upon votes provided by the users of a 3D collaborative workspace. FIG. 3D shows how virtual content can be modified based upon votes provided by the users of a 3D collaborative workspace. FIG. 3E shows how virtual content can be modified based upon votes provided by the users of a 3D collaborative workspace. FIG. 3F shows how virtual content can be modified based upon votes provided by the users of a 3D collaborative workspace. FIG. 4A shows a user interface for associating weights with votes provided by different users. FIG. 4B shows a user interface for associating weights with votes provided by different users. FIG. 5 is a flow diagram showing aspects of a routine for providing a computationally efficient human-computer interface for collaborative modification of content. FIG. 6 is a computer architecture diagram illustrating an illustrative computer hardware and software architecture for a computing system capable of implementing aspects of the techniques and technologies presented herein. FIG. 7 is a diagram illustrating a distributed computing environment capable of implementing aspects of the techniques and technologies presented herein. FIG. 
8 is a computer architecture diagram illustrating another computing device architecture for a computing device capable of implementing aspects of the techniques and technologies presented herein. DETAILED DESCRIPTION The Detailed Description discloses aspects of a system that provides a computationally efficient interface for the collaborative modification of content. As discussed briefly above, the disclosed system can collect and process user preferences regarding content that is shared in a collaborative workspace. By the use of an input gesture, individual users of a multi-user session can indicate a selection of content of a file and provide a vote indicating that they favor (“up-vote”) or disfavor (“down-vote”) the selected content. The system can collect and analyze the votes from each user. The system can then modify the contents of the file based on the votes. By modifying the content based upon the votes, the need for users to manually edit content and to coordinate editing efforts can be reduced or eliminated. User interaction with a computing device can also be improved by enabling users to utilize simplified gestures for selecting specified portions of content and providing an input with an indication that an individual user favors or disfavors the selected content. This can reduce the utilization of computing resources like processor cycles, memory, network bandwidth, and power. Other technical benefits not specifically mentioned herein can be realized through implementations of the disclosed technologies. FIG. 1A is a block diagram of a system for providing a computationally efficient human-computer interface for collaborative modification of content. The exemplary system shown in FIG. 1A can provide a collaborative whiteboard where multiple users can view content in a file and simultaneously provide input to manipulate the content. Implementations of the disclosed technologies such as the example system shown in FIG. 
1A can reduce or eliminate the need for users to manually enter edits to the contents of a file in a multi-user sharing session. Additional details regarding these aspects will be presented below. As illustrated in FIG. 1A, a system 100 is configured to enable users 102A-102H (which might be referred to collectively as the “users 102” or individually as a “user 102”) to participate in a multi-user sharing session. The multi-user sharing session might, for example, be implemented as a digital whiteboard presenting whiteboard objects such as, but not limited to, handwriting or hand-drawn images such as digital ink created using a digital pen or touchscreen. The multi-user sharing session can also be implemented as a 3D collaborative workspace presenting virtual objects displayed in a mixed reality computing environment in another configuration. Additional details regarding one such 3D collaborative workspace will be provided below with respect to FIGS. 3A-3D. It is to be appreciated that the technologies disclosed herein can be utilized with any type of collaborative platform such as, but not limited to, collaborative whiteboards, 3D collaborative workspaces, and collaborative editing sessions of documents such as a spreadsheet, word processing document, etc. Accordingly, the configurations described herein are not limited to use with a specific collaboration platform. The users 102A-102G can access the multi-user sharing session utilizing client computing devices 104A-104G (which might be referred to collectively as the “computing devices 104” or individually as a “computing device 104”), respectively, configured with an appropriate client application (not shown in FIG. 1A). The client application can present content 110 to the users 102A-102G such as, for example, on display devices 112A-112G connected to the client computing devices 104A-104G, respectively. 
The content 110 might be, for example, a digital whiteboard, virtual objects, a word processing document, a spreadsheet document, a presentation document, an image or video, or another type of content. In the example shown in FIGS. 1A-1C, the content 110 is a digital whiteboard containing several portions identifying different products (i.e. product 1, product 2, and product 3). The content 110 can be stored in a file 108 on a server computer 106, or in another networked location, and distributed to the client computing devices 104 participating in the multi-user sharing session via a communications network. The file 108 containing the content 110 might also be stored on one of the client devices 104 and shared to the other client devices 104 in other configurations. The content 110 can be provided to the client devices 104 in other ways in other configurations. As shown in FIG. 1A, the content 110 can also be provided to a machine learning component 116 executing on a computing device 104H in some configurations. Like the users 102, the machine learning component 116 can vote on portions of the content 110. In order to enable this functionality, the machine learning component 116 can be trained to vote on portions of content 110 based upon the previous votes of users 102, the type of content 110 being voted on, data relating to the content 110 being voted on, the results of previous votes on particular types of content 110, and/or other types of data. It is to be appreciated that various machine learning mechanisms may be utilized by the machine learning component 116. For example, a classification mechanism may be utilized to analyze portions of the content 110 to receive an up-vote or a down-vote. In other examples, a statistical mechanism may be utilized to determine whether to up-vote or down-vote a portion of the content 110. 
For example, a linear regression mechanism may be used to generate a score that indicates a likelihood that a particular portion of the content 110 will be up-voted or down-voted. Linear regression may refer to a process for modeling the relationship between one variable with one or more other variables. Different linear regression models might be used to calculate the probability that a portion of the content 110 will be up-voted or down-voted. For example, a least squares approach might be utilized, a maximum-likelihood estimation might be utilized, or another approach might be utilized. Such techniques may be utilized to train the machine learning component 116 to vote on portions of the content 110. FIG. 1B shows how the computing devices 104 of the system shown in FIG. 1A can provide a UI through which the users 102 can vote on portions of content 110. As described briefly above, users 102 participating in a multi-user sharing session can provide input gestures to the client application executing on the client devices 104 in order to cast a vote on portions of the displayed content 110, a digital whiteboard containing three whiteboard objects in this example. As shown in FIG. 1B, each user 102 can provide an input gesture indicating a preference for a portion of the displayed content 110. For example, and without limitation, a user 102 might provide an input gesture indicating that they favor (“up-vote”) or disfavor (“down-vote”) a portion of the displayed content 110. Input gestures include, but are not limited to, touch input, pen input (i.e. digital ink), voice input, 2D or 3D gestures, keyboard input, mouse input, touchpad input, and other types of user input that can be made to a computing system to indicate a preference for a portion of displayed content 110. An input gesture made by a user 102 identifies a portion of the content 110 and indicates the user's preference (i.e. a vote 114) for the portion of the content 110. In the example shown in FIG. 
1B, for instance, the user 102A has cast a vote 114A by drawing an “X” over the first and second portions of the content 110 (i.e. product 1 and product 2) indicating a down-vote for those portions of the content 110. The user 102A has also cast a vote 114A for the third portion of the content 110 (i.e. product 3) by drawing a check-mark over the third portion of the content 110, thereby indicating an up-vote for that portion of the content 110. Similarly, user 102B has cast a vote 114B by drawing an “X” over the second and third portions of the content 110 (i.e. product 2 and product 3) and drawing a check-mark on the first portion of the content 110 (i.e. product 1). The users 102E and 102G have similarly made “X” and check-mark input gestures to cast their votes 114E and 114G, respectively, on the portions of the content 110. The user 102C has utilized different gestures to cast their vote 114C on the portions of the content 110. In particular, the user 102C has drawn an up-facing arrow on the first and third portions of the content 110 (i.e. product 1 and product 3) thereby up-voting these portions of the content. The user 102C has also drawn a down-facing arrow on the second portion of the content 110 (i.e. product 2), thereby indicating a down-vote for this portion of the content 110. In this regard, it is to be appreciated that different types of input gestures can be utilized to cast a vote 114 for portions of content 110 and that the input gestures described herein are merely illustrative. In some configurations, users 102 can select a portion of the content 110 in order to cast a vote 114 for that portion. For instance, in the example shown in FIG. 1B, the user 102F has drawn a rectangle around the third portion of the content 110 (i.e. product 3) in order to cast a vote 114F (i.e. an up-vote) for that portion of the content 110. 
In a similar fashion, users 102 can select multiple portions of the content 110 in order to cast a single vote for the selected portions. In the example shown in FIG. 1B, for instance, the user 102D has drawn a rectangle around the second and third portions of the content 110 (i.e. product 2 and product 3) and cast a single vote 114D for the selected portions of the content 110. Thus, in some embodiments, the digital ink input gesture identifies a portion of the content, e.g., a drawing object, a section of text, a section of a document, a portion of a video. In addition, the digital ink input gesture can identify a preference for a portion of the content. The preference for the portion of the content can then be used by the system to determine a priority for the portion of the content. As described below, the priority for the portion of the content can be used to modify, delete or provide an annotation in association with the portion of the content. In some configurations, votes 114 can be cast by groups of users 102. For example, and without limitation, votes 114 can be collected from the users 102 in a group of users 102 and the majority vote 114 will be considered the vote 114 for the group as a whole. The vote cast by the group might also be weighted more heavily than votes cast by individual users 102 or specifically weighted in another manner. As discussed briefly above, the machine learning component 116 can also cast a vote 114H on the portions of the content 110. In the example shown in FIG. 1B, for instance, the machine learning component 116 has cast down-votes for the first and second portions of the content 110 (i.e. product 1 and product 2) and has cast an up-vote for the third portion of the content 110 (i.e. product 3). 
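The group-voting behavior described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the "up"/"down" vote encoding and the default group weight are assumptions.

```python
from collections import Counter

def group_vote(member_votes, group_weight=2.0):
    """Collapse a group's votes into a single weighted majority vote.

    member_votes: list of "up" / "down" strings (hypothetical encoding).
    Returns (direction, weight); a tie yields no group vote (None, 0.0).
    """
    tally = Counter(member_votes)
    if tally["up"] == tally["down"]:
        return None, 0.0
    direction = "up" if tally["up"] > tally["down"] else "down"
    # The group's majority vote may be weighted more heavily than an
    # individual vote, as described above.
    return direction, group_weight
```

For example, a group voting ["up", "up", "down"] contributes one up-vote carrying the group weight rather than three individual votes.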
As also described briefly above, the server computer 106, or another computing device operating as a part of the disclosed system 100, can collect the votes 114 from the users 102 in the multi-user sharing session and the machine learning component 116. The server computer 106 can then analyze the votes 114 to determine a priority for each portion of the content 110 that was voted on. In the example shown in FIG. 1B, for instance, the first portion of the content 110 (i.e. product 1) received three down-votes and two up-votes. The second portion of the content 110 (i.e. product 2) received five down-votes and two up-votes. The third portion of the content 110 (i.e. product 3) received three down-votes and five up-votes. In this example, the third portion of the content 110 has the highest priority, the first portion of the content 110 has the second highest priority, and the second portion of the content 110 has the lowest priority. In some configurations, the server computer 106 applies weights to the votes 114 received from the users 102 in a multi-user sharing session prior to determining the priority for a portion of the content 110. As discussed briefly above, the weighting can be based on a context associated with each of the users 102 such as, but not limited to, a user's 102 role in an organization, a user's 102 past voting history, and whether a vote was cast by a group of users 102 and, if so, the size of the group. Weights can also be manually assigned to users utilizing an appropriate UI. The weights can be saved for later use with the same user or group of users. Details regarding one illustrative UI for manually specifying weights for the votes 114 made by each user 102 will be provided below with regard to FIGS. 4A and 4B. Once the server computer 106, or another computing device, has determined the priorities associated with individual portions of content 110 (e.g. whiteboard objects in the example shown in FIGS. 
1A-1C), the server computer 106 can then modify a display of the content 110 based on the computed priorities for the portions of the content 110. Additional details regarding the modification of the content 110 based upon the computed priorities will be described below with regard to FIG. 1C. FIG. 1C shows how the computing devices of the system 100 can modify a display of the content 110 based on the computed priorities for the voted-on portions of the content 110 by rearranging the portions of the content 110. In this example, for instance, the server computer 106 has rearranged portions of the content 110 (i.e. individual whiteboard objects) based upon the computed priorities. In particular, the third portion of the content 110 (i.e. product 3) has been moved to the top of the display of the content 110, the first portion of the content 110 (i.e. product 1) has been moved to the middle of the display of the content 110, and the second portion of the content 110 (i.e. product 2) has been moved to the bottom of the display of the content 110. It is to be appreciated that the server computer 106 can modify the display of the content 110, or the content 110 itself, in various ways based upon the computed priorities. For example, and without limitation, the server computer 106 can delete a portion of the content 110 if the portion of the content 110 has a priority lower than a threshold value, generate an annotation indicating the preference for the portion of the content 110, identify high-priority portions of content 110 or low-priority portions of content 110, or add a UI element to the content 110 to bring focus to a portion of the content 110 if the portion of the content 110 has a priority exceeding a threshold value. The server computer 106, or another device or component, can modify the content 110 in other ways based upon the priority associated with portions of the content 110 in other configurations. 
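The tally-and-modify flow described above can be sketched as follows. The +1/-1 vote encoding, the threshold values, and the function names are hypothetical stand-ins; the description leaves these implementation details open.

```python
def portion_priorities(votes, weights=None):
    """Compute a priority score per portion of content.

    votes: iterable of (user, portion, direction), direction being +1
    for an up-vote or -1 for a down-vote.
    weights: optional user -> float map; unweighted users count as 1.0.
    """
    weights = weights or {}
    scores = {}
    for user, portion, direction in votes:
        scores[portion] = scores.get(portion, 0.0) + direction * weights.get(user, 1.0)
    return scores

def modify_display(scores, delete_below=-2.0, focus_above=2.0):
    """Rearrange portions by priority, deleting low-priority portions
    and flagging high-priority ones for annotation or highlighting."""
    kept = sorted((p for p in scores if scores[p] > delete_below),
                  key=lambda p: scores[p], reverse=True)
    focused = [p for p in kept if scores[p] > focus_above]
    return kept, focused
```

With the unweighted FIG. 1B tallies (product 1: two up/three down, product 2: two up/five down, product 3: five up/three down), the scores come out -1, -3, and +2, ordering product 3 first and dropping product 2 below the illustrative deletion threshold.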
Some additional ways that the server computer 106 can modify the content 110 are described below with reference to FIGS. 2A-2D. In some configurations, the server computer 106 can also generate a report (not shown in FIG. 1C) that indicates the user preferences for portions of the content 110 and that provides other information regarding the voting. For example, and without limitation, such a report can identify the users 102 that participated in a vote, the votes 114 cast by each user 102, the computed priorities for each portion of content 110, and the modifications made to the content 110 based on the voting. Such a report could also describe the historical participation of users 102 in voting, the historical votes 114 cast by the users 102 (e.g. a histogram showing the history of votes 114 cast or not cast by each user), the historical results of votes, and/or other types of information. FIGS. 2A-2D illustrate how votes 114 made by users 102 can cause the system 100 to modify other aspects of content 110. The example shown in FIGS. 2A-2D also illustrates how the disclosed system 100 can highlight content, delete content, or provide an annotation with respect to specific portions of content 110 based upon the votes 114. In the example illustrated in FIGS. 2A-2D, the content 110 is an architectural drawing of the layout of a house. As in the example described above with regard to FIGS. 1A-1C, the system 100 enables the users 102 to participate in a multi-user sharing session, in this case a digital whiteboard presenting the architectural drawing. As also in the example above, the users 102 can access the multi-user sharing session utilizing the client computing devices 104. FIG. 2B shows the users 102 casting votes 114 with respect to portions of the content 110. 
In particular, the users 102A and 102G have utilized various input gestures to cast votes 114A and 114G, respectively, down-voting a portion of the architectural diagram showing the location of a dining room. The machine learning component 116 has also cast a vote 114H down-voting the location of the dining room. The users 102B-102F have utilized various input gestures to cast votes 114B-114F, respectively, up-voting the portion of the architectural diagram showing the location of the dining room. As in the example described above, the server computer 106, or another computing device operating as a part of the disclosed system 100, can collect the votes 114 from the users 102 in the multi-user sharing session and the machine learning component 116. The server computer 106 can then tally the votes 114 and analyze the votes 114 to determine a priority for each portion of the content 110 that was voted on. In the example shown in FIG. 2B, for instance, the portion of the architectural diagram showing the location of the dining room has received five up-votes and three down-votes. FIG. 2C shows other examples of how the computing devices of the system 100 can modify content 110 based upon the votes 114 provided by the users 102. In the illustrated example, the system 100 has added UI elements to the display of the content 110 to bring focus to preferred features (e.g. the illustrated location of the dining room) in the content 110. In particular, an annotation 202B indicating the users’ 102 preference for the portion of the content 110 has been added (i.e. the ‘keep’ annotation). Annotations might also include other information such as, but not limited to, data indicating the number of users that up-voted or down-voted a particular section of content. Another UI element 204 has also been added to the display of the content 110 to highlight the up-voted portion of the content 110. 
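The ‘keep’/‘remove’ annotations described above, including the vote counts an annotation might carry, can be sketched as follows; the dictionary shape and label strings are illustrative assumptions, not the patent's data model.

```python
def make_annotation(portion, up_votes, down_votes):
    """Build an annotation for a voted-on portion of content.

    The label reflects the majority preference, and the raw counts are
    retained so the annotation can display them (as described above).
    """
    label = "keep" if up_votes > down_votes else "remove"
    return {"portion": portion, "label": label,
            "up_votes": up_votes, "down_votes": down_votes}
```

For the dining-room example, five up-votes against three down-votes would yield a ‘keep’ annotation, while the down-voted alternate location would receive ‘remove’.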
In some configurations, such UI elements are added to the display of the content 110 if the portion of the content has a priority exceeding a threshold value. An annotation 202A (i.e. the ‘remove’ annotation) has also been added indicating the down-voting of another portion of the content 110 (i.e. an alternate location for the dining room in the architectural plan). In some configurations, a down-voted portion of the content 110 can be deleted if the priority of the content 110 does not exceed a pre-determined threshold value. This is illustrated in FIG. 2C. The deletion or other type of modification of a portion of the content 110 can be indicated by modifying other properties of the content in other configurations. For example, the removed or modified portion can be indicated with digital ink (e.g. drawing a circle or other type of shape around a deleted or modified portion of the content using digital ink), might be highlighted, colored, or otherwise emphasized, or might be displayed with reduced brightness or translucently. Modified content might also be identified in a list. A UI control, such as a slider control, might also be utilized to transition between a view of the original content and the modified content. An audio output can also be used to bring focus to a portion of the content. For example, a voice instruction can be generated to describe a portion of content and one or more results related to the determined priority or the votes. In other embodiments, a voice output may indicate an annotation that was proposed and added to the content. FIGS. 3A-3F show a multi-user sharing session in a 3D collaborative workspace through which users 102 can vote on content 110 that includes virtual and real objects displayed in a mixed reality computing environment. In the configuration shown in FIGS. 3A-3F, a head-mounted display (“HMD”) device 302A, alone or in combination with one or more other devices (e.g. 
a local computer or one or more remotely-located server computers) provides a multi-user sharing session that includes a 3D collaborative workspace. It will be understood that the HMD device 302A might take a variety of different forms other than the specific configuration depicted in FIGS. 3A-3F. Moreover, although the configurations disclosed herein are discussed primarily in the context of augmented reality (“AR”) HMD devices, it is to be appreciated that the technologies disclosed herein can also be utilized with mixed reality (“MR”) and virtual reality (“VR”) HMD devices. The HMD device 302A includes one or more display panels (not shown in FIGS. 3A-3F) that display computer generated (“CG”) graphics. For example, the HMD device 302A might include a right-eye display panel for right-eye viewing and a left-eye display panel for left-eye viewing. A right-eye display panel is typically located near a right eye of the user 102A to fully or partially cover a field of view of the right eye, and a left-eye display panel is located near a left eye of the user 102A to fully or partially cover a field of view of the left eye. In another example, a unitary display panel might extend over both the right and left eyes of a user 102A and provide both right-eye and left-eye viewing via right-eye and left-eye viewing regions of the unitary display panel. In each of these implementations, the ability of the HMD device 302A to separately display different right-eye and left-eye graphical content via right-eye and left-eye displays might be used to provide a user 102A of the HMD device 302A with a stereoscopic viewing experience. For ease of illustration, the stereoscopic output of the HMD device 302A is illustrated in FIGS. 3A-3F as being presented on a two-dimensional display device 112A. The HMD device 302A might include a variety of on-board sensors forming a sensor subsystem (not shown in FIGS. 3A-3F). 
For example, and without limitation, the sensor subsystem might include outward facing optical cameras (e.g., cameras located on an external surface of the HMD device 302A and forward facing in a viewing direction of the user 102A). The outward facing optical cameras of the HMD device 302A can be configured to observe the real-world environment and output digital images illustrating the real-world environment observed by the one or more outward facing optical cameras. The HMD device 302A can also include inward facing optical cameras (e.g., rearward facing toward the user 102A and/or toward one or both eyes of the user 102A). The sensor subsystem can also include a variety of other sensors including, but not limited to, accelerometers, gyroscopes, magnetometers, environment understanding cameras, depth cameras, inward or outward facing video cameras, microphones, ambient light sensors, and potentially other types of sensors. Data obtained by the sensors of the sensor subsystem can be utilized to detect the location, orientation, and movement of the HMD device 302A. The location, orientation, and movement of the HMD device 302A can be utilized to compute the view of the virtual reality objects presented to the user 102A by the HMD device 302A. The HMD device 302A might also include a processing subsystem (not shown in FIGS. 3A-3F) that includes one or more processor devices that perform some or all of the processes or operations described herein, as defined by instructions executed by the processing subsystem. Such processes or operations might include generating and providing image signals to the display panels, receiving sensory signals from sensors in the sensor subsystem, enacting control strategies and procedures responsive to those sensory signals, and enabling the voting and modification of virtual reality objects in the manner described herein. 
Other computing systems, such as local or remote computing systems, might also perform some or all of the computational tasks disclosed herein. The HMD device 302A might also include an on-board data storage subsystem (not shown in FIGS. 3A-3F) that includes one or more memory devices storing computer-executable instructions (e.g., software and/or firmware) executable by the processing subsystem and might additionally hold other suitable types of data. The HMD device 302A might also include a communications subsystem (also not shown in FIGS. 3A-3F) supporting wired and/or wireless communications with remote devices (i.e., off-board devices) over a communications network. As an example, the communication subsystem might be configured to wirelessly send or receive a video stream, audio stream, coordinate information, virtual object descriptions, and/or other information to and from other remote computing devices, such as the HMD devices 302B and 302C. In the example shown in FIGS. 3A-3F, the HMD device 302A generates a view of the real world environment 300 surrounding the user 102A. The HMD device 302A can also overlay virtual objects on the user's view of the real world environment 300. For instance, in the illustrated example the HMD device 302A has generated a virtual table and a virtual window and presented these virtual objects overlaid on the view of the real world environment 300. The virtual objects, therefore, appear to the user 102A as if they were actually present in the real world environment 300. The HMD device 302A can also transmit its display, including the virtual objects and the real world environment 300, to other remote computing devices. In the example shown in FIGS. 3A-3F, for instance, the output of the HMD device 302A has been transmitted to the HMD device 302B and the HMD device 302C. 
In this manner, the users 102B and 102C can see the view of the real world environment 300 as viewed by the user 102A along with any virtual objects overlaid thereupon. In this regard, it is to be appreciated that the configurations disclosed herein do not require the users 102B and 102C to utilize HMD devices. The users 102B and 102C can utilize other types of devices, such as laptop computers, smart phones, or desktop computers, to view the output of the HMD device 302A and to vote on content 110 presented therein. In the example shown in FIGS. 3A-3F, the users 102A-102C have cast votes 114A-114C, respectively, on the virtual objects (i.e. the window and the table) shown in the output of the HMD device 302A. For instance, HMDs 302A-302C can recognize hand gestures made by the users 102A-102C, respectively, indicating a preference related to real-world objects and virtual objects. The HMDs 302A-302C can also be configured to recognize voice commands made by the users 102A-102C, respectively, indicating a preference related to real-world objects and virtual objects. In the example shown in FIGS. 3A-3F, the user 102A has cast a vote 114A up-voting the table and down-voting the window. The user 102B has cast a vote 114B up-voting the window and down-voting the table. The user 102C has cast a vote 114C up-voting the table and down-voting the window. The system can analyze the votes 114A-114C to rank and/or prioritize the virtual objects in the manner described above. FIG. 3B shows how virtual content 110 can be modified based upon votes 114 provided by the users 102 of a 3D collaborative workspace. In the example shown in FIG. 3B, UI elements have been added to the view of the real-world environment 300 and the virtual objects to bring focus to the virtual objects based upon the results of the voting. In particular, an annotation 202A has been associated with the window indicating that the window has been down-voted and that it should be removed from the scene. 
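The gesture- and voice-based voting described above can be sketched as a simple lookup from recognized input to a vote on a virtual object. The gesture names and command vocabulary here are purely hypothetical; the disclosure does not specify a particular vocabulary.

```python
# Hypothetical vocabularies mapping recognized input to vote directions.
GESTURE_VOTES = {"thumbs_up": +1, "thumbs_down": -1}
VOICE_VOTES = {"keep": +1, "remove": -1}

def recognize_vote(input_kind, token, target_object):
    """Map a recognized gesture or voice command to a vote.

    Returns (object, +1/-1) for a vote, or None when the recognized
    input does not correspond to a vote.
    """
    table = GESTURE_VOTES if input_kind == "gesture" else VOICE_VOTES
    direction = table.get(token)
    return None if direction is None else (target_object, direction)
```

A recognizer built this way lets HMD and non-HMD clients feed the same (object, direction) vote records into the ranking step described above.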
The system 100 might also dim or change display properties of virtual objects that are not determined to be a high priority. For instance, in this example, the window can be faded or distorted if the results of the voting indicate that it is a lower priority than other virtual objects. The window or other content might also be removed from the view altogether as shown in FIG. 3C. An annotation 202B has also been associated with the table indicating that the table has been up-voted and that it should be retained in the scene. The system 100 has also added a UI element 204 to the display of the content 110 to highlight the up-voted virtual object (i.e. the table). As discussed above, such a UI element can be added to the display of the content 110 if the portion of the content 110 has a priority exceeding a threshold value. FIGS. 3D-3F illustrate one example where a display attribute of down-voted content can be gradually modified as voting progresses. In this example, the intensity of a down-voted virtual object (e.g. the illustrated window) is gradually decreased as voting progresses. In FIG. 3D, for instance, a few users 102 might have down-voted the window and, as a result, its intensity has been reduced. Later, a few more users 102 might have down-voted the window and, as a result, its intensity has been reduced further as shown in FIG. 3E. Finally, once all of the users 102 have completed voting and the votes have been tallied, the window might be removed altogether. It is to be appreciated that visual attributes other than intensity can be modified as voting progresses including, but not limited to, translucency, color, and shading. Turning now to FIGS. 4A and 4B, an illustrative user interface for associating weights with votes 114 provided by different users 102 will be described. 
As discussed briefly above, the server computer 106 applies weights to the votes 114 received from the users 102 in a multi-user sharing session prior to determining the priority for a portion of the content 110 in some configurations. As also discussed briefly above, the weighting can be manually specified or based on a context associated with each of the users 102 such as, but not limited to, a user's 102 role in an organization or a user's 102 past voting history. FIG. 4A shows a UI 402 that can be utilized to manually specify the weights for votes made by users 102 of the system 100 described herein. As illustrated in FIG. 4A, the UI 402 displays a tree structure defining a hierarchy of users 102 within an organization. Icons 404A-404F are provided for each of the users 102 and lines connecting the icons 404A-404F indicate the relative relationships between the users 102 within the organization. The UI 402 also includes UI fields 406A-406F that correspond to the icons 404A-404F and their associated users 102. A user of the UI 402 can specify weights to be applied to votes 114 made by each of the users 102 in the fields 406A-406F. For example, a weight of 2.0 is to be applied to votes 114 made by a user 102 named “Susan,” a weight of 1.0 is to be applied to votes 114 made by a user 102 named “Jim,” and a weight of 0.1 is to be applied to votes 114 made by a user 102 named “Anand.” In some configurations, the weights shown in the UI fields 406A-406F can be prepopulated by analyzing whether users 102 were in the majority during previous votes or by analyzing other information. For example, a user 102 that is regularly in the majority when voting might have an assigned weight that is higher than the weight for a user that is rarely in the majority or that does not regularly vote. As discussed above, the weights can also vary based on the content 110 or a context associated with each user 102. 
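One hypothetical way to prepopulate the per-user weights shown in the UI fields described above is to combine a user's participation rate with how often they vote with the eventual majority. The 0.1-2.0 range mirrors the example weights above; the formula itself is an assumption for illustration.

```python
def prepopulate_weight(votes_cast, votes_with_majority, total_votes_held):
    """Suggest a vote weight in [0.1, 2.0].

    Frequent voters who regularly land in the majority get a higher
    weight; non-participants get the minimum weight.
    """
    if votes_cast == 0 or total_votes_held == 0:
        return 0.1  # never voted: minimum weight
    participation = votes_cast / total_votes_held
    majority_rate = votes_with_majority / votes_cast
    return round(max(0.1, min(2.0, 2.0 * participation * majority_rate)), 2)
```

A user who voted in all ten past votes and was in the majority nine times would be suggested a weight of 1.8, while a user who never votes would start at 0.1.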
For instance, votes 114 made by users 102 associated with a team or certain roles of an organization can be weighted differently than others. Weights might also be modified based upon a user's participation in votes, such as by lowering the weight for a user 102 that does not vote often. A non-vote by a user might also be considered in various ways, such as counting a non-vote as a negative vote or negatively impacting the weight associated with a user 102. As shown in FIG. 4B, the UI 402 might also be configured as a list showing the users 102 and their corresponding weights. In this example, a machine learning component 116 has also been assigned weights. Additionally, in this example, different weights can be assigned to users based on certain contexts. For instance, different weights might be assigned to the users 102 depending upon the topic that is being voted on, in this case hardware design or UI design. Different weights might be assigned to users 102 for different contexts such as, but not limited to, the identities of the other people in a particular voting session, the location of a meeting, or whether votes are cast as a team or committee. The assigned weights might or might not be viewable to the voting participants. FIG. 5 is a flow diagram illustrating aspects of a routine 500 for enabling aspects of the present disclosure. It should be appreciated that the logical operations described herein with regard to FIG. 5, and the other FIGS., can be implemented (1) as a sequence of computer implemented acts or program modules running on a computing device and/or (2) as interconnected machine logic circuits or circuit modules within a computing device. The particular implementation of the technologies disclosed herein is a matter of choice dependent on the performance and other requirements of the computing device. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. 
These states, operations, structural devices, acts, and modules can be implemented in hardware, software, firmware, special-purpose digital logic, or any combination thereof. It should be appreciated that more or fewer operations can be performed than shown in the FIGS. and described herein. These operations can also be performed in a different order than those described herein. It also should be understood that the illustrated methods can end at any time and need not be performed in their entireties. Some or all operations of the methods, and/or substantially equivalent operations, can be performed by execution of computer-readable instructions included on computer-storage media, as defined below. The term “computer-readable instructions,” and variants thereof, as used in the description and claims, is used expansively herein to include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, combinations thereof, and the like. For example, the operations of the routine 500 can be implemented by dynamically linked libraries (“DLLs”), statically linked libraries, functionality produced by an application programming interface (“API”), a compiled program, an interpreted program, a script, a network service or site, or any other executable set of instructions. Data can be stored in a data structure in one or more memory components. Data can be retrieved from the data structure by addressing links or references to the data structure. Although the following illustration refers to the components of the FIGS., it can be appreciated that the operations of the routine 500 may also be implemented in many other ways. 
For example, the routine 500 may be implemented, at least in part, by a processor of another remote computer, processor or circuit. In addition, one or more of the operations of the routine 500 may alternatively or additionally be implemented, at least in part, by a chipset working alone or in conjunction with other software modules. In the example described below, one or more modules of a computing system can receive and/or process the data disclosed herein. Any service, circuit or application suitable for providing the techniques disclosed herein can be used in operations described herein. With reference to FIG. 5, the routine 500 begins at operation 501 where content 110 is presented to the users 102 of a multi-user sharing session. As discussed above, the multi-user sharing session might, for example, be implemented as a digital whiteboard presenting whiteboard objects or as a 3D collaborative workspace presenting virtual objects displayed in a mixed reality computing environment in other configurations. From operation 501, the routine 500 proceeds to operation 503, where votes 114 are received from the users 102 of the multi-user sharing session. As discussed above, users 102 participating in a multi-user sharing session can provide input gestures in order to vote on portions of the displayed content. The input gestures made by the users indicate a preference for a portion of the displayed content. For example, and without limitation, a user might provide an input gesture indicating that they favor or disfavor a portion of the displayed content 110. From operation 503, the routine 500 proceeds to operation 505, where a server computer, or other computing device operating as a part of the disclosed system 100, collects the votes 114 from the users 102 in the multi-user sharing session. 
The server computer can also apply weights to the votes 114 received from the users 102 in a multi-user sharing session prior to determining the priority for a portion of the content 110. As discussed above, the weights can be user-specified or can be based on a context associated with each of the users 102 such as, but not limited to, a user's role in an organization or a user's past voting history. From operation 505, the routine 500 proceeds to operation 507, where the server computer analyzes the votes 114 to determine a priority for a portion of the content 110. The routine 500 then proceeds to operation 509, where the server computer can then modify a display of the content 110 based on the priority for the portion of the content 110 by rearranging the content, deleting the portion of the content if the portion of the content has a priority not exceeding a threshold, generating an annotation indicating the preference for the portion of the content, or adding a UI element to the content to bring focus to the portion of the content if the portion of the content has a priority exceeding a threshold. The server computer can modify the content in other ways based upon the priority associated with the portions of the content in other configurations. From operation 509, the routine 500 proceeds to operation 511, where the server computer can also generate a report that indicates the user preferences for portions of the content 110 and that provides other information regarding the voting. As discussed above, the report can include data identifying the users 102 that participated in a vote, the votes 114 cast by each user 102, the computed priorities for each portion of content 110, and the modifications made to the content 110 based on the voting. Such a report could also describe the historical participation of users 102 in voting, the historical votes 114 cast by the users 102 (e.g. 
a histogram showing the history of votes 114 cast by each user, including non-votes), the historical results of votes, and/or other types of information in other configurations. FIG. 6 shows additional details of an example computer architecture 600 for a computer, such as the client devices and the server computer 106 shown in FIGS. 1A-2C, capable of executing the program components described herein. Thus, the computer architecture 600 illustrated in FIG. 6 provides an architecture for a server computer, a mobile phone, a PDA, a smart phone, a desktop computer, a netbook computer, a tablet computer, and/or a laptop computer. The computer architecture 600 may be utilized to execute any aspects of the software components presented herein. The computer architecture 600 illustrated in FIG. 6 includes a central processing unit 602 (“CPU”), a system memory 604, including a random access memory 606 (“RAM”) and a read-only memory (“ROM”) 608, and a system bus 610 that couples the memory 604 to the CPU 602. A basic input/output system containing the basic routines that help to transfer information between elements within the computer architecture 600, such as during startup, is stored in the ROM 608. The computer architecture 600 further includes a mass storage device 612 for storing an operating system, an application 620 such as a digital whiteboard application, the machine learning component 116, a file 108 containing content 110, and other data described herein. The mass storage device 612 is connected to the CPU 602 through a mass storage controller (not shown) connected to the bus 610. The mass storage device 612 and its associated computer-readable media provide non-volatile storage for the computer architecture 600. 
Although the description of computer-readable media contained herein refers to a mass storage device, such as a solid-state drive, a hard disk or CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available computer-readable storage media or communication media that can be accessed by the computer architecture 600. Communication media includes computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics changed or set in a manner so as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. By way of example, and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, digital versatile disks (“DVD”), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by the computer architecture 600. 
For purposes of the claims, the phrase “computer storage medium,” “computer-readable storage medium” and variations thereof, does not include waves, signals, and/or other transitory and/or intangible communication media, per se. According to various configurations, the computer architecture 600 may operate in a networked environment using logical connections to remote computers through the network 656 and/or another network (not shown in FIG. 6). The computer architecture 600 may connect to the network 656 through a network interface unit 614 connected to the bus 610. It should be appreciated that the network interface unit 614 also may be utilized to connect to other types of networks and remote computer systems. The computer architecture 600 also may include an input/output controller 616 for receiving and processing input from a number of other devices, including a keyboard, mouse, or electronic stylus (also not shown in FIG. 6). Similarly, the input/output controller 616 may provide output to a display screen, a printer, or other type of output device (also not shown in FIG. 6). It should be appreciated that the software components described herein may, when loaded into the CPU 602 and executed, transform the CPU 602 and the overall computer architecture 600 from a general-purpose computing system into a special-purpose computing system customized to facilitate the functionality presented herein. The CPU 602 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the CPU 602 may operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions may transform the CPU 602 by specifying how the CPU 602 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the CPU 602. 
Encoding the software modules presented herein also may transform the physical structure of the computer-readable media presented herein. The specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the computer-readable media, whether the computer-readable media is characterized as primary or secondary storage, and the like. For example, if the computer-readable media is implemented as semiconductor-based memory, the software disclosed herein may be encoded on the computer-readable media by transforming the physical state of the semiconductor memory. For example, the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software also may transform the physical state of such components in order to store data thereupon. As another example, the computer-readable media disclosed herein may be implemented using magnetic or optical technology. In such implementations, the software presented herein may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations also may include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion. In light of the above, it should be appreciated that many types of physical transformations take place in the computer architecture 600 in order to store and execute the software components presented herein. 
It also should be appreciated that the computer architecture 600 may include other types of computing devices, including hand-held computers, embedded computer systems, personal digital assistants, and other types of computing devices known to those skilled in the art. It is also contemplated that the computer architecture 600 may not include all of the components shown in FIG. 6, may include other components that are not explicitly shown in FIG. 6, or may utilize an architecture completely different from that shown in FIG. 6. FIG. 7 depicts an illustrative distributed computing environment 700 capable of executing the software components described herein. Thus, the distributed computing environment 700 illustrated in FIG. 7 can be utilized to execute any aspects of the software components presented herein. According to various implementations, the distributed computing environment 700 includes a computing environment 702 operating on, in communication with, or as part of the network 704. The network 704 may be or may include the network 656, described above with reference to FIG. 6. The network 704 also can include various access networks. One or more client devices 706A-706N (hereinafter referred to collectively and/or generically as “clients 706” and also referred to herein as computing devices 706) can communicate with the computing environment 702 via the network 704 and/or other connections (not illustrated in FIG. 7). In one illustrated configuration, the clients 706 include a computing device 706A such as a laptop computer, a desktop computer, or other computing device; a slate or tablet computing device (“tablet computing device”) 706B; a mobile computing device 706C such as a mobile telephone, a smart phone, or other mobile computing device; a server computer 706D; and/or other devices 706N. 
It should be understood that any number of clients 706 can communicate with the computing environment 702. Two example computing architectures for the clients 706 are illustrated and described herein. It should be understood that the illustrated clients 706 and computing architectures illustrated and described herein are illustrative and should not be construed as being limited in any way. In the illustrated configuration, the computing environment 702 includes application servers 708, data storage 710, and one or more network interfaces 712. According to various implementations, the functionality of the application servers 708 can be provided by one or more server computers that are executing as part of, or in communication with, the network 704. The application servers 708 can host various services, virtual machines, portals, and/or other resources. In the illustrated configuration, the application servers 708 host one or more virtual machines 714 for hosting applications or other functionality. According to various implementations, the virtual machines 714 host one or more applications and/or software modules for implementing aspects of the functionality disclosed herein. It should be understood that this configuration is illustrative and should not be construed as being limiting in any way. The application servers 708 can also host or provide access to one or more portals, link pages, Web sites, and/or other information (“web portals”) 716. According to various implementations, the application servers 708 also include one or more mailbox services 718 and one or more messaging services 720. The mailbox services 718 can include electronic mail (“email”) services. The mailbox services 718 also can include various personal information management (“PIM”) and presence services including, but not limited to, calendar services, contact management services, collaboration services, and/or other services. 
The messaging services 720 can include, but are not limited to, instant messaging services, chat services, forum services, and/or other communication services. The application servers 708 also may include one or more social networking services 722. The social networking services 722 can include various social networking services including, but not limited to, services for sharing or posting status updates, instant messages, links, photos, videos, and/or other information; services for commenting or displaying interest in articles, products, blogs, or other resources; and/or other services. In some configurations, the social networking services 722 are provided by or include the FACEBOOK social networking service, the LINKEDIN professional networking service, the MYSPACE social networking service, the FOURSQUARE geographic networking service, the YAMMER office colleague networking service, and the like. In other configurations, the social networking services 722 are provided by other services, sites, and/or providers that may or may not be explicitly known as social networking providers. For example, some web sites allow users to interact with one another via email, chat services, and/or other means during various activities and/or contexts such as reading published articles, commenting on goods or services, publishing, collaboration, gaming, and the like. Examples of such services include, but are not limited to, the WINDOWS LIVE service and the XBOX LIVE service from Microsoft Corporation in Redmond, Wash. Other services are possible and are contemplated. The social networking services 722 also can include commenting, blogging, and/or micro blogging services. Examples of such services include, but are not limited to, the YELP commenting service, the TWITTER messaging service, and/or other services. 
It should be appreciated that the above lists of services are not exhaustive and that numerous additional and/or alternative social networking services 722 are not mentioned herein for the sake of brevity. As such, the above configurations are illustrative, and should not be construed as being limited in any way. According to various implementations, the social networking services 722 may host one or more applications and/or software modules for providing the functionality described herein. For instance, any one of the application servers 708 may communicate or facilitate the functionality and features described herein. For instance, a social networking application, mail client, messaging client or a browser running on a phone or any other client 706 may communicate with a social networking service 722 and facilitate, at least in part, the functionality described above with respect to FIG. 7. Any device or service depicted herein can be used as a resource for supplemental data, including email servers, storage servers, etc. As shown in FIG. 7, the application servers 708 also can host other services, applications, portals, and/or other resources (“other resources”) 724. The other resources 724 can include, but are not limited to, document sharing, rendering or any other functionality. It thus can be appreciated that the computing environment 702 can provide integration of the concepts and technologies disclosed herein with various mailbox, messaging, social networking, and/or other services or resources. As mentioned above, the computing environment 702 can include the data storage 710. According to various implementations, the functionality of the data storage 710 is provided by one or more databases operating on, or in communication with, the network 704. The functionality of the data storage 710 also can be provided by one or more server computers configured to host data for the computing environment 702. 
The data storage 710 can include, host, or provide one or more real or virtual datastores 726A-726N (hereinafter referred to collectively and/or generically as “datastores 726”). The datastores 726 are configured to host data used or created by the application servers 708 and/or other data. Although not illustrated in FIG. 7, the datastores 726 also can host or store web page documents, word documents, presentation documents, data structures, algorithms for execution by a recommendation engine, and/or other data utilized by any application program or another module. Aspects of the datastores 726 may be associated with a service for storing files. The computing environment 702 can communicate with, or be accessed by, the network interfaces 712. The network interfaces 712 can include various types of network hardware and software for supporting communications between two or more computing devices including, but not limited to, the computing devices and the servers. It should be appreciated that the network interfaces 712 also may be utilized to connect to other types of networks and/or computer systems. It should be understood that the distributed computing environment 700 described herein can provide any aspects of the software elements described herein with any number of virtual computing resources and/or other distributed computing functionality that can be configured to execute any aspects of the software components disclosed herein. According to various implementations of the concepts and technologies disclosed herein, the distributed computing environment 700 provides the software functionality described herein as a service to the computing devices. It should also be understood that the computing devices can include real or virtual machines including, but not limited to, server computers, web servers, personal computers, mobile computing devices, smart phones, and/or other devices. 
As such, various configurations of the concepts and technologies disclosed herein enable any device configured to access the distributed computing environment 700 to utilize the functionality described herein for providing the techniques disclosed herein, among other aspects. In one specific example, as summarized above, techniques described herein may be implemented, at least in part, by a web browser application, which works in conjunction with the application servers 708 of FIG. 7. Turning now to FIG. 8, an illustrative computing device architecture 800 for a computing device that is capable of executing various software components described herein is shown. The computing device architecture 800 is applicable to computing devices that facilitate mobile computing due, in part, to form factor, wireless connectivity, and/or battery-powered operation. In some configurations, the computing devices include, but are not limited to, mobile telephones, tablet devices, slate devices, portable video game devices, and the like. The computing device architecture 800 is applicable to any of the computing devices shown in FIGS. 1A-3B. Moreover, aspects of the computing device architecture 800 may be applicable to traditional desktop computers, portable computers (e.g., phones, laptops, notebooks, ultra-portables, and netbooks), server computers, and other computer systems, such as those described herein. For example, the single touch and multi-touch aspects disclosed herein below may be applied to desktop computers that utilize a touchscreen or some other touch-enabled device, such as a touch-enabled track pad or touch-enabled mouse. The computing device architecture 800 illustrated in FIG. 8 includes a processor 802, memory components 804, network connectivity components 806, sensor components 808, input/output components 810, and power components 812. 
In the illustrated configuration, the processor 802 is in communication with the memory components 804, the network connectivity components 806, the sensor components 808, the input/output (“I/O”) components 810, and the power components 812. Although no connections are shown between the individual components illustrated in FIG. 8, the components can interact to carry out device functions. In some configurations, the components are arranged so as to communicate via one or more buses (not shown in FIG. 8). The processor 802 is configured to process data, execute computer-executable instructions of one or more application programs, and communicate with other components of the computing device architecture 800 in order to perform various functionality described herein. The processor 802 may be utilized to execute aspects of the software components presented herein and, particularly, those that utilize, at least in part, a touch-enabled input. In some configurations, the processor 802 includes a graphics processing unit (“GPU”) configured to accelerate operations performed by the CPU, including, but not limited to, operations performed by executing general-purpose scientific and/or engineering computing applications, as well as graphics-intensive computing applications such as high-resolution video (e.g., 720P, 1080P, and higher resolution), video games, 3D modeling applications, and the like. In some configurations, the processor 802 is configured to communicate with a discrete GPU (also not shown in FIG. 8). In any case, the CPU and GPU may be configured in accordance with a co-processing CPU/GPU computing model, wherein the sequential part of an application executes on the CPU and the computationally-intensive part is accelerated by the GPU. In some configurations, the processor 802 is, or is included in, a system-on-chip (“SoC”) along with one or more of the other components described herein below. 
For example, the SoC may include the processor 802, a GPU, one or more of the network connectivity components 806, and one or more of the sensor components 808. In some configurations, the processor 802 is fabricated, in part, utilizing a package-on-package (“PoP”) integrated circuit packaging technique. The processor 802 may be a single core or multi-core processor. The processor 802 may be created in accordance with an ARM architecture, available for license from ARM HOLDINGS of Cambridge, United Kingdom. Alternatively, the processor 802 may be created in accordance with an x86 architecture, such as is available from INTEL CORPORATION of Mountain View, Calif. and others. In some configurations, the processor 802 is a SNAPDRAGON SoC, available from QUALCOMM of San Diego, Calif., a TEGRA SoC, available from NVIDIA of Santa Clara, Calif., a HUMMINGBIRD SoC, available from SAMSUNG of Seoul, South Korea, an Open Multimedia Application Platform (“OMAP”) SoC, available from TEXAS INSTRUMENTS of Dallas, Tex., a customized version of any of the above SoCs, or a proprietary SoC. The memory components 804 include a RAM 814, a ROM 816, an integrated storage memory (“integrated storage”) 818, and a removable storage memory (“removable storage”) 820. In some configurations, the RAM 814 or a portion thereof, the ROM 816 or a portion thereof, and/or some combination of the RAM 814 and the ROM 816 is integrated in the processor 802. In some configurations, the ROM 816 is configured to store a firmware, an operating system or a portion thereof (e.g., operating system kernel), and/or a bootloader to load an operating system kernel from the integrated storage 818 and/or the removable storage 820. The integrated storage 818 can include a solid-state memory, a hard disk, or a combination of solid-state memory and a hard disk. 
The integrated storage 818 may be soldered or otherwise connected to a logic board upon which the processor 802 and other components described herein also may be connected. As such, the integrated storage 818 is integrated in the computing device. The integrated storage 818 is configured to store an operating system or portions thereof, application programs, data, and other software components described herein. The removable storage 820 can include a solid-state memory, a hard disk, or a combination of solid-state memory and a hard disk. In some configurations, the removable storage 820 is provided in lieu of the integrated storage 818. In other configurations, the removable storage 820 is provided as additional optional storage. In some configurations, the removable storage 820 is logically combined with the integrated storage 818 such that the total available storage is made available as a total combined storage capacity. In some configurations, the total combined capacity of the integrated storage 818 and the removable storage 820 is shown to a user instead of separate storage capacities for the integrated storage 818 and the removable storage 820. The removable storage 820 is configured to be inserted into a removable storage memory slot (not shown) or other mechanism by which the removable storage 820 is inserted and secured to facilitate a connection over which the removable storage 820 can communicate with other components of the computing device, such as the processor 802. The removable storage 820 may be embodied in various memory card formats including, but not limited to, PC card, CompactFlash card, memory stick, secure digital (“SD”), miniSD, microSD, universal integrated circuit card (“UICC”) (e.g., a subscriber identity module (“SIM”) or universal SIM (“USIM”)), a proprietary format, or the like. It can be understood that one or more of the memory components 804 can store an operating system. 
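The logical combination of the integrated storage 818 and the removable storage 820 into a single reported capacity, as described above, can be illustrated with a short sketch; the function name and units here are hypothetical.

```python
def combined_storage_view(integrated_gb, removable_gb=None):
    """Present storage as described above: when removable storage is
    present, its capacity is logically combined with the integrated
    storage, and a single total is shown to the user instead of
    separate capacities."""
    if removable_gb is None:  # no removable storage inserted
        return {"total_gb": integrated_gb, "combined": False}
    return {"total_gb": integrated_gb + removable_gb, "combined": True}
```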
According to various configurations, the operating system includes, but is not limited to IOS from Apple Inc. of Cupertino, Calif., and ANDROID OS from Google Inc. of Mountain View, Calif. Other operating systems are contemplated. The network connectivity components 806 include a wireless wide area network component (“WWAN component”) 822, a wireless local area network component (“WLAN component”) 824, and a wireless personal area network component (“WPAN component”) 826. The network connectivity components 806 facilitate communications to and from the network 856 or another network, which may be a WWAN, a WLAN, or a WPAN. Although only the network 856 is illustrated, the network connectivity components 806 may facilitate simultaneous communication with multiple networks, including the network 704 of FIG. 7. For example, the network connectivity components 806 may facilitate simultaneous communications with multiple networks via one or more of a WWAN, a WLAN, or a WPAN. The network 856 may be or may include a WWAN, such as a mobile telecommunications network utilizing one or more mobile telecommunications technologies to provide voice and/or data services to a computing device utilizing the computing device architecture 800 via the WWAN component 822. The mobile telecommunications technologies can include, but are not limited to, Global System for Mobile communications (“GSM”), Code Division Multiple Access (“CDMA”) ONE, CDMA2000, Universal Mobile Telecommunications System (“UMTS”), Long Term Evolution (“LTE”), and Worldwide Interoperability for Microwave Access (“WiMAX”). Moreover, the network 856 may utilize various channel access methods (which may or may not be used by the aforementioned standards) including, but not limited to, Time Division Multiple Access (“TDMA”), Frequency Division Multiple Access (“FDMA”), CDMA, wideband CDMA (“W-CDMA”), Orthogonal Frequency Division Multiplexing (“OFDM”), Space Division Multiple Access (“SDMA”), and the like. 
Data communications may be provided using General Packet Radio Service (“GPRS”), Enhanced Data rates for Global Evolution (“EDGE”), the High-Speed Packet Access (“HSPA”) protocol family including High-Speed Downlink Packet Access (“HSDPA”), Enhanced Uplink (“EUL”) or otherwise termed High-Speed Uplink Packet Access (“HSUPA”), Evolved HSPA (“HSPA+”), LTE, and various other current and future wireless data access standards. The network 856 may be configured to provide voice and/or data communications with any combination of the above technologies. The network 856 may be configured to or adapted to provide voice and/or data communications in accordance with future generation technologies. In some configurations, the WWAN component 822 is configured to provide dual-multi-mode connectivity to the network 856. For example, the WWAN component 822 may be configured to provide connectivity to the network 856, wherein the network 856 provides service via GSM and UMTS technologies, or via some other combination of technologies. Alternatively, multiple WWAN components 822 may be utilized to perform such functionality, and/or provide additional functionality to support other non-compatible technologies (i.e., incapable of being supported by a single WWAN component). The WWAN component 822 may facilitate similar connectivity to multiple networks (e.g., a UMTS network and an LTE network). The network 856 may be a WLAN operating in accordance with one or more Institute of Electrical and Electronic Engineers (“IEEE”) 802.11 standards, such as IEEE 802.11a, 802.11b, 802.11g, 802.11n, and/or future 802.11 standard (referred to herein collectively as WI-FI). Draft 802.11 standards are also contemplated. In some configurations, the WLAN is implemented utilizing one or more wireless WI-FI access points. In some configurations, one or more of the wireless WI-FI access points is another computing device with connectivity to a WWAN that is functioning as a WI-FI hotspot. 
The WLAN component 824 is configured to connect to the network 856 via the WI-FI access points. Such connections may be secured via various encryption technologies including, but not limited to, WI-FI Protected Access (“WPA”), WPA2, Wired Equivalent Privacy (“WEP”), and the like. The network 856 may be a WPAN operating in accordance with Infrared Data Association (“IrDA”), BLUETOOTH, wireless Universal Serial Bus (“USB”), Z-Wave, ZIGBEE, or some other short-range wireless technology. In some configurations, the WPAN component 826 is configured to facilitate communications with other devices, such as peripherals, computers, or other computing devices via the WPAN. The sensor components 808 include a magnetometer 828, an ambient light sensor 830, a proximity sensor 832, an accelerometer 834, a gyroscope 836, and a Global Positioning System sensor (“GPS sensor”) 838. It is contemplated that other sensors, such as, but not limited to, temperature sensors or shock detection sensors, also may be incorporated in the computing device architecture 800. The magnetometer 828 is configured to measure the strength and direction of a magnetic field. In some configurations, the magnetometer 828 provides measurements to a compass application program stored within one of the memory components 804 in order to provide a user with accurate directions in a frame of reference including the cardinal directions, north, south, east, and west. Similar measurements may be provided to a navigation application program that includes a compass component. Other uses of measurements obtained by the magnetometer 828 are contemplated. The ambient light sensor 830 is configured to measure ambient light. In some configurations, the ambient light sensor 830 provides measurements to an application program stored within one of the memory components 804 in order to automatically adjust the brightness of a display (described below) to compensate for low-light and high-light environments. 
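A compass application program of the kind described for the magnetometer 828 above could derive a heading from the sensor's field measurements roughly as follows. This is a simplified sketch under assumed axis conventions; it ignores tilt compensation and magnetic declination, which real compass applications must handle.

```python
import math

def compass_heading_degrees(mx, my):
    """Convert horizontal magnetic-field components (any consistent
    unit) into a heading in degrees clockwise from magnetic north.
    Assumes the device is held flat and that my points toward the
    device's top edge; these conventions are assumptions."""
    heading = math.degrees(math.atan2(mx, my))
    return heading % 360.0  # normalize to [0, 360)

def cardinal_direction(heading):
    """Map a heading to the nearest cardinal direction."""
    names = ["north", "east", "south", "west"]
    return names[int((heading + 45.0) % 360.0 // 90.0)]
```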
Other uses of measurements obtained by the ambient light sensor 830 are contemplated. The proximity sensor 832 is configured to detect the presence of an object or thing in proximity to the computing device without direct contact. In some configurations, the proximity sensor 832 detects the presence of a user's body (e.g., the user's face) and provides this information to an application program stored within one of the memory components 804 that utilizes the proximity information to enable or disable some functionality of the computing device. For example, a telephone application program may automatically disable a touchscreen (described below) in response to receiving the proximity information so that the user's face does not inadvertently end a call or enable/disable other functionality within the telephone application program during the call. Other uses of proximity as detected by the proximity sensor 832 are contemplated. The accelerometer 834 is configured to measure proper acceleration. In some configurations, output from the accelerometer 834 is used by an application program as an input mechanism to control some functionality of the application program. For example, the application program may be a video game in which a character, a portion thereof, or an object is moved or otherwise manipulated in response to input received via the accelerometer 834. In some configurations, output from the accelerometer 834 is provided to an application program for use in switching between landscape and portrait modes, calculating coordinate acceleration, or detecting a fall. Other uses of the accelerometer 834 are contemplated. The gyroscope 836 is configured to measure and maintain orientation. In some configurations, output from the gyroscope 836 is used by an application program as an input mechanism to control some functionality of the application program. 
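The landscape/portrait switching and fall detection mentioned for the accelerometer 834 above can be sketched from raw axis readings. The axis and sign conventions and the free-fall threshold below are assumptions; real implementations add filtering and hysteresis.

```python
def screen_orientation(ax, ay):
    """Choose a display orientation from gravity's projection onto the
    device's x (short) and y (long) axes, in any consistent unit.
    Upright devices see gravity mostly on the y axis; devices turned
    on their side see it mostly on the x axis."""
    if abs(ay) >= abs(ax):
        return "portrait" if ay <= 0 else "portrait-upside-down"
    return "landscape-left" if ax <= 0 else "landscape-right"

def in_free_fall(ax, ay, az, threshold=2.0):
    """Detect a fall: during free fall the measured proper
    acceleration magnitude drops toward zero (threshold in m/s^2)."""
    return (ax * ax + ay * ay + az * az) ** 0.5 < threshold
```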
For example, the gyroscope 836 can be used for accurate recognition of movement within a 3D environment of a video game application or some other application. In some configurations, an application program utilizes output from the gyroscope 836 and the accelerometer 834 to enhance control of some functionality of the application program. Other uses of the gyroscope 836 are contemplated. The GPS sensor 838 is configured to receive signals from GPS satellites for use in calculating a location. The location calculated by the GPS sensor 838 may be used by any application program that requires or benefits from location information. For example, the location calculated by the GPS sensor 838 may be used with a navigation application program to provide directions from the location to a destination or directions from the destination to the location. Moreover, the GPS sensor 838 may be used to provide location information to an external location-based service, such as E911 service. The GPS sensor 838 may obtain location information generated via WI-FI, WIMAX, and/or cellular triangulation techniques utilizing one or more of the network connectivity components 806 to aid the GPS sensor 838 in obtaining a location fix. The GPS sensor 838 may also be used in Assisted GPS (“A-GPS”) systems. The GPS sensor 838 can also operate in conjunction with other components, such as the processor 802, to generate positioning data for the computing device 800. The I/O components 810 include a display 840, a touchscreen 842, a data I/O interface component (“data I/O”) 844, an audio I/O interface component (“audio I/O”) 846, a video I/O interface component (“video I/O”) 848, and a camera 850. In some configurations, the display 840 and the touchscreen 842 are combined. In some configurations two or more of the data I/O component 844, the audio I/O component 846, and the video I/O component 848 are combined. 
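The navigation use of the GPS sensor 838 described above, relating the computed location to a destination, ultimately rests on a great-circle distance between two fixes. A minimal haversine sketch (my own illustration, not part of the disclosure):

```python
import math

def haversine_km(lat1: float, lon1: float,
                 lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two
    latitude/longitude fixes, e.g. a GPS location and a
    navigation destination."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

One degree of latitude at the equator comes out near 111 km, a quick sanity check on the constant used.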
The I/O components 810 may include discrete processors configured to support the various interfaces described below or may include processing functionality built into the processor 802. The display 840 is an output device configured to present information in a visual form. In particular, the display 840 may present graphical user interface (“GUI”) elements, text, images, video, notifications, virtual buttons, virtual keyboards, messaging data, Internet content, device status, time, date, calendar data, preferences, map information, location information, and any other information that is capable of being presented in a visual form. In some configurations, the display 840 is a liquid crystal display (“LCD”) utilizing any active or passive matrix technology and any backlighting technology (if used). In some configurations, the display 840 is an organic light emitting diode (“OLED”) display. Other display types are contemplated. The touchscreen 842, also referred to herein as a “touch-enabled screen,” is an input device configured to detect the presence and location of a touch. The touchscreen 842 may be a resistive touchscreen, a capacitive touchscreen, a surface acoustic wave touchscreen, an infrared touchscreen, an optical imaging touchscreen, a dispersive signal touchscreen, an acoustic pulse recognition touchscreen, or may utilize any other touchscreen technology. In some configurations, the touchscreen 842 is incorporated on top of the display 840 as a transparent layer to enable a user to use one or more touches to interact with objects or other information presented on the display 840. In other configurations, the touchscreen 842 is a touch pad incorporated on a surface of the computing device that does not include the display 840. For example, the computing device may have a touchscreen incorporated on top of the display 840 and a touch pad on a surface opposite the display 840. In some configurations, the touchscreen 842 is a single-touch touchscreen. 
In other configurations, the touchscreen 842 is a multi-touch touchscreen. In some configurations, the touchscreen 842 is configured to detect discrete touches, single touch gestures, and/or multi-touch gestures. These are collectively referred to herein as gestures for convenience. Several gestures will now be described. It should be understood that these gestures are illustrative and are not intended to limit the scope of the appended claims. Moreover, the described gestures, additional gestures, and/or alternative gestures may be implemented in software for use with the touchscreen 842. As such, a developer may create gestures that are specific to a particular application program. In some configurations, the touchscreen 842 supports a tap gesture in which a user taps the touchscreen 842 once on an item presented on the display 840. The tap gesture may be used for various reasons including, but not limited to, opening or launching whatever the user taps. In some configurations, the touchscreen 842 supports a double tap gesture in which a user taps the touchscreen 842 twice on an item presented on the display 840. The double tap gesture may be used for various reasons including, but not limited to, zooming in or zooming out in stages. In some configurations, the touchscreen 842 supports a tap and hold gesture in which a user taps the touchscreen 842 and maintains contact for at least a pre-defined time. The tap and hold gesture may be used for various reasons including, but not limited to, opening a context-specific menu. In some configurations, the touchscreen 842 supports a pan gesture in which a user places a finger on the touchscreen 842 and maintains contact with the touchscreen 842 while moving the finger on the touchscreen 842. The pan gesture may be used for various reasons including, but not limited to, moving through screens, images, or menus at a controlled rate. Multiple finger pan gestures are also contemplated. 
In some configurations, the touchscreen 842 supports a flick gesture in which a user swipes a finger in the direction the user wants the screen to move. The flick gesture may be used for various reasons including, but not limited to, scrolling horizontally or vertically through menus or pages. In some configurations, the touchscreen 842 supports a pinch and stretch gesture in which a user makes a pinching motion with two fingers (e.g., thumb and forefinger) on the touchscreen 842 or moves the two fingers apart. The pinch and stretch gesture may be used for various reasons including, but not limited to, zooming gradually in or out of a web site, map, or picture. Although the above gestures have been described with reference to the use of one or more fingers for performing the gestures, other appendages such as toes or objects such as styluses may be used to interact with the touchscreen 842. As such, the above gestures should be understood as being illustrative and should not be construed as being limiting in any way. The data I/O interface component 844 is configured to facilitate input of data to the computing device and output of data from the computing device. In some configurations, the data I/O interface component 844 includes a connector configured to provide wired connectivity between the computing device and a computer system, for example, for synchronization operation purposes. The connector may be a proprietary connector or a standardized connector such as USB, micro-USB, mini-USB, or the like. In some configurations, the connector is a dock connector for docking the computing device with another device such as a docking station, audio device (e.g., a digital music player), or video device. The audio I/O interface component 846 is configured to provide audio input and/or output capabilities to the computing device. In some configurations, the audio I/O interface component 846 includes a microphone configured to collect audio signals. 
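The gesture family described above (tap, tap and hold, pan, flick) can be distinguished from a few summary statistics of a single-finger touch: how long the contact lasted, how far it moved, and how fast. The following classifier is a rough sketch with invented thresholds, not a gesture recognizer from the patent.

```python
def classify_gesture(duration_s: float, distance_px: float,
                     speed_px_s: float,
                     hold_time_s: float = 0.5,
                     move_tol_px: float = 10.0,
                     flick_speed_px_s: float = 1000.0) -> str:
    """Classify a completed single-finger touch from its duration,
    total travel, and release speed. Thresholds are illustrative."""
    if distance_px < move_tol_px:
        # Finger stayed put: distinguish tap from tap-and-hold by time.
        return "tap and hold" if duration_s >= hold_time_s else "tap"
    # Finger moved: a fast release reads as a flick, otherwise a pan.
    return "flick" if speed_px_s >= flick_speed_px_s else "pan"
```

Double taps and pinch/stretch gestures would need state across contacts (two taps within a time window, or two simultaneous fingers whose separation changes), which this single-contact sketch deliberately omits.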
In some configurations, the audio I/O interface component 846 includes a headphone jack configured to provide connectivity for headphones or other external speakers. In some configurations, the audio I/O interface component 846 includes a speaker for the output of audio signals. In some configurations, the audio I/O interface component 846 includes an optical audio cable out. The video I/O interface component 848 is configured to provide video input and/or output capabilities to the computing device. In some configurations, the video I/O interface component 848 includes a video connector configured to receive video as input from another device (e.g., a video media player such as a DVD or BLURAY player) or send video as output to another device (e.g., a monitor, a television, or some other external display). In some configurations, the video I/O interface component 848 includes a High-Definition Multimedia Interface (“HDMI”), mini-HDMI, micro-HDMI, DisplayPort, or proprietary connector to input/output video content. In some configurations, the video I/O interface component 848 or portions thereof is combined with the audio I/O interface component 846 or portions thereof. The camera 850 can be configured to capture still images and/or video. The camera 850 may utilize a charge coupled device (“CCD”) or a complementary metal oxide semiconductor (“CMOS”) image sensor to capture images. In some configurations, the camera 850 includes a flash to aid in taking pictures in low-light environments. Settings for the camera 850 may be implemented as hardware or software buttons. Although not illustrated, one or more hardware buttons may also be included in the computing device architecture 800. The hardware buttons may be used for controlling some operational aspect of the computing device. The hardware buttons may be dedicated buttons or multi-use buttons. The hardware buttons may be mechanical or sensor-based. 
The illustrated power components 812 include one or more batteries 852, which can be connected to a battery gauge 854. The batteries 852 may be rechargeable or disposable. Rechargeable battery types include, but are not limited to, lithium polymer, lithium ion, nickel cadmium, and nickel metal hydride. Each of the batteries 852 may be made of one or more cells. The battery gauge 854 can be configured to measure battery parameters such as current, voltage, and temperature. In some configurations, the battery gauge 854 is configured to measure the effect of a battery's discharge rate, temperature, age and other factors to predict remaining life within a certain percentage of error. In some configurations, the battery gauge 854 provides measurements to an application program that is configured to utilize the measurements to present useful power management data to a user. Power management data may include one or more of a percentage of battery used, a percentage of battery remaining, a battery condition, a remaining time, a remaining capacity (e.g., in watt hours), a current draw, and a voltage. The power components 812 may also include a power connector, which may be combined with one or more of the aforementioned I/O components 810. The power components 812 may interface with an external power system or charging equipment via an I/O component. 
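The power management data listed above (remaining capacity in watt hours, current draw, voltage, remaining time) are related by a simple calculation that a battery gauge application might perform; this sketch and its function name are illustrative, not from the patent.

```python
def remaining_time_hours(capacity_wh: float, percent_remaining: float,
                         voltage_v: float, current_draw_a: float) -> float:
    """Estimate remaining runtime: energy left (watt hours)
    divided by the present power draw (volts * amps = watts)."""
    energy_left_wh = capacity_wh * percent_remaining / 100.0
    power_w = voltage_v * current_draw_a
    return energy_left_wh / power_w
```

For example, a 50 Wh battery at 50% charge, discharging at 1 A and 5 V (5 W), has roughly 5 hours remaining. Real gauges refine this with the discharge-rate, temperature, and age effects the passage mentions.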
Example Clause A, a computer-implemented method for processing votes (114) for content (110) shared in a multi-user sharing session (100), the method comprising: causing the content (110) to be displayed to a plurality of users (102) in the multi-user sharing session (100); receiving the votes (114) from the plurality of users (102), wherein each vote (114) is based on a digital ink input gesture made by individual users (102A) of the plurality of users (102), wherein the digital ink input gesture identifies a portion of the content and a preference for the portion of the content (110); analyzing the votes (114) to determine a priority for the portion of the content (110); and modifying a display of the portion of the content (110) based on the priority for the portion of the content (110). Example Clause B, the computer-implemented method of Clause A, wherein modifying the display of the content comprises rearranging the content based on the priority for the portion of the content. Example Clause C, the computer-implemented method of any of Clauses A or B, wherein modifying the display of the content comprises deleting the portion of the content if the portion of the content has a priority not exceeding a threshold. Example Clause D, the computer-implemented method of any of Clauses A-C, wherein modifying the display of the content comprises generating an annotation indicating the preference for the portion of the content. Example Clause E, the computer-implemented method of any of Clauses A-D, wherein modifying the display of the content comprises adding a user interface (UI) element to the content to bring focus to the portion of the content if the portion of the content has a priority exceeding a threshold. 
Example Clause F, the computer-implemented method of any of Clauses A-E, wherein the multi-user sharing session comprises a three-dimensional (3D) collaborative workspace and wherein the portion of the content comprises virtual objects displayed in a mixed reality computing environment. Example Clause G, the computer-implemented method of any of Clauses A-F, wherein modifying the display of the content comprises gradually modifying a visual attribute of the portion of the content as the votes are received from the plurality of users. Example Clause H, the computer-implemented method of any of Clauses A-G, further comprising weighting the votes received from the plurality of users prior to determining the priority for the portion of the content, the weighting based at least in part on a context associated with each of the plurality of users. Example Clause I, a system, comprising: one or more processing units (602); and a computer-readable storage medium (612) having computer-executable instructions encoded thereon to cause the one or more processing units (602) to cause a display of a digital whiteboard to a plurality of users (102), the digital whiteboard comprising a plurality of whiteboard objects, receive a plurality of votes (114) from the users (102), where each vote (114) is based on an input gesture received from an individual user (102A) of the plurality of users (102), the input gesture of the individual user (102A) identifying one or more whiteboard objects and a preference for the identified whiteboard objects; analyze the votes (114) to determine a priority for the plurality of whiteboard objects; and modify the display of the digital whiteboard based on the priority. Example Clause J, the system of Clause I, wherein the modification of the display of the digital whiteboard comprises rearranging the plurality of whiteboard objects based on the priority for the plurality of whiteboard objects. 
Example Clause K, the system of Clauses I or J, wherein the modification of the display of the digital whiteboard comprises deleting one or more whiteboard objects having a priority that does not meet a threshold. Example Clause L, the system of any of Clauses I-K, wherein the modification of the display of the digital whiteboard comprises generating an annotation indicating preferences of individual whiteboard objects. Example Clause M, the system of any of Clauses I-L, wherein the computer-readable storage medium has further computer-executable instructions encoded thereon to generate an audio output describing the modification of the display of the digital whiteboard. Example Clause N, the system of any of Clauses I-M, wherein the computer-readable storage medium has further computer-executable instructions encoded thereon to receive at least one of the votes from a machine learning component. Example Clause O, a computer-readable storage medium (612) having computer-executable instructions encoded thereupon which, when executed, cause one or more processing units (602) to: cause content (110) to be displayed to a plurality of users (102) in a multi-user sharing session (100); receive votes (114) from the plurality of users (102), wherein each vote (114) is based on an input gesture made by individual users (102A) of the plurality of users (102), the input gesture of each individual user (102A) indicating a preference for a portion of the content (110); analyze the votes (114) to determine a priority for the portion of the content (110); and modify a display of the content (110) based on the priority for the portion of the content (110) by rearranging the content (110), deleting the portion of the content (110) if the portion of the content (110) has a priority not exceeding a threshold, generating an annotation indicating the preference for the portion of the content (110), or adding a user interface (UI) element to the content (110) to bring focus to the 
portion of the content (110) if the portion of the content (110) has a priority exceeding a threshold. Example Clause P, the computer-readable storage medium of Clause O, wherein the multi-user sharing session comprises a three-dimensional (3D) collaborative workspace and wherein the portion of the content comprises virtual objects displayed in a mixed reality computing environment. Example Clause Q, the computer-readable storage medium of Clauses O or P, wherein the multi-user sharing session comprises a digital whiteboard and wherein the portion of content comprises whiteboard objects displayed on the digital whiteboard. Example Clause R, the computer-readable storage medium of any of Clauses O-Q, having further computer-executable instructions encoded thereupon to generate a report indicating the preferences for the portion of the content. Example Clause S, the computer-readable storage medium of any of Clauses O-R having further computer-executable instructions encoded thereupon to weight the votes received from the plurality of users prior to determining the priority for the portion of the content, the weighting based at least in part on a context associated with each of the plurality of users. Example Clause T, the computer-readable storage medium of any of Clauses O-S, wherein the computer-readable storage medium has further computer-executable instructions encoded thereon to receive at least one of the votes from a machine learning component. In closing, although the various configurations have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter. All examples are provided for illustrative purposes and are not to be construed as limiting. 
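The weighted-vote pipeline in the example clauses (weight each user's vote by context, aggregate per content portion, then delete low-priority portions or bring focus to high-priority ones) can be sketched compactly. The tuple layout, weight table, and action names here are my own illustration of the clauses, not an implementation from the patent.

```python
from collections import defaultdict

def prioritize(votes, weights, delete_below=0.0):
    """Aggregate weighted votes per content portion and choose a
    display action in the spirit of Clauses C/E and H.
    votes:   iterable of (user, portion, preference) with preference
             +1 (for) or -1 (against)
    weights: per-user weight derived from the user's context
    Returns (priority per portion, action per portion)."""
    priority = defaultdict(float)
    for user, portion, preference in votes:
        priority[portion] += weights.get(user, 1.0) * preference
    actions = {portion: ("delete" if score <= delete_below else "focus")
               for portion, score in priority.items()}
    return dict(priority), actions
```

For instance, two weighted votes for one whiteboard object and one vote against another yields a positive priority (focus) for the first and a negative priority (delete) for the second.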
16112684 microsoft technology licensing, llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:00AM Apr 27th, 2022 09:00AM Technology Software & Computer Services Information Technology
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Oct 21st, 2019 12:00AM https://www.uspto.gov?id=US11315689-20220426 Interactive graphical system for estimating body measurements Utilizing graphical elements representing human bodies to estimate physical measurements of a user is described. In at least one example, a service provider can access a database storing a plurality of data items. The service provider can cause a set of data items of the plurality of data items to be presented to the user. Data items in the set of data items are associated with at least one graphical element representing a human body with individual magnitudes corresponding to individual dimensions of a plurality of dimensions. The service provider can receive data indicating a selection of a data item associated with a first magnitude associated with a first dimension and a second magnitude associated with a second dimension. The service provider can estimate physical measurements associated with the user based partly on the first magnitude and/or the second magnitude. 11315689 1. 
A computer-implemented method for creating realistic avatars based on estimated physical measurements of a body of a user, the method comprising: accessing a database storing a plurality of data items, wherein each data item is associated with at least one graphical element representing a human body with individual magnitudes corresponding to individual dimensions of a plurality of dimensions; performing the following for a plurality of iterations: causing a set of data items of the plurality of data items to be presented to the user via a user interface that is presented on a display of a device associated with the user, wherein individual data items in the set of data items are associated with the at least one graphical element representing the human body with individual magnitudes corresponding to individual dimensions of the plurality of dimensions; receiving data from the device, the data indicating a user selection of an individual data item from the individual data items, the individual data item being associated with a particular individual magnitude of the individual magnitudes associated with a particular dimension of the plurality of dimensions; and determining a subsequent set of data items of the plurality of data items to be presented to the user via the user interface based on the user selection of the individual data item, wherein individual data items in the subsequent set of data items are associated with the at least one graphical element representing the human body with different individual magnitudes corresponding to a respective individual dimension of the plurality of dimensions; estimating physical measurements associated with the user based on the different individual magnitudes corresponding to the respective individual dimension of the plurality of dimensions and a predictive model, the predictive model trained based on user selections of data items from each set of data items caused to be presented to the user and estimated physical 
measurements associated with profiles corresponding to users; and generating an avatar for the user based on the estimated physical measurements associated with the user. 2. The computer-implemented method according to claim 1, wherein a first dimension of the plurality of dimensions corresponds to a body mass index measurement, wherein a second dimension of the plurality of dimensions corresponds to a waist measurement, and wherein estimating the physical measurements comprises estimating a body mass index measurement of the user and a waist measurement of the user. 3. The computer-implemented method according to claim 1, wherein individual data items of the plurality of data items stored in the database are associated with graphical elements representing the human body that correspond to a same height and a same gender as the user. 4. The computer-implemented method according to claim 1, wherein the predictive model predicts at least one physical measurement of the physical measures and includes weights that are based on training the respective predictive model using the plurality of data items stored in the database. 5. The computer-implemented method according to claim 1, further comprising: accessing user selections of data items from each set of data items caused to be presented to the user; accessing estimated physical measurements associated with profiles corresponding to users; and training the multiple regression model based on the user selections of data items from each set of data items caused to be presented to the user and estimated physical measurements associated with profiles corresponding to users. 6. The computer-implemented method according to claim 1, wherein individual data items of the plurality of data items in the database are associated with two graphical elements representing the human body, and wherein each graphical element of the two graphical elements corresponds to a different view of the human body. 7. 
The computer-implemented method according to claim 1, further comprising: generating an additional user interface that provides functionality to present the physical measurements and the avatar to the user via the display of the device; and sending the additional user interface to the device. 8. A system for creating realistic avatars based on estimated physical measurements of a body of a user, the system including: a data storage device that stores instructions for creating realistic avatars based on estimated physical measurements of a body of a user; and a processor configured to execute the instructions to perform a method including: accessing a database storing a plurality of data items, wherein each data item is associated with at least one graphical element representing a human body with individual magnitudes corresponding to individual dimensions of a plurality of dimensions; performing the following for a plurality of iterations: causing a set of data items of the plurality of data items to be presented to the user via a user interface that is presented on a display of a device associated with the user, wherein individual data items in the set of data items are associated with the at least one graphical element representing the human body with individual magnitudes corresponding to individual dimensions of the plurality of dimensions; receiving data from the device, the data indicating a user selection of an individual data item from the individual data items, the individual data item being associated with a particular individual magnitude of the individual magnitudes associated with a particular dimension of the plurality of dimensions; and determining a subsequent set of data items of the plurality of data items to be presented to the user via the user interface based on the user selection of the individual data item, wherein individual data items in the subsequent set of data items are associated with the at least one graphical element representing the 
human body with different individual magnitudes corresponding to a respective individual dimension of the plurality of dimensions; estimating physical measurements associated with the user based on the different individual magnitudes corresponding to the respective individual dimension of the plurality of dimensions and a predictive model, the predictive model trained based on user selections of data items from each set of data items caused to be presented to the user and estimated physical measurements associated with profiles corresponding to users; and generating an avatar for the user based on the estimated physical measurements associated with the user. 9. The system according to claim 8, wherein a first dimension of the plurality of dimensions corresponds to a body mass index measurement, wherein a second dimension of the plurality of dimensions corresponds to a waist measurement, and wherein estimating the physical measurements comprises estimating a body mass index measurement of the user and a waist measurement of the user. 10. The system according to claim 8, wherein individual data items of the plurality of data items stored in the database are associated with graphical elements representing the human body that correspond to a same height and a same gender as the user. 11. The system according to claim 8, wherein the predictive model predicts at least one physical measurement of the physical measures and includes weights that are based on training the respective multiple regression predictive model using the plurality of data items stored in the database. 12. 
The system according to claim 8, further comprising: accessing user selections of data items from each set of data items caused to be presented to the user; accessing estimated physical measurements associated with profiles corresponding to users; and training the multiple regression models based on the user selections of data items from each set of data items caused to be presented to the user and estimated physical measurements associated with profiles corresponding to users. 13. The system according to claim 8, wherein individual data items of the plurality of data items in the database are associated with two graphical elements representing the human body, and wherein each graphical element of the two graphical elements corresponds to a different view of the human body. 14. The system according to claim 8, wherein the processor is further configured to execute the instructions to perform the method including: generating an additional user interface that provides functionality to present the physical measurements and the avatar to the user via the display of the device; and sending the additional user interface to the device. 15. 
One or more computer storage media storing instructions that, when executed by a computer, cause the computer to perform a method for creating realistic avatars based on estimated physical measurements of a body of a user, the method including: accessing a database storing a plurality of data items, wherein each data item is associated with at least one graphical element representing a human body with individual magnitudes corresponding to individual dimensions of a plurality of dimensions; performing the following for a plurality of iterations: causing a set of data items of the plurality of data items to be presented to the user via a user interface that is presented on a display of a device associated with the user, wherein individual data items in the set of data items are associated with the at least one graphical element representing the human body with individual magnitudes corresponding to individual dimensions of the plurality of dimensions; receiving data from the device, the data indicating a user selection of an individual data item from the individual data items, the individual data item being associated with a particular individual magnitude of the individual magnitudes associated with a particular dimension of the plurality of dimensions; and determining a subsequent set of data items of the plurality of data items to be presented to the user via the user interface based on the user selection of the individual data item, wherein individual data items in the subsequent set of data items are associated with the at least one graphical element representing the human body with different individual magnitudes corresponding to a respective individual dimension of the plurality of dimensions; estimating physical measurements associated with the user based on the different individual magnitudes corresponding to the respective individual dimension of the plurality of dimensions and a predictive model, the predictive model trained based on user selections of 
data items from each set of data items caused to be presented to the user and estimated physical measurements associated with profiles corresponding to users; and generating an avatar for the user based on the estimated physical measurements associated with the user. 16. The one or more computer storage media according to claim 15, wherein a first dimension of the plurality of dimensions corresponds to a body mass index measurement, wherein a second dimension of the plurality of dimensions corresponds to a waist measurement, and wherein estimating the physical measurements comprises estimating a body mass index measurement of the user and a waist measurement of the user. 17. The one or more computer storage media according to claim 15, wherein individual data items of the plurality of data items stored in the database are associated with graphical elements representing the human body that correspond to a same height and a same gender as the user. 18. The one or more computer storage media according to claim 15, wherein the predictive model predicts at least one physical measurement of the physical measures and includes weights that are based on training the respective predictive model using the plurality of data items stored in the database. 19. The one or more computer storage media according to claim 15, further comprising: accessing user selections of data items from each set of data items caused to be presented to the user; accessing estimated physical measurements associated with profiles corresponding to users; and training the multiple regression models based on the user selections of data items from each set of data items caused to be presented to the user and estimated physical measurements associated with profiles corresponding to users. 20. 
The one or more computer storage media according to claim 15, wherein individual data items of the plurality of data items in the database are associated with two graphical elements representing the human body, and wherein each graphical element of the two graphical elements corresponds to a different view of the human body.

RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 14/921,110, filed on Oct. 23, 2015, which claims the benefit of U.S. Provisional Application No. 62/173,120, filed on Jun. 9, 2015, the entire contents of which are incorporated herein by reference.

BACKGROUND

Physical measurements of human bodies are useful for various purposes. For instance, health professionals use physical measurements to calculate body mass index (BMI). BMI is a measurement of a person's body fat based on the person's height and weight that can be used to determine whether the person is underweight, overweight, obese, etc. BMI, waist measurement, etc. can be useful for determining whether persons are at risk for various diseases. Additionally, many apparel companies use physical measurements to determine sizes and fits of garments. Collecting physical measurements, however, is time-consuming. Additionally, persons can find themselves in situations where tools used for determining physical measurements are not accessible, such as in the developing world, in online applications, etc.

SUMMARY

This disclosure describes utilizing graphical elements representing human bodies to estimate physical measurements of a person. In at least one example, a service provider can access a database storing a plurality of data items. The service provider can cause a set of data items of the plurality of data items to be presented to a user. Data items in the set of data items are associated with at least one graphical element representing a human body with individual magnitudes corresponding to individual dimensions of a plurality of dimensions.
The service provider can receive data indicating a selection of a data item associated with a first magnitude associated with a first dimension and a second magnitude associated with a second dimension. The service provider can estimate physical measurements associated with the user based at least in part on the first magnitude and/or the second magnitude. This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

The Detailed Description is set forth with reference to the accompanying figures, in which the left-most digit of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in the same or different figures indicates similar or identical items or features.

FIG. 1 is a schematic diagram showing an example environment for utilizing data items associated with graphical elements representing human bodies to estimate physical measurements.

FIG. 2 is a flow diagram that illustrates an example process to estimate physical measurements utilizing data items associated with graphical elements representing human bodies that are presented to users.

FIG. 3 is a flow diagram that illustrates another example process to estimate physical measurements utilizing data items associated with graphical elements representing human bodies that are presented to users.

FIG. 4 is a flow diagram that illustrates an example process to train a predictive model for estimating physical measurements.

DETAILED DESCRIPTION

This disclosure describes utilizing graphical elements representing human bodies to estimate physical measurements of a person.
Technologies described herein can enable physical measurements to be estimated without requiring access to measuring devices such as scales, measuring tapes, body fat meters, body fat calipers, etc. That is, when few or no measuring devices are available, the technologies described herein can estimate physical measurements using data items associated with graphical elements representing human bodies and inputs associated with the data items. The technologies described herein can be used to replace, assist, and/or supplement technologies currently implemented to determine physical measurements using various measuring devices (e.g., scales, tape measures, etc.). For illustrative purposes, a physical measurement represents a definite magnitude of a physical quantity that is used as a standard for quantifying a dimension of a part of the human body and/or a characteristic of the human body. Physical measurements can be associated with any system of units (e.g., metric system, United States customary measurement system, natural unit system, etc.). A physical measurement can be a definite magnitude of a physical quantity of a dimension of a user's neck (e.g., circumference, length, width, etc.), upper arm (e.g., circumference, length, width, etc.), chest (e.g., circumference, length, width, etc.), bust (e.g., circumference, etc.), waist (e.g., circumference, length, width, etc.), hips (e.g., circumference, length, width, etc.), thigh (e.g., circumference, length, width, etc.), calf (e.g., circumference, length, width, etc.), leg (e.g., circumference, length, width, etc.), arm (e.g., circumference, length, width, etc.), etc. Physical measurements can include measurements of a user's height, weight, body fat percentage, etc. Physical measurements can also include measurements that are determined based on other physical measurements, such as BMI, as described above.
BMI is an estimate of a user's body fat based on the user's height and weight that can be used to determine whether a user is underweight, overweight, obese, etc. BMI can be useful for determining whether users are at risk for various diseases. BMI is determined based on dividing a user's mass in kilograms by the square of the user's height in meters, or by multiplying a user's mass in pounds by 703 and dividing the product by the square of the user's height in inches. Technologies described herein cause one or more sets of data items to be presented to a user via an interface of a device that can be associated with the user (e.g. a user device). Each data item can be associated with at least one graphical element representing a human body having individual magnitudes of individual physical measurements corresponding to individual dimensions. That is, a data item can be associated with at least one graphical element that graphically represents a human body that has proportions that are consistent with a real human body having individual physical measurements corresponding to individual magnitudes. The graphical elements can be graphical representations of human bodies, such as two-dimensional or three-dimensional graphical representations of human bodies. As described above, the technologies described herein can cause one or more sets of data items to be presented to a user via an interface of a device that can be associated with the user (e.g. a user device). A set of data items includes one or more data items. In at least one example, a set of data items can be a subset of data items from a database of data items. Individual data items in a set of data items can be associated with at least one graphical element representing a human body with magnitudes associated with physical measurements that are different from other individual data items in the set of data items. 
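The BMI arithmetic stated above can be written out directly as a quick check. This is an illustrative sketch, not code from the disclosure; the function names are assumptions, and the two unit systems agree for equivalent inputs.

```python
def bmi_metric(mass_kg: float, height_m: float) -> float:
    """BMI = mass in kilograms divided by the square of height in meters."""
    return mass_kg / height_m ** 2


def bmi_imperial(mass_lb: float, height_in: float) -> float:
    """BMI = 703 times mass in pounds divided by the square of height in inches."""
    return 703 * mass_lb / height_in ** 2
```

For example, `bmi_metric(70, 1.75)` is about 22.9, and the same person expressed in pounds and inches (roughly 154.3 lb and 68.9 in) yields nearly the same value via `bmi_imperial`.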
For instance, each of the data items in a set of data items can be associated with at least one graphical element representing a human body with magnitudes including a first magnitude associated with a first dimension and a second magnitude associated with a second dimension. In at least one example, the first magnitude can be the same for all data items in the set of data items and the second magnitude can be different for each data item in the set of data items. For instance, a set of data items can include data items associated with at least one graphical element that represents human bodies that have same BMIs and different waist measurements. In some examples, the magnitudes represented by graphical representations in each data item can differ from other data items in a set of data items by a single magnitude associated with a single dimension. That is, all of the magnitudes associated with all of the dimensions can be the same except for one magnitude associated with one dimension. For instance, a set of data items can include data items each associated with at least one graphical element that represents a human body that has all of the same magnitudes as the other data items in the set of data items except that each data item in the set of data items is associated with at least one graphical element that represents a human body with a different waist measurement. The user can select a data item that is associated with graphical elements representing human bodies that look most like themselves, or in some examples, another person (e.g., a friend, a family member, a suspected criminal/person of interest, etc.), as described below. The service provider can receive data indicating the user selection. Based at least in part on receiving the data associated with the user selection, the service provider can retrieve second (or subsequent) sets of data items. 
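The one-dimension-at-a-time set construction described above can be sketched as follows; `DataItem`, its fields, and `make_item_set` are hypothetical names chosen for illustration, not taken from the patent.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DataItem:
    """Hypothetical data item: one magnitude per dimension."""
    bmi: float
    waist_mm: float


def make_item_set(fixed_bmi: float, waist_choices: list) -> list:
    """Build a set whose items share every magnitude (here, BMI) except
    the single varying dimension (here, the waist measurement)."""
    return [DataItem(bmi=fixed_bmi, waist_mm=w) for w in waist_choices]
```

For example, `make_item_set(24.0, [700.0, 800.0, 900.0])` yields three items with an identical BMI and distinct waist measurements, matching the same-BMI/different-waist example above.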
In at least one example, at least one of the magnitudes represented by individual data items in the second (or any subsequent) set of data items can be based on the magnitudes represented by the data item selected by the user from the previous set of data items. As a non-limiting example, if the second set of data items includes individual data items that are associated with one or more graphical elements representing human bodies that have different BMIs, the individual data items can be associated with one or more graphical elements representing human bodies with a same waist measurement based on the waist measurement corresponding to the data item selected by the user in the first set of data items. The technologies described herein leverage the user selection (e.g., input) and other user data to estimate physical measurements of the user (or the other person). That is, the technologies described herein can estimate physical measurements such as waist measurement, BMI, etc., without needing access to measuring devices. Estimating physical measurements can be useful in various applications including, but not limited to, health and/or disease prevention applications, video gaming applications, online apparel shopping applications, law enforcement applications, etc. For instance, health and/or fitness products (e.g., Microsoft Band, Fitbit®, etc.) can utilize the technologies described herein to prompt users to select data items associated with graphical elements representing human bodies that they believe look most like themselves and to leverage the users' selections over time to estimate how the users' bodies and physical measurements are changing. This data can be correlated with other data collected from a health and/or fitness application (e.g., activity data, heart rate data, etc.) to enable users to track progress and to see which activities, etc., impact their health and/or fitness.
In other examples, the technologies described herein can be useful for disease identification and prevention. As described above, health professionals utilize physical measurements to predict whether a user is underweight, overweight, obese, etc., whether users are at risk for various diseases (e.g., cardiac, hypertension, diabetes, etc.), etc. Accordingly, the technologies described herein can be utilized to identify and prevent diseases without relying on measuring devices. In additional or alternative examples, the technologies described herein can be utilized by gaming products and/or applications to generate realistic depictions of how users appear (i.e., avatars). For instance, the technologies described herein can prompt a user to select data items associated with graphical elements representing human bodies that they believe look most like themselves and can leverage the user's selections to generate realistic looking avatars. In other examples, the technologies described herein can be useful in online shopping applications. For instance, a user may not know his or her physical measurements. Accordingly, the technologies described herein can prompt a user to select data items associated with graphical elements representing human bodies that they believe look most like themselves and can leverage the user's selections to estimate physical measurements for recommending sizes of apparel. Additionally or alternatively, a user can be shopping online for a gift for another user (e.g., family member, friend, etc.) and he or she may not know the other user's physical measurements and/or size. Accordingly, the technologies described herein can prompt a user to select data items associated with graphical elements representing human bodies that they believe look most like the other user and can leverage the user's selections to estimate physical measurements for recommending sizes of apparel. 
In yet another example, the technologies described herein can be useful for identifying suspected criminals/persons of interest and can therefore be useful for law enforcement applications. For instance, the technologies described herein can prompt a user to select data items associated with graphical elements representing human bodies that they believe look most like a suspect and can leverage the user's selections to estimate physical measurements for the suspect. The technologies described herein can also be used for searching through databases of data items associated with one or more graphical elements of known criminals.

Illustrative Environments

FIG. 1 is a schematic diagram showing an example environment 100 for utilizing data items associated with one or more graphical elements representing human bodies to estimate physical measurements. More particularly, the example environment 100 can include a service provider 102, one or more network(s) 104, one or more users 106, and one or more devices 108 associated with the one or more users 106. The service provider 102 can be any entity, server(s), platform, etc., that facilitates presenting one or more sets of data items each associated with one or more graphical elements representing human bodies and leveraging inputs associated with individual data items to estimate physical measurements associated with users 106 and/or other persons (e.g., friends, family members, suspected criminals/persons of interest, etc.), as described above. The service provider 102 can be implemented in a non-distributed computing environment or can be implemented in a distributed computing environment, possibly by running some modules on devices 108 or other remotely located devices. As shown, the service provider 102 can include one or more server(s) 110, which can include one or more processing unit(s) 112 and computer-readable media 114, such as memory.
In various examples, the service provider 102 can access data items associated with one or more graphical elements representing human bodies from a database, cause the data items to be presented to a user 106 in one or more sets of data items, receive data associated with user selection of individual data items, and leverage the magnitudes associated with the individual data items selected by the user 106 and other user data to estimate physical measurements associated with the user 106 and/or another person, as described above. The technologies described herein enable the service provider 102 to search and retrieve data items that are stored in the database and estimate physical measurements based on said data items more efficiently (i.e., faster) than if humans were involved, as described below. In some examples, the network(s) 104 can be any type of network known in the art, such as the Internet. Moreover, the devices 108 can communicatively couple to the network(s) 104 in any manner, such as by a global or local wired or wireless connection (e.g., local area network (LAN), intranet, etc.). The network(s) 104 can facilitate communication between the server(s) 110 and the devices 108 associated with the users 106. In some examples, the users 106 can operate corresponding devices 108 (e.g., user devices) to perform various functions associated with the devices 108, which can include one or more processing unit(s), computer-readable storage media, and a display 130. Device(s) 108 can represent a diverse variety of device types and are not limited to any particular type of device. Examples of device(s) 108 can include but are not limited to stationary computers, mobile computers, embedded computers, or combinations thereof. Example stationary computers can include desktop computers, work stations, personal computers, thin clients, terminals, game consoles, personal video recorders (PVRs), set-top boxes, or the like. 
Example mobile computers can include laptop computers, tablet computers, wearable computers, implanted computing devices, telecommunication devices, automotive computers, portable gaming devices, media players, cameras, or the like. Example embedded computers can include network enabled televisions, integrated components for inclusion in a computing device, appliances, microcontrollers, digital signal processors, or any other sort of processing device, or the like. Examples support scenarios where device(s) that can be included in the one or more server(s) 110 can include one or more computing devices that operate in a cluster or other clustered configuration to share resources, balance load, increase performance, provide fail-over support or redundancy, or for other purposes. Device(s) included in the one or more server(s) 110 can represent, but are not limited to, desktop computers, server computers, web-server computers, personal computers, mobile computers, laptop computers, tablet computers, wearable computers, implanted computing devices, telecommunication devices, automotive computers, network enabled televisions, thin clients, terminals, game consoles, gaming devices, work stations, media players, digital video recorders (DVRs), set-top boxes, cameras, integrated components for inclusion in a computing device, appliances, or any other sort of computing device. Device(s) that can be included in the one or more server(s) 110 can include any type of computing device having one or more processing unit(s) 112 operably connected to computer-readable media 114 such as via a bus, which in some instances can include one or more of a system bus, a data bus, an address bus, a PCI bus, a Mini-PCI bus, and any variety of local, peripheral, and/or independent buses. 
Executable instructions stored on computer-readable media 114 can include, for example, a data collection module 116 storing user data 118, a database 120 storing data items 122 associated with one or more graphical elements 123A, 123B, etc., representing human bodies, a measurement estimation module 124, a presentation module 126, a training module 128, and other modules, programs, or applications that are loadable and executable by processing unit(s) 112. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components such as accelerators. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip Systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. Device(s) that can be included in the one or more server(s) 110 can further include one or more input/output (I/O) interface(s) coupled to the bus to allow device(s) to communicate with other devices such as input peripheral devices (e.g., a keyboard, a mouse, a pen, a game controller, a voice input device, a touch input device, gestural input device, and the like) and/or output peripheral devices (e.g., a display, a printer, audio speakers, a haptic output, and the like). Network interface(s) can include one or more network interface controllers (NICs) or other types of transceiver devices to send and receive communications over a network. For simplicity, some components are omitted from the illustrated environment. Processing unit(s) 112 can represent, for example, a CPU-type processing unit, a GPU-type processing unit, a Field-programmable Gate Array (FPGA), another class of Digital Signal Processor (DSP), or other hardware logic components that can, in some instances, be driven by a CPU.
For example, and without limitation, illustrative types of hardware logic components that can be used include Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. In various examples, the processing unit(s) 112 can execute one or more modules and/or processes to cause the server(s) 110 to perform a variety of functions, as set forth above and explained in further detail in the following disclosure. Additionally, each of the processing unit(s) 112 can possess its own local memory, which also can store program modules, program data, and/or one or more operating systems. In at least one configuration, the computer-readable media 114 of the server(s) 110 can include components that facilitate interaction between the service provider 102 and the users 106. The components can represent pieces of code executing on a computing device. For example, the computer-readable media 114 can include the data collection module 116, the database 120, the measurement estimation module 124, the presentation module 126, the training module 128, etc. In at least some examples, the modules (116, 120, 126, 128, etc.) can be implemented as computer-readable instructions, various data structures, and so forth via at least one processing unit(s) 112 to configure a device to execute instructions and to perform operations implementing the techniques described herein. Functionality to perform these operations can be included in multiple devices or a single device. The computer-readable media 114 can also include the database 120 for storing data items 122 associated with one or more graphical elements 123A, 123B, etc. representing human bodies, as described above. As a non-limiting example, FIG. 1 shows example data items 122 associated with one or more graphical elements 123A, 123B, etc.
representing human bodies having a set of magnitudes. That is, the data items 122 can be associated with one or more graphical elements 123A, 123B, etc. that graphically represent human bodies that have proportions that are consistent with a human body having a set of physical measurements whereby each individual physical measurement corresponds to an individual magnitude, as described above. Each graphical element 123A, 123B, etc. can be associated with a different view of the human body having the set of magnitudes. As a non-limiting example, FIG. 1 shows a data item 122, where one of the graphical elements 123A is a frontal view of a human body having a set of magnitudes and the other graphical element 123B is a profile view of the human body having the set of magnitudes. The database of data items 122 can include different data items 122 based on different demographics (e.g., nationalities, ethnicities, genders, ages, etc.). Depending on the exact configuration and type of the server(s) 110, the computer-readable media 114 can include computer storage media and/or communication media. Computer storage media can include volatile memory, nonvolatile memory, and/or other persistent and/or auxiliary computer storage media, removable and non-removable computer storage media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer memory is an example of computer storage media.
Thus, computer storage media includes tangible and/or physical forms of media included in a device and/or hardware component that is part of a device or external to a device, including but not limited to random-access memory (RAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), phase change memory (PRAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, compact disc read-only memory (CD-ROM), digital versatile disks (DVDs), optical cards or other optical storage media, miniature hard drives, memory cards, magnetic cassettes, magnetic tape, magnetic disk storage, magnetic cards or other magnetic storage devices or media, solid-state memory devices, storage arrays, network attached storage, storage area networks, hosted computer storage or any other storage memory, storage device, and/or storage medium that can be used to store and maintain information for access by a computing device. In contrast, communication media can embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. Such signals or carrier waves, etc. can be propagated on wired media such as a wired network or direct-wired connection, and/or wireless media such as acoustic, RF, infrared and other wireless media. As defined herein, computer storage media does not include communication media. That is, computer storage media does not include communications media consisting solely of a modulated data signal, a carrier wave, or a propagated signal, per se. 
The data collection module 116 can receive data associated with users 106 (e.g., user data 118) from the users 106 and/or on behalf of the users 106 and/or access data associated with users 106 via third party sources and systems (e.g., social networks, professional networks, etc.). In some examples, users 106 can input user data 118 when they set up a user account or profile for interacting with the service provider 102, an application, a website, etc., in response to a prompt for user data 118, etc. In at least one example, the presentation module 126 can cause one or more user interfaces to be presented to the user 106. The one or more user interfaces can provide functionality for the user 106 to input information. The user 106 can input personal information including, but not limited to, gender, age, physical measurements, etc., to the data collection module 116. In other examples, the data collection module 116 can receive data from devices such as input peripheral devices (e.g., sensors, cameras, and the like). For instance, a camera and/or sensor can determine a user's 106 height and provide data indicating the user's 106 height to the data collection module 116. In additional or alternative examples, the data collection module 116 can infer user data 118 based on user interactions with the service provider 102. For instance, a user 106 can select a data item 122 representative of a female out of a set of data items 122 representing both genders. Accordingly, the data collection module 116 can infer that the user 106 is a female. In some examples, the data collection module 116 can access user data 118 from third party sources and systems (e.g., social networks, professional networks, etc.). The user data 118 can be mapped to and/or otherwise associated with profiles that correspond to individual users 106. The profiles that correspond to the individual users 106 can be stored in a database associated with the user data 118 and/or some other data repository. 
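A minimal sketch of the collection-and-inference behavior described above might look like the following; the class, method, and field names are assumptions for illustration only, not taken from the patent.

```python
class DataCollectionModule:
    """Hypothetical sketch: stores user data keyed by profile and infers
    attributes (e.g., gender) from the data items a user selects."""

    def __init__(self):
        self.profiles = {}  # profile_id -> dict of user data

    def record(self, profile_id: str, **attrs) -> None:
        """Merge new attributes into the profile's stored user data."""
        self.profiles.setdefault(profile_id, {}).update(attrs)

    def infer_from_selection(self, profile_id: str, item: dict) -> None:
        # Selecting an item representing a female body suggests the
        # user's gender, as in the example above.
        if "gender" in item:
            self.record(profile_id, gender=item["gender"])
```

In this sketch, explicitly entered data (e.g., height from a setup form) and inferred data (e.g., gender from a selection) end up in the same per-profile record.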
The data collection module 116 can also receive data associated with user selections of data items 122 from sets of data items that are presented to the user 106. In at least one example, the presentation module 126 can cause sets of data items to be presented to the user 106 via one or more personalized user interfaces, described below, that provide functionality for the user 106 to select individual data items 122 of the set of data items 122. The user 106 can interact with the data items 122 that are presented via the user interface utilizing various mechanisms. In some examples, the user 106 can interact with the data items 122 via input peripheral devices (e.g., a keyboard, a mouse, a pen, a game controller, a voice input device, a touch input device, gestural input device, cameras, sensors, and the like), touch input, etc. The data collection module 116 can analyze the data to determine the magnitudes that correspond to each of the data items 122 selected by the user 106. The data collection module 116 can log each input and associate the logs with a profile corresponding to the user 106 in the database associated with the user data 118 and/or other data repository. Each log can correspond to a set of magnitudes that are represented by the data item 122 that the user 106 selected. The database 120 stores data items 122, as described above. In at least one example, the data items 122 can be associated with one or more graphical elements that represent human bodies having different genders, heights, waist measurements, BMIs, etc. The data items 122, as described above, can be associated with one or more graphical elements 123A, 123B, etc. that represent human bodies. The one or more graphical elements 123A, 123B, etc. associated with each data item 122 each represent a human body having a same set of magnitudes, as described above. The data items 122 illustrated in FIG. 
1 are examples of data items 122 that can be presented to users 106, and any other presentation or configuration can be used. The database 120 can be associated with an index, such as a lookup table, where the data items 122 can be indexed based on magnitudes and/or other characteristics (e.g., gender, etc.). In a non-limiting example, the data items 122 can be indexed by gender, height, dimension (e.g., waist measurement, BMI, etc.), etc. The index can reduce the time required to retrieve data items 122 from the database 120 and reduce runtime computations associated with retrieving the data items 122. As a result, the index enables the data collection module 116 and/or presentation module 126 to retrieve data items 122 that are stored in the database 120 more efficiently (i.e., faster) than if humans were involved. The measurement estimation module 124 accesses user data 118 and logs associated with selections of data items 122. In at least one example, as described above, the user data 118 and logs can be mapped to and/or otherwise associated with a profile corresponding to a user 106 that is stored in the database associated with the user data 118. In such examples, the measurement estimation module 124 can access the profile stored in the database for estimating physical measurements of a user 106. The measurement estimation module 124 utilizes the user data 118 and magnitudes associated with the logs to estimate physical measurements associated with the user 106. The measurement estimation module 124 can utilize one or more predictive models to compute the estimated physical measurements. The one or more predictive models can change based on the dimensions to be estimated, population of users, demographic of the population of users, etc.
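As an aside on retrieval, the lookup-table indexing described earlier in this section can be sketched as a dictionary of buckets; the key fields (`gender`, `height_cm`) are illustrative assumptions, not names from the patent.

```python
def build_index(items: list) -> dict:
    """Group data items into buckets keyed by (gender, height) so that
    retrieval is a single dictionary lookup rather than a full scan."""
    index = {}
    for item in items:
        key = (item["gender"], item["height_cm"])
        index.setdefault(key, []).append(item)
    return index
```

Once built, `index[("f", 165)]` returns only the matching bucket, avoiding a scan of the whole database on every retrieval.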
Predictive Models 1 and 2, reproduced below, are non-limiting examples of predictive models that the measurement estimation module 124 can use for computing the estimated physical measurements (e.g., BMI and waist measurement, respectively). In each of the predictive models below, the weights (βx) are based at least in part on training the predictive model described below using user data 118, including previous inputs. Predictive Model 1 can be used to predict a user's 106 BMI (in kilograms/meters²) and Predictive Model 2 can be used to predict a user's 106 waist measurement (in millimeters). The “selected_waist” and “selected_bmi” variables are determined from a most recent (e.g., the last) data item 122 selected by the user 106.

predicted_bmi = β0 + β1 is_male + β2 selected_bmi + β3 selected_waist + β4 selected_waist:selected_bmi  (PREDICTIVE MODEL 1)

predicted_waist = β0 + β1 is_male + β2 selected_bmi + β3 selected_waist + β4 selected_waist:selected_bmi  (PREDICTIVE MODEL 2)

In both examples, selected_waist:selected_bmi denotes the product of selected_waist and selected_bmi. The presentation module 126 can generate user interfaces that provide various functionalities, as described above and also below. The presentation module 126 can cause a user interface to be presented to users 106 utilizing any communication channel, such as an e-mail message, a site (e.g., website) associated with the service provider 102, a text message, a social network site, an application that is associated with the service provider 102 and that resides on device(s) 108 of the users 106, etc. In at least one example, the presentation module 126 can generate a user interface that includes a set of data items (e.g., a subset of the data items 122 stored in the database 120). The user interface can be configured to receive input from the user 106 and/or on behalf of the user 106 indicating a selection of at least one data item 122 in the set of data items.
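Evaluating a model of this form can be sketched as below; the beta weights are made-up placeholders (actual weights come from training on user data 118), and the function name is an assumption for illustration:

```python
# Sketch of evaluating a model shaped like Predictive Model 1/2: a linear
# model with an interaction term (selected_waist:selected_bmi is the
# product of the two variables). Weights here are placeholders, not
# trained values.

def predict(betas, is_male, selected_bmi, selected_waist):
    b0, b1, b2, b3, b4 = betas
    return (b0 + b1 * is_male + b2 * selected_bmi
            + b3 * selected_waist + b4 * selected_waist * selected_bmi)

bmi_betas = (1.0, 0.5, 0.9, 0.001, 0.0001)  # placeholder weights only
predicted_bmi = predict(bmi_betas, is_male=1, selected_bmi=26, selected_waist=880)
```

The same function serves both models; only the trained weight vector differs between predicting BMI and predicting waist measurement.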
The presentation module 126 can cause the user interface to be presented on a display 130 of a device 108. A non-limiting example of a user interface 132 is illustrated in FIG. 1. User interface 132 includes a set of data items 134, as shown by the dashed lines. The set of data items 134 includes individual data items 122, as described above. In FIG. 1, the data items 122 are associated with two graphical elements 123A and 123B that each represent a human body having a set of magnitudes (i.e., pairs of graphical elements). In additional and/or alternative examples, the presentation module 126 can generate a user interface that prompts users 106 for user information such as gender, height, etc., and/or a user interface that presents estimated physical measurements to the users 106. The training module 128 can train the predictive model based at least in part on user data 118, including determined physical measurements (e.g., physical measurements determined using a measuring device), data associated with user selections, and estimated physical measurements, as described in FIG. 4 below. The predictive model can include a multiple regression model, etc., as described above in the non-limiting examples of Predictive Model 1 and Predictive Model 2. Example Processes The processes described in FIGS. 2-4 below are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. 
The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes. FIG. 2 is a flow diagram that illustrates an example process 200 to estimate physical measurements utilizing data items 122 associated with one or more graphical elements 123A, 123B, etc. representing human bodies that are presented to users 106. Block 202 illustrates receiving user data 118. The data collection module 116 can receive data (e.g., user data 118) associated with users 106 from the users 106 and/or on behalf of the users 106 and/or access data associated with users 106 via third party sources and systems (e.g., social networks, professional networks, etc.), as described above. In at least one example, the presentation module 126 can generate a user interface that prompts a user 106 to input his or her gender and height as illustrated in the user interface associated with block 202. Additional and/or alternative user interfaces can be presented for prompting the user 106 for additional and/or alternative information. The presentation module 126 can cause the user interface to be presented to a user 106 via a display 130 of a device 108, as described above. A user 106 can indicate whether he or she is a male or female, respectively, and select his or her height from one or more drop down menus. The user interface associated with block 202 is an example of a user interface that can be presented to users 106 and any other presentation or configuration can be used. In FIG. 2, the user 106 has indicated that he is a male and is 5′11″ tall. The data collection module 116 can send data associated with the user's input (e.g., gender, height, etc.) to the presentation module 126. In some examples, process 200 can be executed without a user 106 indicating whether he or she is male or female, respectively, or specifying his or her height. 
Block 204 illustrates causing a set of data items 134 to be presented to a user 106. In at least one example, the presentation module 126 can receive user data 118 from the data collection module 116 and, leveraging the index associated with the database 120 described above, the presentation module 126 can retrieve data items 122 and generate a user interface that includes a set of data items 134 associated with the user's input (e.g., gender, height, etc.). In an example, the set of data items 134 can represent a subset of data items 122 stored in the database 120 that represent human bodies having a same set of magnitudes with respect to every dimension except one dimension. That is, the set of data items 134 can include data items 122 that represent human bodies with different magnitudes associated with one dimension, but otherwise have the same set of magnitudes. In at least one example, based at least in part on receiving the input indicating the gender and height of the user, the presentation module 126 can access a predetermined number of data items 122 associated with the same gender and height as the user 106. The predetermined number can be determined based on a number of data items 122 that can be arranged on a user interface, an arbitrary number, etc. Each data item 122 in a set of data items 134 can be associated with one or more graphical elements that represent a human body with a different magnitude for a dimension than each of the other data items 122 in the set of data items 134. As a non-limiting example, each data item 122 in a set of data items 134 can be associated with one or more graphical elements that represent a male who is 5′11″; however, each data item 122 can be associated with one or more graphical elements that represent a 5′11″ male with a different waist measurement.
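Selecting a set of data items that share every magnitude except one varying dimension (here waist measurement, per the example above) can be sketched as follows; field names and the candidate catalog are assumptions for illustration:

```python
# Sketch of choosing data items that hold gender, height, and BMI fixed
# while varying only the waist dimension, per the example above.

def vary_one_dimension(catalog, gender, height_in, fixed_bmi, limit=5):
    """Pick items matching gender/height/BMI, each with a distinct waist."""
    seen, picked = set(), []
    for item in catalog:
        key = (item["gender"], item["height_in"], item["bmi"])
        if key != (gender, height_in, fixed_bmi):
            continue
        if item["waist_mm"] in seen:
            continue  # one item per waist magnitude
        seen.add(item["waist_mm"])
        picked.append(item)
        if len(picked) == limit:
            break
    return picked

catalog = [
    {"gender": "male", "height_in": 71, "bmi": 24, "waist_mm": 800},
    {"gender": "male", "height_in": 71, "bmi": 24, "waist_mm": 850},
    {"gender": "male", "height_in": 71, "bmi": 24, "waist_mm": 850},  # duplicate
    {"gender": "male", "height_in": 71, "bmi": 27, "waist_mm": 900},  # other BMI
    {"gender": "female", "height_in": 65, "bmi": 24, "waist_mm": 700},
]
picked = vary_one_dimension(catalog, "male", 71, fixed_bmi=24)
```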
In at least one example, each data item 122 can be associated with other magnitudes that are associated with standardized measurements associated with a population, as described below. For instance, each data item 122 can be associated with one or more graphical elements that represent a human body with a standardized BMI. In additional and/or alternative examples, the set of data items 134 can include data items 122 stored in the database 120 that represent human bodies having different sets of magnitudes with respect to different dimensions. For instance, in some examples, the presentation module 126 can retrieve data items 122 and generate a user interface that includes a set of data items 134 arranged in a matrix arrangement in which one axis is associated with BMI and the other axis is associated with waist measurement. That is, the data items 122 represent human bodies with different BMIs along one axis and the data items 122 represent human bodies with different waist measurements along the other axis. In other examples, the presentation module 126 can retrieve data items 122 and generate a user interface that includes a set of data items 134 with varying dimensions that can be presented in a pattern, a random configuration, etc. For instance, in at least one example, the set of data items 134 can include all of the data items 122 stored in the database 120 and the presentation module 126 can present all of the data items 122 to the user 106. Block 206 illustrates receiving data associated with a user selection. A user 106 can interact with a user interface to indicate which data item 122 of the set of data items 134 is associated with at least one graphical element representing a human body that looks most similar to him or her, as described above. In at least some examples, users 106 can leverage zoom features to enlarge the individual data items 122. The device 108 can determine an input associated with the user selection.
As a non-limiting example, box 206 indicates that the user selected the data item 122 within the box 206 as the pair of graphical elements 123A, 123B, etc. that are most representative of his body or of the body of another person. The device 108 can send data associated with the input to the data collection module 116. The data collection module 116 can receive the data and can analyze the input to determine the set of magnitudes associated with the data item 122 selected by the user 106. The data collection module 116 can send the data associated with the set of magnitudes to the measurement estimation module 124 and/or the presentation module 126. Block 208 illustrates estimating physical measurements. The measurement estimation module 124 accesses user data 118 and receives and/or accesses the logs associated with the data indicating a selection of the data item 122 that represents at least one graphical element of a human body that best represents the user's 106 own body. In additional and/or alternative examples, the measurement estimation module 124 can access user data 118 and receive and/or access the logs associated with the data indicating a selection of the data item 122 that represents at least one graphical element of a human body that best represents a body of another person (e.g., a friend, a family member, a suspected criminal/person of interest, etc.), as described above. The measurement estimation module 124 can utilize a predictive model (e.g., Predictive Model 1, Predictive Model 2, etc.) to compute estimated physical measurements based at least in part on the user data 118 (e.g., gender, height, etc.) and the magnitudes associated with user selections of data items 122. In at least one example, the measurement estimation module 124 can estimate at least a waist measurement and/or BMI using a predictive model, like Predictive Models 1 and 2 described above. Block 210 illustrates causing the physical measurements to be presented to the user 106. 
The presentation module 126 can generate a user interface that provides functionality to present estimated physical measurements to the user 106 via the display 130 of the device 108. The presentation module 126 can send the user interface to the device 108 for presenting the user interface to the user 106 on the display 130 of the device 108. An example user interface is associated with block 210. The user interface associated with block 210 is an example of a user interface that can be presented to users 106 and any other presentation or configuration can be used. The physical measurements can be presented to the user 106 as finite physical measurements, ranges of physical measurements, physical measurements including a confidence interval, etc. As a non-limiting example, the physical measurements are presented as physical measurements including confidence intervals. For instance, the measurement estimation module 124 estimated the waist measurement of the user 106 represented in FIG. 2 as 868.9 millimeters with a confidence interval of ±100 millimeters. That is, the user's 106 waist is likely to measure 768.9-968.9 millimeters. One or more graphical representations can also be presented with the physical measurements to visually represent physical measurements associated with the user 106. Block 212 illustrates iteratively causing sets of data items 134 to be presented to the user 106 and iteratively receiving data associated with the user selections. In at least some examples, based at least in part on receiving data associated with a first input, the presentation module 126 can receive and/or access data from the data collection module 116 that is associated with the magnitudes of the data item 122 selected by the user 106 and can leverage the index associated with the database 120 to retrieve a new set of data items 134. 
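Presenting an estimate with a confidence interval as a range, matching the 868.9 mm ± 100 mm example above, can be sketched minimally (the function name is an assumption):

```python
# Sketch of converting an estimate plus a confidence interval into the
# range form described above (868.9 mm ± 100 mm → 768.9-968.9 mm).

def as_range(estimate, interval):
    """Return (low, high) bounds, rounded to one decimal place."""
    return (round(estimate - interval, 1), round(estimate + interval, 1))

low, high = as_range(868.9, 100)  # → (768.9, 968.9)
```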
The new set of data items 134 can include a same number of data items 122 as the first set of data items 134, more data items 122, or fewer data items 122. The presentation module 126 can generate a user interface that includes the new set of data items 134 that are associated with one or more graphical elements that represent human bodies with at least some new magnitudes. Data items 122 in the new set of data items 134 can each be associated with one or more graphical elements that represent human bodies with at least one magnitude associated with a dimension that is different from the other data items 122 in the new set of data items 134 but otherwise have the same magnitudes as the other data items 122 in the new set of data items 134 with respect to other dimensions. In some examples, at least one of the magnitudes associated with all of the data items 122 in the new set of data items 134 is determined based at least in part on a magnitude associated with the data item 122 selected by the user 106 in the first set of data items 134. The presentation module 126 can cause a user interface associated with the new set of data items 134 to be presented on a display 130 of a device 108, as described above. In at least one example, based at least in part on receiving data associated with the first input indicating which data item 122 in a set of data items 134 is associated with at least one graphical element that represents a human body that best represents the user 106, the data collection module 116 can analyze the selection and determine the magnitudes associated with the data item 122 selected by the user 106. The data collection module 116 can log each of the inputs and associate the logs with the profile corresponding to the user 106 in the database associated with the user data 118 and/or other data repository.
For instance, if the user 106 selected a data item 122 associated with a graphical element that represents a human body with a waist measurement of 35 inches, the data collection module 116 can associate the log that corresponds to the user selection with the user profile corresponding to the user 106. Based at least in part on determining the magnitudes associated with the data item 122 selected by the user 106, the presentation module 126 can access a predetermined number of data items 122 that are associated with the same gender, height, and magnitude (e.g., waist measurement of 35 inches) to select a new set of data items 134. The new set of data items 134 can include data items 122 that are associated with graphical elements that represent human bodies that have different magnitudes associated with a different dimension than the data items 122 in the first set of data items 134. As a non-limiting example, if the data items 122 of the first set of data items 134 are associated with one or more graphical elements that represent human bodies that each have a different waist measurement, the data items 122 of the new set of data items 134 can be associated with one or more graphical elements that represent human bodies that each have a different BMI. As such, in the non-limiting example, the new set of data items 134 can be associated with graphical elements that represent human bodies that have a same gender, same height, and same waist measurement, and varying BMIs. Each data item 122 can be associated with at least one graphical element that represents a human body with a different magnitude for the new dimension (e.g., BMI).
As a non-limiting example, every data item 122 in the new set of data items 134 can be associated with graphical elements that represent a male who is 5′11″ with a waist measurement of 35 inches; however, each data item 122 can be associated with graphical elements that represent a 5′11″ male with a waist measurement of 35 inches and a different BMI. In at least one example, a new set of data items 134 can include at least one data item 122 that was in the first set of data items 134. For instance, as a non-limiting example, the first set of data items 134 can be associated with data items 122 associated with one or more graphical elements that represent human bodies that have a same gender, same height, same BMIs, and various waist measurements. The user can select a data item 122 associated with a graphical element representing a human body having a BMI of 26 and a waist measurement (e.g., 35 inches). Accordingly, the new set of data items 134 can be associated with data items 122 associated with one or more graphical elements representing a human body having a waist measurement of 35 inches and varying BMIs, including the BMI associated with the data items 122 in the first set of data items 134 (e.g., 26). Therefore, in at least one example, a data item 122 in an immediately preceding set of data items 134 can also be included in the new set of data items 134. The presentation module 126 can iteratively define new sets of data items 134 to be presented to the user 106. As described above, each new set of data items 134 can be associated with data items 122 that are associated with one or more graphical elements that represent human bodies that have a different magnitude associated with a new dimension. The presentation module 126 can cause the sets of data items 134 to be presented to the user 106 via a user interface configured to receive input. 
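The iterative flow above (hold the previously chosen magnitude fixed, vary a new dimension around the previously selected value, and keep that value in the new set) can be sketched as follows; the step size and count are arbitrary assumptions:

```python
# Sketch of choosing magnitudes for a new set of data items: vary one
# dimension (e.g., BMI) around the previously selected value, keeping
# the selected value itself in the new set, as described above.

def candidate_magnitudes(selected_value, step=2, count=5):
    """Return `count` magnitudes centered on the previous selection."""
    offset = count // 2
    return [selected_value + step * (i - offset) for i in range(count)]

new_bmis = candidate_magnitudes(26)  # varies BMI around the selected 26
```

Because the previously selected value sits in the middle of the returned list, the new set always includes a data item matching the immediately preceding selection, as described above.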
Based at least in part on causing the sets of data items 134 to be presented to the user 106, the device 108 can determine inputs and send data corresponding to the inputs to the data collection module 116. The data collection module 116 can receive inputs indicating which data item 122 is associated with at least one graphical element that best represents the user 106. The measurement estimation module 124 can leverage the user data 118, including the data associated with the inputs, and the predictive model to estimate the physical measurements associated with the user 106, as described above. In some examples, the measurement estimation module 124 can leverage the user data 118, including the data associated with the inputs, and the predictive model to estimate the physical measurements associated with another person, as described above. FIG. 3 is a flow diagram that illustrates another example process 300 to estimate physical measurements utilizing data items 122 associated with graphical elements that represent human bodies that are presented to users 106. Block 302 illustrates determining user data 118. The data collection module 116 can receive data associated with users 106 from the users 106 and/or on behalf of the user 106 and/or access data associated with users 106 via third party sources and systems (e.g., social networks, professional networks, etc.), as described above. In at least one example, the presentation module 126 can generate a user interface that prompts a user 106 for his or her gender, height measurement, and/or other dimensions, as described above. Block 304 illustrates causing a first set of data items 122 to be presented to the user 106.
In at least one example, the presentation module 126 can access user data 118 and can utilize the index associated with the database 120 to retrieve data items 122 associated with one or more graphical elements that represent human bodies with at least some of the same physical measurements as the user 106 (per the user data 118). The presentation module 126 can utilize the retrieved data items 122 to generate a user interface that includes a first set of data items 134. The user interface can be configured to receive user selection of one or more data items 122. In at least one example, based at least in part on receiving the input indicating the gender and height of the user 106, the presentation module 126 can retrieve a predetermined number of data items 122 associated with graphical elements that represent human bodies that are associated with the same gender and height. Each data item 122 in the first set of data items 134 can be associated with one or more graphical elements that represent a human body with at least one different magnitude for a dimension and same magnitudes for each of the other dimensions. As a non-limiting example, each data item 122 in the first set of data items 134 associated with block 304 can be associated with one or more graphical elements that represent a human body of the same gender and height, but each data item 122 can have a different waist measurement. In at least one example, each data item 122 can be associated with one or more graphical elements that represent human bodies that have a same predetermined BMI. In at least one example, the predetermined BMI can be a standardized BMI (e.g., average, median, etc.) for a population of users who have been previously measured. That is, each data item 122 can be associated with graphical elements that represent human bodies with a gender and a height that are the same as the user 106, a same predetermined BMI, and a different waist measurement. 
Block 306 illustrates receiving data associated with a first input. A user 106 can interact with a user interface to indicate which data item 122 of the first set of data items 134 is associated with one or more graphical elements that look most similar to him or her. As described above, in some examples, a user 106 can interact with a user interface to indicate which data item 122 of the first set of data items 134 is associated with one or more graphical elements that look most similar to another person (e.g., a friend, a family member, a suspected criminal/person of interest, etc.). As a non-limiting example, box 306 indicates that the user 106 selected the data item 122 within the box 306 as the data item 122 associated with graphical elements most representative of his body. The device 108 can determine the selection and can send data corresponding to the input to the data collection module 116. The data collection module 116 can receive the data associated with the first input and can determine the magnitudes associated with the data item 122 selected by the user 106 in the first input. The data collection module 116 can log the first input and associate the log with the user profile corresponding to the user 106. The data collection module 116 can send data associated with the first input (e.g., the magnitudes associated with the data item 122 selected by the user 106) to the presentation module 126. Block 308 illustrates causing a second set of data items 134 to be presented to the user 106. As described above, the presentation module 126 can iteratively cause additional sets of data items 134 to be presented to the user 106.
In at least one example, based at least in part on receiving the first input 306 indicating which data item 122 is associated with one or more graphical elements that best represent the user 106, the data collection module 116 can analyze the selection and determine the magnitudes associated with the data item 122 selected by the user 106, as described above. The presentation module 126 can receive data associated with the input 306 (e.g., the magnitudes associated with the data item 122 selected by the user 106) and can utilize the data and the index associated with the database 120 to access a predetermined number of data items 122 that represent human bodies having the same gender, height, and set of magnitudes, with the exception of magnitudes associated with one dimension. The second set of data items 134 can be associated with one or more graphical elements that represent human bodies that have different magnitudes associated with a different dimension than the first set of data items 134. As a non-limiting example, if the data items 122 in the first set of data items 134 are associated with one or more graphical elements that represent human bodies that have different waist measurements, the data items 122 in the second set of data items 134 can be associated with one or more graphical elements that represent human bodies that have a different BMI. Each data item 122 can represent a human body with a different magnitude for the dimension. For instance, each data item 122 can be associated with one or more graphical elements that represent a human body with a height and gender that is the same as the user 106, a waist measurement that corresponds to the waist measurement associated with the previously selected data item 122, and a different BMI. In some examples, as described above, one of the data items 122 in the second set of data items 134 can have a same BMI as the data item 122 selected in the first set of data items 134.
Block 310 illustrates receiving data associated with a second input from the user 106. A user 106 can interact with a user interface to indicate which data item 122 of the second set of data items 134 is associated with one or more graphical elements that look most similar to him or her. As described above, in some examples, a user 106 can interact with a user interface to indicate which data item 122 of the second set of data items 134 is associated with one or more graphical elements that look most similar to another person (e.g., a friend, a family member, a suspected criminal/person of interest, etc.). As a non-limiting example, the box 310 indicates that the user 106 selects the data item within the box 310 as the data item associated with one or more graphical elements that are most representative of his body. The device 108 can determine the user selection and can send data associated with the second input to the data collection module 116. The data collection module 116 can receive the data associated with the second input and can determine the magnitudes associated with the data item 122 selected by the user 106. The data collection module 116 can log the second input and associate the log with the user 106 in the user data 118. The data collection module 116 can send data associated with the second input (e.g., the magnitudes associated with the data item 122 selected by the user 106) to the presentation module 126. Block 312 illustrates causing a third set of data items 134 to be presented to the user 106. As described above, the presentation module 126 can iteratively cause additional sets of data items 134 to be presented to the user 106.
In at least one example, based at least in part on receiving data associated with the second input indicating which data item 122 is associated with one or more graphical elements that best represent the user 106, the data collection module 116 can analyze the selection and determine the magnitudes associated with the data item 122 selected by the user 106, as described above. The presentation module 126 can receive data associated with the second input (e.g., the magnitudes associated with the data item 122 selected by the user 106) and can utilize the data and the index associated with the database 120 to access a predetermined number of data items 122 that are associated with one or more graphical elements that represent human bodies having the same gender, height, and set of magnitudes, with the exception of magnitudes associated with one dimension. The third set of data items 134 can be associated with one or more graphical elements that represent human bodies that have different magnitudes associated with a different dimension than the first set and/or second set of data items 134. As a non-limiting example, if the second set of data items 134 are associated with one or more graphical elements that represent human bodies that have a different BMI, the third set of data items 134 can be associated with one or more graphical elements that represent human bodies that have a different waist measurement and/or other magnitude associated with another dimension. Each data item 122 can be associated with one or more graphical elements that represent a human body with a different magnitude for the dimension. For instance, each data item 122 can be associated with one or more graphical elements that represent a human body with a height and gender that is the same as the user 106, a BMI that corresponds to the BMI associated with the previously selected data item 122, and different waist measurements (or other dimension).
Block 314 illustrates receiving data associated with a third input. A user 106 can interact with a user interface to indicate which data item 122 of the third set of data items 134 is associated with one or more graphical elements that look most similar to him or her. As described above, in some examples, a user 106 can interact with a user interface to indicate which data item 122 of the third set of data items 134 is associated with one or more graphical elements that look most similar to another person (e.g., a friend, a family member, a suspected criminal/person of interest, etc.). As a non-limiting example, the box 314 indicates that the user 106 selects the data item 122 within the box 314 as the data item 122 associated with one or more graphical elements that are most representative of his body. The device 108 can determine the user selection and can send data associated with the third input to the data collection module 116. The data collection module 116 can receive the third input and can determine the magnitudes associated with the data item 122 selected by the user 106. The data collection module 116 can log the third input and associate the log with the user 106 in the user data 118. The data collection module 116 can send data associated with the third input (e.g., the magnitudes associated with the data item 122 selected by the user 106) to the measurement estimation module 124. Block 316 illustrates estimating physical measurements associated with the user 106 or the other person, as described above. The measurement estimation module 124 accesses user data 118 including inputs indicating a selection of at least one data item 122 associated with one or more graphical elements that best represents the user's 106 body.
In additional and/or alternative examples, the measurement estimation module 124 can access user data 118 including inputs indicating a selection of at least one data item 122 associated with one or more graphical elements that best represents another person's body. The measurement estimation module 124 can utilize a predictive model (e.g., Predictive Model 1, Predictive Model 2, etc.) to compute estimated physical measurements based at least in part on the user data 118 (e.g., user gender, height, etc.) and the inputs. In at least one example, the measurement estimation module 124 can estimate at least a waist measurement and/or BMI using a predictive model like the predictive models described above. Process 300 includes three iterations of causing sets of data items 122 to be presented to users 106 and receiving data associated with inputs, but any number of iterations can be used to estimate physical measurements. In some examples, the data collection module 116 can receive data associated with inputs following each presentation of a set of data items 134 and can subsequently send the data associated with each input to the measurement estimation module 124. In other examples, after the last iteration, the data collection module 116 can receive data associated with a last input (e.g., the magnitudes associated with the last data item 122 selected by the user 106) and can subsequently send the data to the measurement estimation module 124 for estimating the physical measurements. FIG. 4 is a flow diagram that illustrates an example process 400 to train a predictive model. Block 402 illustrates accessing user data 118 and data associated with inputs. The training module 128 can receive and/or access user data 118 from the data collection module 116. In at least one example, the training module 128 can access determined physical measurements associated with users 106 (i.e., physical measurements ascertained by measurement devices). 
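The training of a model shaped like Predictive Model 1 can be sketched minimally as below. The rows and measured targets are synthetic, and ordinary least squares is assumed as the fitting method (the disclosure names machine learning generally, not a specific algorithm); real training would use determined physical measurements from user data 118:

```python
# Sketch of fitting beta weights for a model shaped like Predictive
# Model 1 by ordinary least squares over synthetic training data.
import numpy as np

# (is_male, selected_bmi, selected_waist) per training example
rows = [(1, 24.0, 820.0), (1, 27.0, 900.0), (0, 22.0, 700.0), (0, 25.0, 780.0)]
# Design matrix columns: 1, is_male, selected_bmi, selected_waist, waist*bmi
X = np.array([[1.0, m, b, w, w * b] for m, b, w in rows])
y = np.array([24.5, 27.2, 21.8, 25.1])  # measured BMIs (synthetic)

# Solve for the beta weights minimizing squared prediction error.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
predictions = X @ betas  # in-sample predictions from the fitted weights
```

The fitted `betas` vector plays the role of the (β0...β4) weights in Predictive Model 1; retraining as new determined measurements arrive updates the weights.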
Additionally, the training module 128 can access data associated with inputs associated with users 106. As described above, the inputs can be associated with user selections of data items 122 from each set of data items 134 caused to be presented to the user 106. The determined physical measurements and the data associated with the inputs can be associated with profiles corresponding to the users 106 and stored in the database associated with the user data 118 and/or some other data repository associated with the user data 118. Block 404 illustrates accessing estimated physical measurements. The training module 128 can access the estimated physical measurements determined by the measurement estimation module 124. The estimated physical measurements can also be associated with each individual user 106 and/or the determined physical measurements and data associated with the inputs. Block 406 illustrates training predictive models. The training module 128 can leverage machine learning algorithms (e.g., supervised learning, unsupervised learning, semi-supervised learning, deep learning, etc.) to learn predictive models. The measurement estimation module 124 can leverage the predictive models to estimate physical measurements based at least in part on one or more determined physical measurements and the inputs. Predictive Model 1 and Predictive Model 2, described above, are examples of predictive models learned using the machine learning algorithms. Example Clauses A. 
A computer-implemented method for estimating physical measurements associated with users based at least in part on generating personalized user interfaces that provide functionality for receiving user input, the computer-implemented method comprising: accessing a database storing a plurality of data items; causing a set of data items of the plurality of data items to be presented to a user via a user interface that is presented on a display of a device associated with the user, wherein first individual data items in the set of data items are associated with at least one graphical element representing a human body with individual magnitudes corresponding to individual dimensions of a plurality of dimensions; receiving first data from the device, the first data indicating a selection of a first individual data item from the first individual data items, the first individual data item being associated with a first individual magnitude of the individual magnitudes associated with a first dimension of the plurality of dimensions and a second individual data item being associated with a second individual magnitude of the individual magnitudes associated with a second dimension of the plurality of dimensions; and estimating physical measurements associated with the user based at least in part on at least one of the first individual magnitude or the second individual magnitude. B. 
A computer-implemented method as paragraph A recites, further comprising: based at least in part on receiving the first data, causing an additional set of data items of the plurality of data items to be presented to the user via the user interface, wherein second individual data items in the additional set of data items are associated with at least an additional graphical element representing the human body with the second individual magnitude and third individual magnitudes of the individual magnitudes associated with a third dimension of the plurality of dimensions; receiving second data from the device, the second data indicating an additional selection of a second individual data item from the second individual data items, the second individual data item being associated with a third individual magnitude of the third individual magnitudes; and estimating the physical measurements associated with the user further based at least in part on the third individual magnitude. C. A computer-implemented method as paragraph B recites, wherein the third dimension corresponds to the first dimension. D. A computer-implemented method as paragraph C recites, wherein at least one data item in the additional set of data items comprises the first individual data item. E. A computer-implemented method as any of paragraphs A-D recite, wherein: the first dimension corresponds to a body mass index measurement; the second dimension corresponds to a waist measurement; and estimating the physical measurements comprises estimating a body mass index measurement of the user and a waist measurement of the user. F. A computer-implemented method as any of paragraphs A-E recite, wherein individual data items of the plurality of data items stored in the database are associated with graphical elements representing the human body that correspond to a same height and a same gender as the user. G. 
A computer-implemented method as any of paragraphs A-F recite, wherein the estimating the physical measurements associated with the user comprises estimating the physical measurements further based at least in part on a multiple regression predictive model. H. A computer-implemented method as any of paragraphs A-G recite, wherein: individual data items of the plurality of data items in the database are associated with two graphical elements representing the human body; and each graphical element of the two graphical elements corresponds to a different view of the human body. I. A computer-implemented method as any of paragraphs A-H recite, further comprising: generating an additional user interface that provides functionality to present the physical measurements to the user via the display of the device; and sending the additional user interface to the device. J. One or more computer-readable media encoded with instructions that, when executed by a processor, configure a computer to perform a method as any of paragraphs A-I recite. K. A device comprising one or more processors and one or more computer readable media encoded with instructions that, when executed by the one or more processors, configure a computer to perform a computer-implemented method as any of paragraphs A-I recite. L. 
A computer-implemented method for estimating physical measurements associated with users based at least in part on generating personalized user interfaces that provide functionality for receiving user input, the computer-implemented method comprising: means for accessing a database storing a plurality of data items; means for causing a set of data items of the plurality of data items to be presented to a user via a user interface that is presented on a display of a device associated with the user, wherein first individual data items in the set of data items are associated with at least one graphical element representing a human body with individual magnitudes corresponding to individual dimensions of a plurality of dimensions; means for receiving first data from the device, the first data indicating a selection of a first individual data item from the first individual data items, the first individual data item being associated with a first individual magnitude of the individual magnitudes associated with a first dimension of the plurality of dimensions and a second individual data item being associated with a second individual magnitude of the individual magnitudes associated with a second dimension of the plurality of dimensions; and means for estimating physical measurements associated with the user based at least in part on at least one of the first individual magnitude or the second individual magnitude. M. 
A computer-implemented method as paragraph L recites, further comprising: based at least in part on receiving the first data, means for causing an additional set of data items of the plurality of data items to be presented to the user via the user interface, wherein second individual data items in the additional set of data items are associated with at least an additional graphical element representing the human body with the second individual magnitude and third individual magnitudes of the individual magnitudes associated with a third dimension of the plurality of dimensions; means for receiving second data from the device, the second data indicating an additional selection of a second individual data item from the second individual data items, the second individual data item being associated with a third individual magnitude of the third individual magnitudes; and means for estimating the physical measurements associated with the user further based at least in part on the third individual magnitude. N. A computer-implemented method as paragraph M recites, wherein the third dimension corresponds to the first dimension. O. A computer-implemented method as paragraph N recites, wherein at least one data item in the additional set of data items comprises the first individual data item. P. A computer-implemented method as any of paragraphs L-O recite, wherein: the first dimension corresponds to a body mass index measurement; the second dimension corresponds to a waist measurement; and estimating the physical measurements comprises means for estimating a body mass index measurement of the user and a waist measurement of the user. Q. A computer-implemented method as any of paragraphs L-P recite, wherein individual data items of the plurality of data items stored in the database are associated with graphical elements representing the human body that correspond to a same height and a same gender as the user. R. 
A computer-implemented method as any of paragraphs L-Q recite, wherein the estimating the physical measurements associated with the user comprises means for estimating the physical measurements further based at least in part on a multiple regression predictive model. S. A computer-implemented method as any of paragraphs L-R recite, wherein: individual data items of the plurality of data items in the database are associated with two graphical elements representing the human body; and each graphical element of the two graphical elements corresponds to a different view of the human body. T. A computer-implemented method as any of paragraphs L-S recite, further comprising: means for generating an additional user interface that provides functionality to present the physical measurements to the user via the display of the device; and means for sending the additional user interface to the device. U. A system comprising: one or more processors; memory; and one or more modules stored in the memory and executable by the one or more processors to perform operations comprising: generating a user interface for presenting a first set of data items of a plurality of data items to a user, wherein first individual data items in the first set of data items represent human bodies having at least first magnitudes of a set of magnitudes associated with first dimensions and second magnitudes of the set of magnitudes associated with second dimensions; receiving data indicating a selection of a first individual data item of the first individual data items; computing estimated physical measurements associated with the user based at least in part on the selection; and generating a second user interface for presenting the estimated physical measurements to the user. V. 
The system as paragraph U recites, wherein the operations further comprise determining, based at least in part on the selection of the first individual data item, a first magnitude of the first magnitudes represented by the first individual data item and a second magnitude of the second magnitudes represented by the first individual data item. W. The system as paragraph V recites, wherein: the operations further comprise, based at least in part on receiving the data, generating at least one additional user interface for presenting an additional set of data items of the plurality of data items to the user; second individual data items in the additional set of data items are associated with the first magnitude or the second magnitude; and the second individual data items are each associated with a different magnitude associated with a third magnitude of the set of magnitudes associated with a third dimension. X. The system as paragraph W recites, wherein the operations further comprise: receiving additional data indicating an additional selection of a second individual data item from the second individual data items; and computing the estimated physical measurements further based at least in part on the additional selection. Y. 
A first device comprising: one or more processors; memory; and one or more modules stored in the memory and executable by the one or more processors to perform operations comprising: causing a first user interface of a first set of data items to be presented to a user associated with the first device, wherein first individual data items in the first set of data items represent human bodies having at least first magnitudes associated with a first dimension and second magnitudes associated with a second dimension, wherein individual second magnitudes of the second magnitudes are different for each first individual data item of the first individual data items; determining a first input indicating a first selection of a first individual data item of the first individual data items, wherein the first individual data item is associated with an individual second magnitude of the individual second magnitudes; sending first data corresponding to the first input to a second device; receiving second data from the second device, the second data corresponding to estimated physical measurements determined based in part on the individual second magnitude; and causing a data item corresponding to the estimated physical measurements to be presented to the user via the first device. Z. A first device as paragraph Y recites, wherein the first magnitudes associated with first dimensions correspond to a standardized measurement associated with a population of previously measured users. AA. 
A first device as any of paragraphs Y or Z recite, wherein the operations further comprise: based at least in part on sending the first data, causing a second user interface of a second set of data items to be presented to the user, wherein second individual data items in the second set of data items represent human bodies having the second magnitude and third magnitudes associated with a third dimension, wherein individual third magnitudes of the third magnitudes are different for each second individual data item of the second individual data items; determining a second input indicating a second selection of a second individual data item of the second individual data items, wherein the second individual data item is associated with an individual third magnitude of the individual third magnitudes; sending third data corresponding to the second input to the second device; and receiving the second data from the second device, the second data corresponding to the estimated physical measurements further determined based in part on the individual second magnitude and the individual third magnitude. AB. A first device as paragraph AA recites, wherein the third dimension corresponds to the first dimension. AC. 
A first device as paragraph AA recites, wherein the operations further comprise: based at least in part on sending the third data, causing a third user interface of a third set of data items to be presented to the user, wherein third individual data items in the third set of data items represent human bodies having the third magnitude and fourth magnitudes associated with a fourth dimension, wherein individual fourth magnitudes of the fourth magnitudes are different for each third individual data item of the third individual data items; determining a third input indicating a third selection of a third individual data item of the third individual data items, wherein the third individual data item is associated with an individual fourth magnitude of the individual fourth magnitudes; sending fourth data corresponding to the third input to the second device; and receiving the second data from the second device, the second data corresponding to the estimated physical measurements further determined based in part on the individual second magnitude, the individual third magnitude, and the individual fourth magnitude. AD. A first device as paragraph AC recites, wherein the fourth dimension corresponds to the second dimension. AE. A first device as paragraph AC recites, wherein: the first dimension and the third dimension correspond to a body mass index measurement; the second dimension and the fourth dimension correspond to a waist measurement; and the estimated physical measurements include an estimated body mass index measurement and an estimated waist measurement. Conclusion Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are described as illustrative forms of implementing the claims. 
Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, is understood within the context to present that certain examples include, while other examples do not necessarily include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that certain features, elements and/or steps are in any way required for one or more examples or that one or more examples necessarily include logic for deciding, with or without input or prompting, whether certain features, elements and/or steps are included or are to be performed in any particular example. Conjunctive language such as the phrase “at least one of X, Y or Z,” unless specifically stated otherwise, is to be understood to present that an item, term, etc. can be either X, Y, or Z, or a combination thereof. 16658411 microsoft technology licensing, llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:00AM Apr 27th, 2022 09:00AM Technology Software & Computer Services Information Technology
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Dec 3rd, 2018 12:00AM https://www.uspto.gov?id=US11314409-20220426 Modeless augmentations to a virtual trackpad on a multiple screen computing device The disclosed technologies address various technical and user experience problems by augmenting the functionality provided by virtual on-screen trackpads without requiring users to switch between modes. In this way, users can access extended functionality without interfering with expected traditional trackpad functionality (e.g. moving a cursor, clicking, and scrolling). In particular, technologies are disclosed for modeless digital pen input to a virtual trackpad, modeless gestures for summoning user interfaces, modeless gestures utilizing position relative to adjacent user interfaces, and modeless cursor control and interaction with virtual touch targets. 11314409 1. A computing device, comprising: a processor; a first display device providing a first display region having an edge; a touch-sensitive display device providing a second display region having an edge parallel with and adjacent to the edge of the first display region, wherein the touch-sensitive display device is physically attached to maintain physical alignment between the first display region and the second display region; and a memory storing instructions executable by the processor to: execute a first application configured to present a graphical user interface (GUI) in a first portion of the first display region, the first portion of the first display region being physically aligned with a first virtual trackpad portion of the second display region; execute a second application configured to present a GUI in a second portion of the first display region concurrently with the presentation of the GUI of the first application in the first portion of the first display region, the second portion of the first display region being physically aligned with a second virtual trackpad portion of the second display 
region; based on receiving first input in the first virtual trackpad portion of the second display region, process the first input by way of an operating system in a context of the first application or provide the first input to the first application; and based on receiving second input in the second virtual trackpad portion of the second display region, process the second input by way of the operating system in a context of the second application or provide the second input to the second application. 2. The computing device of claim 1, wherein the first input or second input comprise a multi-touch gesture. 3. The computing device of claim 1, wherein the first input or second input comprise user input received from a digital pen. 4. The computing device of claim 1, wherein the memory stores further instructions executable by the processor to: detect a user input gesture originating at a location in a third portion of the second display region and terminating in the first virtual trackpad portion of the second display region or the second virtual trackpad portion of the second display region; and responsive to detecting the user input gesture, perform a command, the command selected based, at least in part, upon the location in the third portion at which the user input gesture originated. 5. The computing device of claim 4, wherein the third portion displays a keyboard having a plurality of keys, and wherein the gesture originates at one of the plurality of keys. 6. The computing device of claim 1, wherein the first virtual trackpad portion of the second display region defines a first input region that is configured to only provide input to the first application, and wherein the second virtual trackpad portion of the second display region defines a second input region that is configured to only provide input to the second application. 7. 
The computing device of claim 1, wherein the first virtual trackpad portion of the second display region does not overlap with the second virtual trackpad portion of the second display region. 8. The computing device of claim 1, wherein the first portion of the first display region does not overlap with the second portion of the first display region. 9. The computing device of claim 1, wherein a width of the first portion of the first display region corresponds with a width of the first virtual trackpad portion of the second display region. 10. The computing device of claim 1, wherein a width of the second portion of the first display region is based on a width of the second virtual trackpad portion of the second display region. 11. The computing device of claim 1, wherein the first display device and the touch-sensitive display are mechanically attached by a hinge. 12. A computer-implemented method, comprising: executing a first application configured to present a graphical user interface (GUI) in a first portion of a first display region of a first display device, the first portion of the first display region being spatially aligned with a first virtual trackpad portion of a second display region of a touch-sensitive display, wherein the first display device and the touch-sensitive display are physically attached to maintain a physical alignment between the first portion of the first display region and the first virtual trackpad portion of the second display region; executing a second application configured to present a GUI in a second portion of the first display region concurrently with the presentation of the GUI of the first application in the first portion of the first display region, the second portion of the first display region being physically aligned with a second virtual trackpad portion of the second display region, wherein the first display device and the touch-sensitive display are physically attached to maintain a physical alignment between the 
second portion of the first display region and the second virtual trackpad portion of the second display region; responsive to receiving first input in the first virtual trackpad portion of the second display region, processing the first input in a context of the first application or providing the first input to the first application; and responsive to receiving second input in the second virtual trackpad portion of the second display region, processing the second input in a context of the second application or providing the second input to the second application. 13. The computer-implemented method of claim 12, wherein the first input or second input comprise a multi-touch gesture. 14. The computer-implemented method of claim 12, further comprising: detecting a user input gesture originating at a location in a third portion of the second display region and terminating in the first virtual trackpad portion of the second display region or the second virtual trackpad portion of the second display region; and responsive to detecting the user input gesture, performing a command, the command selected based, at least in part, upon the location in the third portion at which the user input gesture originated. 15. The computer-implemented method of claim 12, wherein the first portion of the first display region and the first virtual trackpad portion of the second display region have a same width. 16. The computer-implemented method of claim 12, wherein the second portion of the first display region and the second virtual trackpad portion of the second display region have a same width. 17. 
A computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by a computer, cause the computer to: execute a first application configured to present a graphical user interface (GUI) in a first portion of a first display region, the first portion of the first display region being physically aligned with a first virtual trackpad portion of a second display region positioned concurrently with a second virtual trackpad portion of the second display region, wherein the GUI of the first application displayed in the first portion of the first display region is concurrently displayed with a GUI displayed in a second portion of the first display region, wherein a first display device comprising the first display region and a touch-sensitive display comprising the second display region are physically attached to maintain a physical alignment between the second portion of the first display region and the second virtual trackpad portion of the second display region; and responsive to receiving first input in the first virtual trackpad portion of the second display region, process the first input in a context of the first application or provide the first input to the first application. 18. The computer-readable storage medium of claim 17, having further computer-executable instructions stored thereupon to: execute a second application configured to be presented in the UI displayed in the second portion of the first display region, the second portion of the first display region being spatially aligned with the second virtual trackpad portion of the second display region; and responsive to receiving second input in the second virtual trackpad portion of the second display region, process the second input in a context of the second application or provide the second input to the second application. 19. 
The computer-readable storage medium of claim 18, having further computer-executable instructions stored thereupon to: detect a user input gesture originating at a location in a third portion of the second display region and terminating in the first virtual trackpad portion of the second display region or the second virtual trackpad portion of the second display region; and responsive to detecting the user input gesture, perform a command, the command selected based, at least in part, upon the location in the third portion at which the user input gesture originated. 20. The computer-readable storage medium of claim 18, wherein the first portion of the first display region and the first virtual trackpad portion of the second display region have a first width and wherein the second portion of the first display region and the second virtual trackpad portion of the second display region have a second width. 20 BACKGROUND Trackpads, which might also be referred to as touchpads, are user input pointing devices having a specialized flat surface that is capable of detecting finger contact. The surface can translate the position and motion of a user's finger, or fingers, to a relative position on the screen of a computing device. The tracked position and motion of the user's finger can then be used to move an on-screen cursor or to perform other types of functionality. Trackpads also include functionality for performing an activation operation, such as by clicking a physical button or tapping the trackpad. The functionality provided by traditional trackpads, however, is typically limited to cursor control, selection and, potentially, basic single or multi-touch gestures. Virtual on-screen trackpads enable touch-screen displays to provide functionality that is similar to traditional physical trackpads. Existing virtual onscreen trackpads, however, merely replicate, often poorly, the behaviors of traditional trackpads. 
Moreover, on-screen regions that support traditional trackpad functions in addition to other functionality rely on users to perform explicit mode switches, which require additional steps and take the user out of their workflow. The user input required to perform mode switches can also be difficult to remember and confusing to users, thereby resulting in inadvertent or incorrect user input, which unnecessarily consumes computing resources, like processor cycles and memory. Performing a mode switch can also result in the execution of additional program code, which can also consume computing resources like processor cycles and memory. It is with respect to these and other technical challenges that the disclosure made herein is presented. SUMMARY Technologies are disclosed herein that can modelessly augment the functionality provided by virtual trackpads. The disclosed technologies address the technical problems described above by augmenting the functionality provided by virtual on-screen trackpads without requiring users to switch between modes. In this way, users can access extended functionality without interfering with expected traditional trackpad functionality (e.g. moving a cursor, clicking, and scrolling) and, therefore, have an improved user experience. Additionally, the utilization of computing resources can be reduced through simplified user interaction and execution of less program code as compared to previous modal virtual trackpad solutions that require mode switching. Other technical benefits not specifically mentioned herein can also be realized through implementations of the disclosed subject matter. The technologies disclosed herein are implemented in conjunction with a multiple screen computing device in some configurations. Multiple screen computing devices come in multiple form factors, such as a two screen hinged device resembling a traditional laptop computer. 
A device configured in this way includes a first display device located in the same location as a display on a traditional laptop computer. The first display device may or may not be touch-sensitive in various configurations. The first display device provides a first display region that occupies all or a portion of the physical display. In some configurations of a multiple screen computing device, a second display device is located where a physical keyboard is traditionally located on a laptop computer. The second display device might occupy the entire area where a physical keyboard is traditionally located or might only occupy a portion of that area. The second display device is equipped with sensors for detecting user input, such as single and multi-touch gestures. The second display device provides a second display region that occupies all or a portion of the physical display. An edge (e.g. the bottom edge) of the first display region is arranged adjacent to and parallel with an edge (e.g. the top edge) of the second display region. The distance between the two display regions might be greater or smaller depending upon the particular mechanical connection (e.g. a hinge) connecting the respective housings of the display devices providing the display regions. Respective sides of the first and second display regions can also be spatially aligned. As will be described in greater detail below, the spatial alignment of the display regions enables certain types of modeless gestures to be made in the second display region with respect to corresponding areas of the first display region. The second display region can be divided into a virtual trackpad area and a virtual keyboard area. The virtual trackpad area can be utilized as a virtual trackpad. For example, the virtual trackpad can determine the position and motion of a user's finger, which can then be used to move an on-screen cursor or to perform other types of functionality. 
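A minimal sketch of how the spatial alignment described above could drive input routing, assuming each application's portion of the first display region shares its horizontal span with a virtual trackpad portion directly below it; the region names and coordinates are illustrative assumptions.

```python
# Sketch of alignment-based routing: a touch in the virtual trackpad
# area is delivered to whichever application owns the aligned portion
# above it. Region names and pixel coordinates are made up.

def route_input(touch_x, regions):
    """Return the application whose aligned trackpad portion contains
    touch_x, or None if the touch falls outside every portion.

    regions: list of (app_name, left_x, right_x); portions do not
    overlap, matching the non-overlap described in the claims.
    """
    for app, left, right in regions:
        if left <= touch_x < right:
            return app
    return None

# Two side-by-side applications sharing a 1280-px-wide trackpad area:
regions = [("mail_app", 0, 640), ("browser_app", 640, 1280)]
target = route_input(900, regions)
```

Because the displays are physically attached, the mapping between a trackpad x-coordinate and the application above it stays fixed, which is what makes this routing modeless.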
The virtual trackpad area can also be utilized to perform an activation operation such as, for example, by tapping within the virtual trackpad area. The virtual trackpad area encompasses the entire width of the second display region and is located in a portion of the second display region adjacent to the first display region in some configurations. Modeless Digital Pen Input to a Virtual Trackpad In one configuration, the virtual trackpad area can also receive input from a digital pen without requiring a user to change the mode of operation of the virtual trackpad. In this configuration, a user can write directly in the virtual trackpad area at any time using a digital pen. In particular, when input is received within the virtual trackpad area, the computing device determines whether the input is touch input (i.e. a user's finger) or input from a digital pen. If the input is touch input, the input is processed as touch input to a virtual trackpad to move a cursor or perform an activation operation, for example. If the input is input received from a digital pen, the input is processed as digital ink. Processing the input as digital ink can include, but is not limited to, converting the digital ink to text and providing the text to a program for presentation of the text within a field of a user interface (“UI”) presented in the first display region. Processing the input as digital ink can further include providing the digital ink to a program for presentation of the digital ink within a field of a UI presented in the second display region. Processing the input as digital ink can also include converting the digital ink to text, recognizing a command in the text, and causing the computing device to execute the command. Processing the input as digital ink might additionally include providing the digital ink to a default program for storing notes or converting the digital ink to text and providing the text to a default program for storing notes. 
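The touch-versus-pen dispatch described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical `PointerEvent` type whose `source` field reports whether the platform classified the input as a finger or a digital pen; real input stacks expose this classification per pointer event, though the names and shapes here are invented:

```python
from dataclasses import dataclass

# Hypothetical event type; actual platforms report the input source
# (finger vs. stylus) with each pointer event.
@dataclass
class PointerEvent:
    x: float
    y: float
    source: str  # "touch" or "pen"

def dispatch_trackpad_input(event, on_touch, on_ink):
    """Route input in the virtual trackpad area without a mode switch:
    finger input drives the cursor, pen input is collected as digital ink."""
    if event.source == "touch":
        return on_touch(event)   # e.g. move cursor, tap-to-click
    elif event.source == "pen":
        return on_ink(event)     # e.g. accumulate ink strokes
    return None                  # unknown source: ignore

# Usage: the same handler serves both kinds of input, so no mode
# change is ever required of the user.
result = dispatch_trackpad_input(
    PointerEvent(10, 20, "pen"),
    on_touch=lambda e: ("cursor", e.x, e.y),
    on_ink=lambda e: ("ink", e.x, e.y),
)
# result == ("ink", 10, 20)
```

Because the branch is taken per event rather than per session, touch and pen input can interleave freely, which is the essence of the modeless behavior described above.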
In some configurations, it is not necessary to convert the digital ink to text. Rather, commands can be recognized directly from the digital ink such as, for instance, when the ink represents non-text information such as shapes or arrows. Modeless Gestures for Summoning User Interfaces In another configuration, the virtual trackpad area can be utilized to initiate the presentation of transient UIs in the second display region for performing commands or viewing information without requiring a user to change the mode of operation of the virtual trackpad area. In particular, when input is received within the virtual trackpad area, the computing device determines whether the touch input comprises a touch gesture originating outside an edge of the virtual trackpad area of the second display region and terminating inside the virtual trackpad area. For example, a user might utilize a finger, or multiple fingers, to swipe from a location outside the virtual trackpad area to a location within the virtual trackpad area. If the computing device detects a touch gesture originating outside an edge of the virtual trackpad area of the second display region and terminating inside the virtual trackpad area, the computing device can display a transient UI in the virtual trackpad area. The transient UI might be animated as it is displayed such as, for instance, by “sliding” into the virtual trackpad area from the edge of the virtual trackpad area where the gesture originated. In some configurations, the transient UI includes selectable UI elements, such as UI buttons or other types of UI elements. If the computing device determines that touch input has been made in the virtual trackpad area selecting one of the one or more selectable UI elements, the computing device can initiate a command corresponding to the selected UI element. 
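A gesture of the kind just described can be detected from its start and end points alone. The sketch below shows one way to classify such an edge swipe, under the assumption that the virtual trackpad area is an axis-aligned rectangle; the coordinate values and rectangle layout are illustrative, not taken from the disclosure:

```python
def classify_edge_swipe(start, end, trackpad_rect):
    """Return the edge ("left", "right", "top", "bottom") a swipe entered
    from, or None if the gesture did not originate outside the trackpad
    area and terminate inside it."""
    left, top, right, bottom = trackpad_rect
    inside = lambda p: left <= p[0] <= right and top <= p[1] <= bottom
    if inside(start) or not inside(end):
        return None
    sx, sy = start
    if sx < left:
        return "left"
    if sx > right:
        return "right"
    if sy < top:
        return "top"
    return "bottom"

# A swipe from left of the trackpad area to a point inside it would
# summon the transient UI sliding in from the left edge:
rect = (0, 0, 300, 100)  # left, top, right, bottom (assumed layout)
assert classify_edge_swipe((-10, 50), (40, 50), rect) == "left"
# A gesture that starts inside the area is ordinary trackpad input:
assert classify_edge_swipe((40, 50), (60, 50), rect) is None
```

The returned edge can then drive both which transient UI is shown and the direction of its slide-in animation.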
For instance, a command might be initiated to launch an application that can present a UI in the first display region, switch to an already-launched application (i.e. to switch tasks), or to perform another function. In some configurations, the display of the transient UI is removed from the virtual trackpad area in response to the selection of one of the UI elements. In other configurations, a touch gesture can be utilized to dismiss the transient UI. For example, a user might utilize a finger, or multiple fingers, to swipe from a location inside the virtual trackpad area to a location outside the virtual trackpad area to dismiss the transient UI. The display of the transient UI can also be animated as it is removed such as, for example, by “sliding” the transient UI out of the virtual trackpad area to the edge of the virtual trackpad area where the gesture terminates. In other configurations, an input gesture made from outside an edge of the virtual trackpad area to a location inside the virtual trackpad area can perform functions based upon the relationship between the ending location of the gesture and UI elements presented in the first display region or in the keyboard area of the second display region. For example, a user might set the output volume of the computing device by swiping right from an area outside the virtual trackpad area to a location within the virtual trackpad area. The location at which the gesture ends with respect to a virtual keyboard (e.g. horizontally aligned) specifies the output volume. For instance, if the gesture ends at a location that is spatially aligned with the ‘1’ key on the virtual keyboard, the output volume will be set to one (e.g. 10% of maximum). If the gesture ends at a location that is spatially aligned (e.g. horizontally aligned) with the ‘9’ key on the virtual keyboard, the output volume will be set to nine (e.g. 90% of maximum). 
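The volume example above can be modeled as a nearest-key lookup on the horizontal axis: the x-coordinate where the gesture ends is matched against the centers of the number keys it is spatially aligned with. The key positions below are invented for illustration; in practice they would come from the virtual keyboard's layout:

```python
def volume_from_gesture_end(end_x, key_centers):
    """Map the x-coordinate where an edge gesture ends to a volume level,
    based on which number key of the virtual keyboard it is horizontally
    aligned with (nearest key center wins)."""
    key, _ = min(key_centers.items(), key=lambda kv: abs(kv[1] - end_x))
    return int(key) * 10  # '1' -> 10% of maximum ... '9' -> 90%

# Assumed layout: number-row key centers, 30 units apart, in the same
# coordinate space as the trackpad gesture.
keys = {str(n): 20 + 30 * (n - 1) for n in range(1, 10)}
assert volume_from_gesture_end(22, keys) == 10   # aligned with the '1' key
assert volume_from_gesture_end(260, keys) == 90  # aligned with the '9' key
```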
In this way, many different specific edge gestures can be performed along a common edge of the virtual trackpad area. Modeless Gestures Utilizing Relative Position In another configuration, the virtual trackpad area can be utilized to enable gestures having functionality that is determined based upon the starting point of the gestures relative to adjacent on-screen UI, another type of object, or a keyboard. In this configuration, a first application executing on the computing device can present a UI in a first portion of the first display region: the left half of the first display region, for example. A first portion of the virtual trackpad area is spatially aligned with the first portion of the first display region. For instance, the first portion of the virtual trackpad area might include the left half of the second display region, which is spatially aligned with the left half of the first display region. Similarly, a second application executing on the computing device can present a UI in a second portion of the first display region: the right half of the first display region, for example. A second portion of the virtual trackpad area is spatially aligned with the second portion of the first display region. For instance, the second portion of the virtual trackpad area might include the right half of the second display region, which is spatially aligned with the right half of the first display region. In this way, one application is assigned a portion of the first display region that corresponds to a spatially aligned portion of the virtual trackpad area, and another application is assigned another portion of the first display region that is spatially aligned with another portion of the virtual trackpad area. If the computing device receives input in the first portion of the virtual trackpad area, the computing device provides the input to the application or interprets the input (e.g. 
a command) with respect to the application that is presenting its UI in the adjacent portion of the first display region. For example, if input is received in the left half of the virtual trackpad area, the input will be provided to the application presenting its UI in the left half of the first display region. If the computing device receives input in the second portion of the virtual trackpad area, the computing device provides the input to the application that is presenting its UI in the adjacent portion of the first display region. For example, if input is received in the right half of the virtual trackpad area, the input will be provided to the application presenting its UI in the right half of the first display region. The input might be, for example, a multi-touch input gesture or input received from a digital pen in order to distinguish the input from traditional virtual trackpad input for controlling a cursor, for example. Specific examples of gestures include gestures for minimizing or closing windows associated with an application. In some configurations, the computing device can detect a user input gesture that originates in an area outside the virtual trackpad area and that terminates within the virtual trackpad area. For example, a user input gesture like a swipe gesture might originate from within a keyboard area and terminate within the virtual trackpad area. Responsive to detecting such a user input gesture, the computing device can perform a command that is selected based, at least in part, upon the location at which the gesture originated. For instance, when a gesture originates on a key of a virtual keyboard and ends in the virtual trackpad area, the command can be selected based upon the particular key upon which the gesture originated. 
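One way to sketch this spatial routing is to split the trackpad width into as many portions as there are side-by-side application UIs and index by the x-coordinate of the input. The even split and the application names below are assumptions made for illustration:

```python
def route_to_application(x, trackpad_width, apps):
    """Pick the application whose UI occupies the display-region portion
    spatially aligned with the trackpad portion where input occurred.
    `apps` lists applications left to right; portions split the width
    evenly in this sketch."""
    portion = int(x / trackpad_width * len(apps))
    portion = min(portion, len(apps) - 1)  # clamp input on the right edge
    return apps[portion]

# Two applications split the aligned display region into halves:
apps = ["mail_app", "browser_app"]
assert route_to_application(40, 300, apps) == "mail_app"      # left half
assert route_to_application(200, 300, apps) == "browser_app"  # right half
```

Input received in a given portion would then be delivered to, or interpreted on behalf of, the returned application.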
Modeless Cursor Control and Interaction with Touch Targets In another configuration, a display region encompassing a virtual trackpad area on a touch-sensitive display can show touch targets for initiating various types of functionality. Cursor control can be performed using the virtual trackpad area and the touch targets can be selected without changing the mode of the virtual trackpad. For example, one or more UI controls can be displayed outside a virtual trackpad area. When selected, such as by using a touch gesture, the UI controls can initiate various types of functionality such as, for instance, launching a program or summoning a digital assistant. In this configuration, a touch gesture can originate inside the virtual trackpad area and terminate within one of the UI controls without causing selection of the UI control. In this way, inadvertent exiting of the virtual trackpad area while performing a cursor control gesture will not cause the selection of one of the UI controls. Similarly, a touch gesture can originate inside one of the UI controls and end inside the virtual trackpad area, or another area, without causing the selection of the UI control. In this way, a cursor control gesture can be performed outside of the virtual trackpad area, even if it begins within a UI control. The starting location and timing of a user input gesture can be used to disambiguate cursor control and interactions with the touch targets, without requiring a mode switch. In some configurations, the computing device can detect a user input gesture that originates within the virtual trackpad area and that terminates outside the virtual trackpad area. In this example, the computing device can perform a command that is selected based, at least in part, upon the object upon which the gesture terminates. 
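The origin-based disambiguation described above can be sketched as follows: the location where a gesture starts, not where it ends, decides whether it is cursor control or a touch-target selection. The target names and coordinates are illustrative assumptions:

```python
def interpret_gesture(start, end, trackpad_rect, targets):
    """Disambiguate by where a gesture ORIGINATES: a gesture starting in
    the trackpad area is cursor control even if it ends on a touch target;
    a gesture both starting and ending inside a target selects it."""
    left, top, right, bottom = trackpad_rect
    in_trackpad = lambda p: left <= p[0] <= right and top <= p[1] <= bottom
    if in_trackpad(start):
        return ("cursor", None)
    for name, (l, t, r, b) in targets.items():
        if l <= start[0] <= r and t <= start[1] <= b \
           and l <= end[0] <= r and t <= end[1] <= b:
            return ("select", name)
    return ("cursor", None)  # e.g. a drag that merely began on a target

rect = (0, 0, 300, 100)
targets = {"assistant": (310, 0, 350, 40)}  # assumed touch-target bounds
# A drag leaving the trackpad and ending on the button: still cursor control.
assert interpret_gesture((150, 50), (320, 20), rect, targets) == ("cursor", None)
# A tap contained within the button: selection.
assert interpret_gesture((320, 20), (322, 22), rect, targets) == ("select", "assistant")
```

Timing (e.g. a tap versus a sustained drag) could be layered on the same decision without changing its modeless character.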
In another configuration, physical input objects can be placed on one of several regions of a virtual trackpad area to enable additional functionality such as, but not limited to, providing direct control to volume, brightness, and scrolling of a computing device. For example, a digital dial or other type of object might be placed in a virtual trackpad area and manipulated to adjust settings such as these and to perform other types of functionality. Although the embodiments disclosed herein are primarily presented in the context of a multiple screen computing device, the disclosed technologies can also be implemented in conjunction with single-screen computing devices that utilize a single display screen to provide a first display region and a second display region. It is also to be appreciated that although generally described separately, the various embodiments described briefly above and in further detail below can be utilized in combination with one another. It should also be appreciated that the above-described subject matter can be implemented as a computer-controlled apparatus, a computer-implemented method, a computing device, or as an article of manufacture such as a computer readable medium. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings. This Summary is provided to introduce a brief description of some aspects of the disclosed technologies in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended that this Summary be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 
1 is a schematic diagram illustrating aspects of the configuration and operation of a multiple screen computing device that implements the disclosed technologies in one particular configuration; FIGS. 2-5 are schematic diagrams showing aspects of a mechanism disclosed herein for modeless digital pen input on a virtual trackpad; FIG. 6 is a flow diagram showing a routine that illustrates aspects of the operation of the computing device shown in FIG. 1 for modeless digital pen input on a virtual trackpad as shown in FIGS. 2-5; FIGS. 7-11 are schematic diagrams showing aspects of a mechanism disclosed herein for modelessly summoning user interfaces in a virtual trackpad area; FIG. 12 is a flow diagram showing a routine that illustrates aspects of the operation of the computing device shown in FIG. 1 for modelessly summoning user interfaces in a virtual trackpad area as shown in FIGS. 7-11; FIGS. 13 and 14 are schematic diagrams showing aspects of a mechanism disclosed herein for enabling modeless gestures on a virtual trackpad that have functionality that is determined based upon the position of the starting point of the gestures relative to on-screen UI or a physical keyboard; FIG. 15 is a flow diagram showing a routine that illustrates aspects of the operation of the computing device shown in FIG. 1 for enabling modeless gestures on a virtual trackpad that have functionality determined based upon the position of the starting point of the gestures relative to on-screen UI as shown in FIGS. 13 and 14; FIGS. 16-19 are schematic diagrams showing aspects of a mechanism disclosed herein for enabling modeless cursor control and interaction with virtual touch targets; FIG. 20 is a flow diagram showing a routine that illustrates aspects of the operation of the computing device shown in FIG. 1 for modeless cursor control and interaction with virtual touch targets as shown in FIGS. 16-19; and FIG. 
21 is a computer architecture diagram showing an illustrative computer hardware and software architecture for a computing device that can implement aspects of the technologies presented herein. DETAILED DESCRIPTION The following detailed description is directed to technologies for modelessly augmenting the functionality provided by virtual trackpads. As discussed briefly above, implementations of the disclosed technologies can enable users to access extended functionality without interfering with expected traditional trackpad functionality (e.g. moving a cursor, clicking, and scrolling). Consequently, the utilization of computing resources can be reduced through simplified user interaction and execution of less program code as compared to previous modal virtual trackpad solutions. Other technical benefits not specifically mentioned herein can also be realized through implementations of the disclosed subject matter. Those skilled in the art will recognize that the subject matter disclosed herein can be implemented with various types of computing systems and modules, at least some of which are described in detail below. Those skilled in the art will also appreciate that the subject matter described herein can be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, computing or processing systems embedded in devices (such as wearable computing devices, automobiles, home automation, etc.), and the like. In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific configurations or examples. 
Referring now to the drawings, in which like numerals represent like elements throughout the several FIGS., aspects of various technologies for modelessly augmenting the functionality provided by virtual trackpads will be described. FIG. 1 is a schematic diagram illustrating aspects of the configuration and operation of a multiple screen computing device 100 (which might also be referred to herein as “the computing device 100” or simply “the device 100”) that implements the disclosed technologies in one particular configuration. As mentioned above, multiple screen computing devices such as the computing device 100 come in multiple form factors. The computing device 100 shown in FIG. 1, for example, is configured as a two screen hinged device resembling a traditional laptop computer. The disclosed technologies can, however, also be utilized with multiple screen computing devices having other configurations. As discussed above, the disclosed technologies can also be practiced with computing devices having a single folding display screen, such as computing devices utilizing flexible screen technology. As shown in FIG. 1, the illustrative computing device 100 includes two display devices. The first display device is mounted in a housing 104A and is located in the same location as on a traditional laptop computer when in use. The first display device provides a first display region 102A that encompasses all or a part of the physical display. The first display device may or may not be touch-sensitive. The second display device is located in the area where a physical keyboard is traditionally located on a laptop computer and provides a second display region 102B. The second display region might occupy the entire area where a physical keyboard is traditionally located or might only occupy a portion of that area. The second display device is equipped with sensors for detecting user input, such as single and multi-touch gestures. In the example configuration shown in FIG. 
1, the first display region 102A has a height 106A. The second display region 102B has a height 106B, which may or may not be the same as the height 106A. The first display region 102A also has a width 108A and the second display region 102B has a width 108B. The width 108B of the second display region 102B is the same as the width 108A of the first display region 102A in some configurations. The width 108B of the second display region 102B can be larger or smaller than the width 108A of the first display region 102A in other configurations. In the configuration shown in FIG. 1, the bottom edge of the first display region 102A is parallel to the top edge of the second display region 102B. The bottom edge of the first display region 102A is also adjacent to the top edge of the second display region 102B. In the example shown in FIG. 1, the housing 104A of the first display region 102A is separated from the housing 104B of the second display region by a distance 114. The distance 114 might be greater or smaller depending upon the particular mechanical connection (e.g. a hinge) connecting the housing 104A and the housing 104B. The distance 114 might also be zero in the case where two display regions are part of the same physical, potentially bendable, display screen. In the configuration illustrated in FIG. 1, the left and right edges of the second display region 102B are aligned with the left and right edges of the first display region 102A. In this manner, the second display region 102B is horizontally aligned with the first display region 102A. As will be described in greater detail below, the spatial alignment of the first display region 102A and the second display region 102B enables certain types of modeless gestures to be made in the second display region 102B with respect to corresponding areas of the first display region 102A. In the example configuration shown in FIG. 
1, the second display region 102B has been divided into areas 110A (which might be referred to herein as “the virtual trackpad area 110A”) and 110B (which might be referred to herein as “the keyboard area 110B”). The virtual trackpad area 110A can be utilized as a virtual trackpad. For example, and as shown in FIG. 1, a user might utilize a finger 116A to perform a gesture in the virtual trackpad area 110A. In this example, for instance, the user has dragged their finger horizontally in the virtual trackpad area 110A to perform a drag gesture 117. The illustrative drag gesture 117 shown in FIG. 1 causes the computing device 100 to move a cursor 118 across the first display region 102A along a path corresponding to the path of the drag gesture 117. A user can also perform a press or tap operation in the virtual trackpad area 110A with their finger 116A in order to perform an activation operation (e.g. a mouse click) at the location of the cursor 118. Other types of functionality can also be performed in the virtual trackpad area 110A (e.g. a multi-touch gesture, like a two or three finger swipe). In the example shown in FIG. 1, the virtual trackpad area 110A encompasses only a portion of the second display region 102B. In particular, the virtual trackpad area 110A has a width equal to the width 108B of the second display region 102B, but has a height 112 that is less than the height 106B of the second display region 102B. In this example, the height 112 of the virtual trackpad area 110A is selected to provide the functionality disclosed herein, while at the same time enabling a touch-sensitive virtual keyboard to be presented in a virtual keyboard area 110B in one configuration. The virtual trackpad area 110A can have a different height 112 in other configurations. 
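The drag-to-cursor behavior described above amounts to applying the finger's displacement in the virtual trackpad area, optionally scaled, to the cursor position in the first display region. The gain value in this sketch is an assumption:

```python
def move_cursor(cursor, prev, curr, gain=2.0):
    """Translate a drag in the virtual trackpad area into cursor motion in
    the first display region: the cursor follows the path of the finger,
    scaled by a gain factor (the value here is an assumption)."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    return (cursor[0] + gain * dx, cursor[1] + gain * dy)

# Dragging the finger 10 units to the right moves the cursor 20 units
# to the right at the default gain:
assert move_cursor((100, 100), (40, 50), (50, 50)) == (120.0, 100.0)
```

Applying this per sampled touch point makes the cursor trace a path corresponding to the path of the drag gesture, as in the example above.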
The virtual trackpad area 110A encompasses the entire width of the second display region 102B and is located in a portion of the second display region 102B adjacent to the first display region 102A in some configurations. Additionally, the top (i.e. the top row of keys) of the keyboard area 110B is adjacent to and parallel with the bottom edge of the virtual trackpad area 110A in the illustrated example. In other configurations, the keyboard area 110B of the computing device 100 is replaced with a physical keyboard. For example, in some configurations, a physical keyboard can be placed on top of the region normally occupied by the virtual keyboard, thereby providing the tactility of physical keys but similar functionality otherwise. It is to be appreciated that certain relative terms (e.g. height, width, top, bottom, left, right) have been utilized herein to describe the configuration of the display regions 102A and 102B shown in FIG. 1. In this regard, it is also to be appreciated that these terms have been utilized herein for ease of discussion and are not to limit the configuration of the regions 102A and 102B or the device 100. Other terms can be utilized to describe the display regions 102A and 102B and their spatial relationships to one another. Additionally, although the computing device 100 shown in FIG. 1 is illustrated in a landscape orientation, the device 100 can also be operated in a portrait configuration (i.e. by rotating the device 100 ninety degrees). Other types of multiple screen computing devices can be utilized in other configurations. Modeless Digital Pen Input to a Virtual Trackpad FIGS. 2-5 are schematic diagrams showing aspects of a mechanism disclosed herein for modeless digital pen input to a virtual trackpad. In the configuration shown in FIGS. 2-5, the virtual trackpad area 110A can receive input from a digital pen 202 without requiring a user to change the mode of operation of the virtual trackpad area 110A. 
In the configuration illustrated in FIG. 2, a user can utilize a finger 116A, or fingers, to perform gestures, such as the gesture 208, in the virtual trackpad area 110A of the second display region 102B. The illustrative gesture 208 shown in FIG. 2 causes the computing device 100 to move the cursor 118 across the first display region 102A along a path corresponding to the path of the gesture 208. A user can also perform a press or tap operation in the virtual trackpad area 110A with their finger 116A in order to perform an activation operation (e.g. a mouse click) at the location of the cursor 118. Other types of functionality can also be performed in the virtual trackpad area 110A (e.g. a multi-touch gesture, like a two or three finger swipe). Additionally, a user can write directly in the virtual trackpad area 110A at any time using a digital pen 202. The user does not need to switch modes to utilize the virtual trackpad functionality described above and the digital pen 202 in the virtual trackpad area 110A. In the example shown in FIG. 2, for instance, the user has utilized the digital pen 202 to write the digital ink 206A in the virtual trackpad area 110A without changing input modes. When input is received within the virtual trackpad area 110A, the computing device 100 determines whether the input is touch input (e.g. input generated by a user's finger 116A) or input generated by a digital pen 202. In order to make such a determination, the digital pen 202 can be communicatively coupled to the computing device 100 through a suitable interface. Through this connection, the computing device 100 can determine if the digital pen 202 has been utilized to write in the virtual trackpad area 110A. If the input is touch input (e.g. a user's finger 116A), the computing device 100 processes the received input as touch input to the virtual trackpad to move the cursor 118 or perform an activation operation, for example. 
If the input is received from a digital pen 202, the computing device 100 processes the input as digital ink. In this manner, a user can provide both touch input and input from a digital pen 202 to the virtual trackpad area 110A simultaneously, without changing from a touch input mode to a digital pen mode. Processing input received in the virtual trackpad area 110A as digital ink can include, but is not limited to, converting the digital ink to text and providing the text to a program for presentation of the text within a field of a UI presented in the first display region 102A. In the example illustrated in FIG. 3, for instance, a user has utilized the digital pen 202 to write the digital ink 206A (i.e. "Hello, World") in the virtual trackpad area 110A. In response thereto, the computing device 100 has converted the digital ink 206A to text and provided the text to an email application executing on the computing device 100. In turn, the email application has presented the recognized text in a field 304A of a UI window 302A that has been selected using the cursor 118. The recognized text can be provided to other types of programs executing on the computing device 100 or on another local or remote computing device in other configurations. Processing input received in the virtual trackpad area 110A as digital ink can further include providing the digital ink to a program for presentation of the digital ink within a field of a UI presented in the first display region 102A. In the example illustrated in FIG. 4, for instance, a user has utilized the digital pen 202 to write the digital ink 206B (i.e. "Ruby Lu") in the virtual trackpad area 110A. In response thereto, the computing device 100 has provided the digital ink 206B to a program executing on the computing device 100 that is utilized to sign documents. In turn, the program has presented the digital ink 206B in a field 304B of a UI window 302B that has been selected using the cursor 118. 
The digital ink 206B can be provided to other types of programs executing on the computing device 100 or on another local or remote computing device in other configurations. Processing input received in the virtual trackpad area 110A as digital ink can also include, in some configurations, converting the digital ink to text, recognizing a command in the text, and causing the computing device 100 to execute the command. In the example illustrated in FIG. 5, for instance, a user has utilized the digital pen 202 to write the digital ink 206C (i.e. "Call 1-(234)-555-1212") in the virtual trackpad area 110A. In response thereto, the computing device 100 has converted the digital ink 206C to text and provided the text to a communication application (not shown in the FIGS.). In turn, the communication application has presented the recognized command in a UI window 302C and performed the intended command (i.e. calling the specified number). The recognized text can be utilized to perform other types of commands on the computing device 100 or on another local or remote computing device in other configurations. Processing input received in the virtual trackpad area 110A as digital ink might additionally, or alternatively, include the performance of other functions. For example, and without limitation, processing the input as digital ink might include providing the digital ink to a default program for storing notes or digital ink executing on the computing device 100 or another computing system, or converting the digital ink to text and providing the text to a default program for storing notes executing on the computing device 100 or another computing system. The digital ink or text can be provided to such a program responsive to receiving the digital ink without additional input from the user. The computing device 100 can provide the digital ink or text recognized from the digital ink to other types of programs in other configurations. FIG. 
6 is a flow diagram showing a routine 600 that illustrates aspects of the operation of the computing device 100 shown in FIG. 1 for modeless digital pen input on a virtual trackpad as shown in FIGS. 2-5. It should be appreciated that the logical operations described herein with regard to FIG. 6, and the other FIGS., can be implemented (1) as a sequence of computer implemented acts or program modules running on a computing device and/or (2) as interconnected machine logic circuits or circuit modules within a computing device. The particular implementation of the technologies disclosed herein is a matter of choice dependent on the performance and other requirements of the computing device. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These states, operations, structural devices, acts, and modules can be implemented in hardware, in software, in firmware, in special-purpose digital logic, and in any combination thereof. It should be appreciated that more or fewer operations can be performed than shown in the FIGS. and described herein. These operations can also be performed in a different order than described herein. The routine 600 begins at operation 602, where the computing device 100 monitors the virtual trackpad area 110A for input. If input is detected in the virtual trackpad area 110A, the routine 600 proceeds from operation 604 to operation 606. At operation 606, the computing device 100 determines if the detected input is touch input (e.g. input made by a user's finger 116A). If so, the routine 600 proceeds from operation 606 to operation 610, where the computing device 100 processes the input as touch input to the virtual trackpad. For example, the computing device 100 might move the cursor 118 or perform an activation operation based on the input. The routine 600 then proceeds from operation 610 back to operation 602. 
If the received input is not touch input, the routine 600 proceeds from operation 606 to operation 608, where the computing device 100 determines if the received input is input made by a digital pen 202. If not, the routine 600 proceeds from operation 608 back to operation 602. If the input is digital pen input (i.e. digital ink), then the routine 600 proceeds from operation 608 to operation 612, where the computing device 100 processes the received input as digital ink. For example, the computing device 100 might process the digital ink according to one or more of the examples given above with regard to FIGS. 3-5. The routine 600 then proceeds from operation 612 to operation 602, described above. In this manner, the computing device 100 can process both touch input and pen input made in the virtual trackpad area 110A without requiring user input to change modes. Modeless Gestures for Summoning User Interfaces FIGS. 7-12 are schematic diagrams showing aspects of a mechanism disclosed herein for modelessly summoning the display of transient UIs in the virtual trackpad area 110A. In this configuration, the virtual trackpad area 110A can be utilized to initiate the presentation of transient UIs in the virtual trackpad area 110A for performing commands and/or viewing information without requiring a user to change the mode of operation of the virtual trackpad area 110A. In order to modelessly provide transient UIs in the virtual trackpad area 110A, the computing device 100 determines whether received touch input comprises a touch gesture originating outside an edge of the virtual trackpad area 110A of the second display region 102B and terminating inside the virtual trackpad area 110A. For instance, in the example illustrated in FIG. 7, a user has utilized a finger 116A to perform a swipe gesture 706 that originates outside the left edge of the virtual trackpad area 110A and terminates at a location within the virtual trackpad area 110A (e.g.
“an edge swipe” or a multi-finger gesture that does not involve crossing an edge of the virtual trackpad). If the computing device 100 detects a touch gesture originating outside an edge of the virtual trackpad area 110A of the second display region 102B and terminating inside the virtual trackpad area 110A, the computing device 100 can display a transient (i.e. temporary) UI in the virtual trackpad area 110A, such as the transient UI 702 shown in FIG. 7. The transient UI might be animated as it is displayed in the second display region 102B such as, for instance, by “sliding” into the virtual trackpad area 110A from the edge of the virtual trackpad area 110A where the gesture originated. In some configurations, the transient UI presented in the virtual trackpad area 110A includes selectable UI elements, such as UI buttons or other types of UI elements. In the example shown in FIG. 7, for instance, the transient UI 702 includes UI elements 704A and 704B. If the computing device 100 determines that touch input has been made in the virtual trackpad area 110A selecting one of the one or more selectable UI elements, the computing device 100 can initiate a command corresponding to the selected UI element. For instance, a command might be initiated to launch an application that can present a UI in the first display region 102A, switch to an already-launched application (i.e. to switch tasks), or to perform another function. In the example shown in FIG. 7, the UI element 704A corresponds to a web browser application and the UI element 704B corresponds to an email application. As shown in FIG. 8, a user has utilized a finger 116A to select the UI element 704B in this example. In response to detecting the selection of the UI element 704B, the computing device has launched an email application which, in turn, has presented a UI 802 in the first display region 102A. 
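The summoning test described above, a touch gesture that originates outside an edge of the virtual trackpad area and terminates inside it, can be sketched as follows. This is a minimal illustration and not the patent's implementation; the coordinate convention (y increasing downward), the rectangle representation, and the function name are assumptions.

```python
def detect_summon_gesture(path, trackpad):
    """Return the edge ('left', 'right', 'top', or 'bottom') that a
    summoning swipe crossed, or None if the gesture does not qualify.

    path     -- sequence of (x, y) touch samples, first to last
    trackpad -- (left, top, right, bottom) bounds of the virtual trackpad
    """
    if len(path) < 2:
        return None
    (sx, sy), (ex, ey) = path[0], path[-1]
    left, top, right, bottom = trackpad

    def inside(x, y):
        return left <= x <= right and top <= y <= bottom

    # A summoning gesture must start outside the area and end inside it.
    if inside(sx, sy) or not inside(ex, ey):
        return None

    # Report which edge the starting point lies beyond, so the transient
    # UI can be animated in from that edge.
    if sx < left:
        return 'left'
    if sx > right:
        return 'right'
    if sy < top:
        return 'top'
    return 'bottom'
```

A gesture starting at x = -10 and ending inside the area would be reported as a 'left' edge swipe, which could then trigger display of a transient UI sliding in from that edge.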
Other types of functions can be performed responsive to the selection of a UI element 704 in a transient UI 702 displayed within the virtual trackpad area 110A. In some configurations, such as that illustrated in FIG. 9, the display of the transient UI 702 is removed from the virtual trackpad area 110A immediately in response to the selection of one of the UI elements. In other configurations, a touch gesture can be utilized to dismiss the transient UI 702. For instance, in the example shown in FIGS. 10 and 11, a user has utilized a finger 116A to perform a swipe gesture 1004 that originates outside the virtual trackpad area 110A and terminates within the virtual trackpad area 110A. As a result, the computing system 100 has presented a transient UI 1002 in the virtual trackpad area 110A that shows notifications. In order to dismiss the transient UI 1002, the user might utilize a finger 116A, or multiple fingers, to perform a gesture for dismissing the transient UI 1002. For example, a user might perform a swipe gesture 1006 from a location inside the virtual trackpad area 110A to a location outside the virtual trackpad area 110A to dismiss the transient UI. This is illustrated in FIG. 11. Other types of gestures, such as a tap gesture outside the transient UI 1002, can also be utilized to dismiss the transient UI 1002. As when displaying the transient UI in the virtual trackpad area 110A, the display of the transient UI can also be animated as it is removed from the virtual trackpad area 110A such as, for example, by “sliding” the transient UI out of the virtual trackpad area 110A to the edge of the virtual trackpad area 110A where the gesture terminates. 
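The dismissal logic just described, a swipe from inside the virtual trackpad area to a location outside it, or a tap outside the transient UI, can be sketched as follows under the same illustrative assumptions (axis-aligned rectangles, a small movement threshold defining a tap):

```python
def should_dismiss(path, trackpad, transient_ui):
    """Decide whether a completed gesture dismisses the transient UI.

    path         -- sequence of (x, y) touch samples, first to last
    trackpad     -- (left, top, right, bottom) bounds of the trackpad area
    transient_ui -- (left, top, right, bottom) bounds of the transient UI
    """
    (sx, sy), (ex, ey) = path[0], path[-1]

    def inside(rect, x, y):
        l, t, r, b = rect
        return l <= x <= r and t <= y <= b

    # A swipe from inside the trackpad area to outside it dismisses.
    if inside(trackpad, sx, sy) and not inside(trackpad, ex, ey):
        return True

    # A tap (negligible movement) outside the transient UI also dismisses.
    is_tap = abs(ex - sx) < 5 and abs(ey - sy) < 5
    return is_tap and not inside(transient_ui, sx, sy)
```

The 5-unit tap threshold is an assumed value; a real implementation would tune it to the display's touch resolution.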
In other configurations, an input gesture made from outside an edge of the virtual trackpad area 110A to a location inside the virtual trackpad area 110A can perform functions based upon the relationship between the ending location of the gesture and UI elements presented in the first display region 102A or in the keyboard area 110B of the second display region 102B. For example, a user might set the audio output volume of the computing device 100 by swiping right from an area outside the virtual trackpad area 110A to a location within the virtual trackpad area 110A. The location at which the gesture ends with respect to a virtual keyboard (e.g. horizontally aligned) shown in the keyboard area 110B specifies the output volume. For instance, if the gesture ends at a location that is spatially aligned with the location of the ‘1’ key on the virtual keyboard, the output volume will be set to one (e.g. 10% of maximum). If the gesture ends at a location that is spatially aligned (e.g. horizontally aligned) with the location of the ‘9’ key on the virtual keyboard, the output volume will be set to nine (e.g. 90% of maximum). In this way, many different specific edge gestures along a common edge of the virtual trackpad area 110A can be disambiguated by a computing system. Such gestures can also be performed with respect to a horizontally-aligned physical keyboard. FIG. 12 is a flow diagram showing a routine 1200 that illustrates aspects of the operation of the computing device 100 shown in FIG. 1 for modelessly summoning transient UIs in the virtual trackpad area 110A as described above with reference to FIGS. 7-11. The routine 1200 begins at operation 1202, where the computing device 100 monitors the virtual trackpad area 110A for input. If input is detected in the virtual trackpad area 110A, the routine 1200 proceeds from operation 1204 to operation 1206. 
At operation 1206, the computing device 100 determines if the detected input is a touch gesture that started outside the virtual trackpad area 110A and that ended inside the virtual trackpad area 110A. If so, the routine 1200 proceeds from operation 1208 to operation 1210, where the computing device displays a transient UI in the virtual trackpad area 110A. The routine 1200 then continues from operation 1210 back to operation 1202. If the computing device 100 determines that the received input is not a gesture originating outside the virtual trackpad area 110A and ending inside the virtual trackpad area 110A, the routine 1200 proceeds from operation 1208 to operation 1212. At operation 1212, the computing device 100 determines if the detected input is a touch gesture for dismissing the transient UI, such as a swipe or tap gesture. If so, the routine 1200 proceeds from operation 1212 to operation 1214, where the computing device removes the transient UI from the virtual trackpad area 110A. The routine 1200 then continues from operation 1216 back to operation 1202. Modeless Gestures Utilizing Relative Position FIGS. 13 and 14 are schematic diagrams showing aspects of a mechanism disclosed herein for enabling modeless gestures on a virtual trackpad that have functionality that is determined based upon the starting point of the gestures relative to adjacent on-screen UI or a physical keyboard. In the example shown in FIG. 13, a first application executing on the computing device 100 presents its UI in a first portion 1302A of the first display region 102A: the left half of the first display region 102A in this configuration. A first portion 1304A of the virtual trackpad area 110A in the second display region 102B is adjacent to and spatially aligned with the first portion 1302A of the first display region 102A. In the example shown in FIG. 
13, for instance, the portion 1302A of the first display region 102A has a width 1306A that is the same as the width 1308A of the portion 1304A of the second display region 102B. In this example, therefore, the first portion 1304A of the virtual trackpad area 110A includes the left half of the second display region 102B, which is spatially aligned with the left half of the first display region 102A (i.e. the portion 1302A), which includes UI generated by a first application. In the example shown in FIG. 13, a second application executing on the computing device 100 presents its UI in a second portion 1302B of the first display region 102A: the right half of the first display region 102A in this example. A second portion 1304B of the virtual trackpad area 110A is spatially aligned with the second area 1302B of the first display region 102A. In the example shown in FIG. 13, for instance, the portion 1302B of the first display region 102A has a width 1306B that is the same as the width 1308B of the portion 1304B of the second display region 102B. In this example, therefore, the second portion 1304B of the virtual trackpad area 110A includes the right half of the second display region 102B, which is spatially aligned with the right half (i.e. the portion 1302B) of the first display region 102A. In this way, one application is assigned a portion 1302A of the first display region 102A that corresponds to a spatially aligned portion 1304A of the virtual trackpad area 110A in the second display region 102B, and another application is assigned another portion 1302B of the first display region 102A that is spatially aligned with another portion 1304B of the virtual trackpad area 110A. If the computing device 100 receives input in the first portion 1304A of the virtual trackpad area 110A, the computing device 100 provides the input to the application that is presenting its UI in the adjacent portion 1302A of the first display region 102A. 
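The routing rule described above, in which input made in a portion of the virtual trackpad is delivered to the application whose UI occupies the spatially aligned portion of the first display region, might be sketched as follows. The data shapes and names are illustrative assumptions rather than the patent's implementation.

```python
def route_input(x, portions):
    """Return the application assigned to the trackpad portion containing x.

    x        -- horizontal coordinate of the input in the trackpad area
    portions -- list of (app_name, (left, right)) horizontal spans of the
                virtual trackpad, each spatially aligned with that
                application's UI in the display region above
    """
    for app, (left, right) in portions:
        if left <= x < right:
            return app
    return None  # input fell outside every assigned portion
```

With two applications splitting the trackpad, `route_input(30, [("mail", (0, 50)), ("browser", (50, 100))])` would deliver the input to the left-hand application, mirroring the left-half/right-half example above.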
For instance, in the illustrated configuration, if input is received in the left half of the virtual trackpad area 110A (i.e. the portion 1304A), the input will be provided to the application presenting its UI in the left half (i.e. the portion 1302A) of the first display region 102A or processed by an operating system with reference to that application. The input might be, for example, a multi-touch input gesture such as that shown in FIG. 13, or input received from a digital pen. These and other types of inputs can be distinguished from traditional virtual trackpad input for controlling a cursor, for example. If the computing device 100 receives input in the second portion 1304B of the virtual trackpad area 110A, the computing device 100 provides the received input to the application that is presenting its UI in the adjacent portion of the first display region 102A. For example, if input is received in the right half of the virtual trackpad area 110A (i.e. the portion 1304B), the input will be provided to the application presenting its UI in the right half (i.e. the portion 1302B) of the first display region 102A. In this manner, user input can be provided to different applications executing on the computing device 100 without requiring a user to change the mode of the virtual trackpad area 110A. In this regard, it is to be appreciated that while the example shown in FIG. 13 includes only two applications, input can be modelessly provided to more than two applications in a similar fashion in other configurations. Input might also be provided to two or more regions presented by a single application, such as to an application that features multiple task panes or content regions in its UI. FIG. 14 illustrates another configuration in which the computing device 100 can detect a user input gesture that originates in an area outside the virtual trackpad area 110A and that terminates within the virtual trackpad area 110A. 
In the illustrated example, for instance, a vertical swipe gesture using a finger 116A originates from within the keyboard area 110B and terminates within the virtual trackpad area 110A. Responsive to detecting such a user input gesture, the computing device 100 can perform a command that is selected based, at least in part, upon the location at which the gesture originated. In the illustrated example, for instance, the gesture originates on a key of a virtual keyboard and ends in the virtual trackpad area 110A. In this example, the command to be executed is selected based upon the particular key upon which the gesture originated. For instance, if a gesture originates on the ‘1’ key, the volume of the computing device 100 might be set to 10% of its maximum. If a gesture originates on the ‘9’ key, the volume might be set to 90% of maximum. Other types of commands can be initiated and executed in a similar fashion. FIG. 15 is a flow diagram showing a routine 1500 that illustrates aspects of the operation of the computing device 100 shown in FIG. 1 for enabling modeless gestures on a virtual trackpad that have functionality determined based upon the position of the starting point of the gestures relative to adjacent on-screen UI as illustrated in FIGS. 13 and 14. The routine 1500 begins at operation 1502, where a first application presents its UI in a first portion 1302A of the first display region 102A. The routine 1500 then proceeds to operation 1504, where a second application presents its UI in a second portion 1302B of the first display region 102A. From operation 1504, the routine 1500 proceeds to operation 1506, where the computing device 100 receives input in the virtual trackpad area 110A of the second display region 102B.
The routine 1500 then proceeds to operation 1508, where the computing device 100 determines if the input was made in a portion of the virtual trackpad area 110A that is adjacent to the portion 1302A of the first display region 102A containing the UI of the first application. If so, the routine 1500 proceeds from operation 1510 to operation 1512, where the input received at operation 1506 is provided to the first application or processed relative to the first application (e.g. by an operating system). If not, the routine 1500 proceeds from operation 1510 to operation 1514. At operation 1514, the computing device 100 determines if the input was made in a portion of the virtual trackpad area 110A that is adjacent to the portion 1302B of the first display region 102A containing the UI of the second application. If so, the routine 1500 proceeds from operation 1516 to operation 1518, where the input received at operation 1506 is provided to the second application or processed relative to the second application (e.g. by an operating system). If not, the routine 1500 proceeds from operation 1516 back to operation 1506 where additional input can be received and processed in a similar fashion. Modeless Cursor Control and Interaction with Virtual Touch Targets FIGS. 16-19 are schematic diagrams showing aspects of a mechanism disclosed herein for enabling modeless cursor control and interaction with virtual touch targets. In the configuration shown in FIGS. 16-19, areas of a display region that include a virtual trackpad area 110A on a touch-sensitive display 102B can show touch targets (e.g. UI controls) for initiating various types of functionality. In the example shown in FIG. 16, for example, the size of the virtual trackpad area 110A has been reduced and several UI controls 1602A-1602E have been presented in the areas of the second display region 102B around the virtual trackpad area 110A.
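The key-aligned volume gestures described earlier with reference to FIGS. 13 and 14, in which the virtual keyboard key with which a swipe originates or is horizontally aligned selects the output volume, can be sketched as follows. The key geometry and the ten-percent-per-key scaling are assumptions drawn from the ‘1’-key and ‘9’-key examples above.

```python
def volume_from_key_alignment(x, number_row):
    """Map a gesture's horizontal position to an output volume percentage.

    x          -- horizontal coordinate at which the gesture originates or
                  ends, in the same space as the virtual keyboard
    number_row -- list of (key_label, left, right) spans describing the
                  virtual keyboard's number row
    """
    for label, left, right in number_row:
        if left <= x < right and label.isdigit():
            return int(label) * 10  # '1' -> 10% ... '9' -> 90%
    return None  # gesture not aligned with any number key
```

For example, a swipe aligned with the ‘9’ key's span would yield 90, matching the "90% of maximum" behavior described above.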
When selected, such as by using a tap gesture in the second display region 102B, the UI controls 1602A-1602E can initiate various types of functionality such as, for instance, launching a program or summoning a digital assistant on the computing device 100. Cursor control can be performed using the virtual trackpad area 110A and the UI controls 1602A-1602E can be selected without changing the mode of the virtual trackpad. For example, and as illustrated in FIG. 17, a touch gesture 1702 for controlling the cursor 118 can originate inside the virtual trackpad area 110A and terminate within one of the UI controls 1602A-1602E without causing selection of the UI control in which the gesture 1702 ends or interrupting cursor control. In this way, if a user's touch gesture inadvertently exits the virtual trackpad area 110A while controlling the cursor 118 and ends in one of the UI controls 1602A-1602E, the UI control will not be selected. Similarly, and as shown in FIG. 18, a touch gesture 1802 for controlling the cursor 118 can originate in the region that encompasses the virtual trackpad, in the UI control 1602B in this example, and end inside the virtual trackpad area 110A, or another area of the second display region 102B, without causing the selection of the UI control on which the gesture originated. In this way, a cursor control gesture, such as the gesture 1802, can be performed outside of the virtual trackpad area 110A, even if it begins at a UI control 1602. This can be useful, for example, if a user is not looking at the virtual trackpad area 110A when they begin a mouse control gesture. The starting location and timing of a user input gesture can be used to disambiguate cursor control and interactions with the UI controls without requiring a mode switch, and without interrupting cursor control. For example, if a touch gesture is detected within a UI control 1602, the gesture can be considered a selection of the UI control 1602 if the gesture ends (e.g. 
a user lifts their finger) within a certain predetermined period of time (e.g. 100 ms). If not, the gesture will be considered the start of a gesture to control the location of the cursor 118. In some configurations, a previous gesture can be considered when disambiguating a current gesture. For example, and without limitation, a cursor movement followed by a tap gesture that is performed near to the ending location of a preceding cursor movement might be interpreted as a selection, even if the user's finger is now also over a UI control. Generally, a cursor move followed by a cursor click at the same location, or a cursor move followed by a tap on a UI control somewhere else on the virtual trackpad, can be interpreted as a selection. In some configurations, the computing device can detect a user input gesture that originates within the virtual trackpad area and that terminates outside the virtual trackpad area. In this example, the computing device can perform a command that is selected based, at least in part, upon the object or location upon which the gesture terminates. In another configuration, which is illustrated in FIG. 19, physical input objects can be placed on one of several regions of the virtual trackpad area 110A to enable additional functionality such as, but not limited to, providing direct control of volume, brightness, and scrolling of the computing device 100. In the example shown in FIG. 19, for instance, a digital dial 1902 (represented in FIG. 19 as a circular outline of the portion of the digital dial that touches the second display region 102B) has been placed in the second display region 102B. The bottom portion of the digital dial 1902 is made of a suitable material for detection by the second display region 102B. In this example, a user has rotated the digital dial 1902 using a gesture 1904. Responsive to such input, the computing device 100 can adjust settings or perform other types of functionality. FIG.
20 is a flow diagram showing a routine 2000 that illustrates aspects of the operation of the computing device shown in FIG. 1 for modeless cursor control and interaction with virtual touch targets as shown in FIGS. 16-19. The routine 2000 begins at operation 2002, where one or more selectable UI controls 1602 are displayed outside or inside the virtual trackpad area 110A. The routine 2000 then proceeds to operation 2004, where the computing device 100 detects a touch input in the second display region 102B. From operation 2004, the routine 2000 proceeds to operation 2006, where the computing device 100 determines if the touch is located within one of the UI controls 1602. If so, the routine 2000 proceeds from operation 2006 to operation 2008, where the computing device 100 determines if the touch ends within a predetermined amount of time (e.g. 100 ms). If the touch has ended within the predetermined amount of time, the computing device 100 considers the UI control in which the touch is located to have been selected at operation 2012. If the touch does not end within the predetermined amount of time, the routine 2000 proceeds from operation 2010 to operation 2014, where the touch gesture is utilized to move the cursor 118. If, at operation 2006, the computing device 100 determines that the touch detected at operation 2004 was not located in a UI control 1602, the routine 2000 proceeds from operation 2006 to operation 2016. At operation 2016, the computing device 100 determines if the detected touch is located in the virtual trackpad area 110A. If so, the computing device 100 moves the cursor 118 based on the detected gesture. If not, the routine 2000 proceeds back to operation 2004, where additional gestures can be detected and processed in the manner described above. FIG. 21 is a computer architecture diagram showing an illustrative computer hardware and software architecture for a computing device that can implement the various technologies presented herein.
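The timing rule at the heart of routine 2000, treating a touch that begins on a UI control as a selection only if it lifts within a short window and as cursor control otherwise, might be sketched as follows. The class and method names are illustrative assumptions; the 100 ms threshold is taken from the example above.

```python
import time

class TouchDisambiguator:
    """Sketch of the timing-based disambiguation described for routine
    2000: a touch on a UI control counts as a selection only if it ends
    within SELECT_WINDOW; otherwise it is treated as cursor movement."""

    SELECT_WINDOW = 0.100  # seconds (100 ms, per the example above)

    def __init__(self):
        self._down_time = None
        self._down_on_control = False

    def touch_down(self, on_control, now=None):
        # Record when the touch began and whether it began on a control.
        self._down_time = time.monotonic() if now is None else now
        self._down_on_control = on_control

    def touch_up(self, now=None):
        # Classify the completed touch based on where it began and how
        # long it was held.
        now = time.monotonic() if now is None else now
        held = now - self._down_time
        if self._down_on_control and held <= self.SELECT_WINDOW:
            return 'select'  # quick tap on a control: activate it
        return 'cursor'      # otherwise: cursor movement, uninterrupted
```

Because classification is deferred until the touch ends (or the window expires), a drag that merely begins on a control still moves the cursor, matching the behavior shown in FIG. 18.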
In particular, the architecture illustrated in FIG. 21 can be utilized to implement the multiple screen computing device 100 described herein. The illustrated architecture can also be utilized to implement other types of computing systems. The computer 2100 illustrated in FIG. 21 includes a central processing unit 2102 (“CPU”), a system memory 2104, including a random-access memory 2106 (“RAM”) and a read-only memory (“ROM”) 2108, and a system bus 2110 that couples the memory 2104 to the CPU 2102. A basic input/output system (“BIOS” or “firmware”) containing the basic routines that help to transfer information between elements within the computer 2100, such as during startup, can be stored in the ROM 2108. The computer 2100 further includes a mass storage device 2112 for storing an operating system 2122, application programs, and other types of programs. The functionality described above for augmenting a virtual trackpad is implemented by one or more of these programs in various configurations. The mass storage device 2112 can also be configured to store other types of programs and data. The mass storage device 2112 is connected to the CPU 2102 through a mass storage controller (not shown) connected to the bus 2110. The mass storage device 2112 and its associated computer readable media provide non-volatile storage for the computer 2100. Although the description of computer readable media contained herein refers to a mass storage device, such as a hard disk, CD-ROM drive, DVD-ROM drive, or USB storage key, it should be appreciated by those skilled in the art that computer readable media can be any available computer storage media or communication media that can be accessed by the computer 2100. Communication media includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any delivery media. 
The term “modulated data signal” means a signal that has one or more of its characteristics changed or set in a manner so as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media. By way of example, and not limitation, computer storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. For example, computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, digital versatile disks (“DVD”), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and which can be accessed by the computer 2100. For purposes of the claims, the phrase “computer storage medium,” and variations thereof, does not include waves or signals per se or communication media. According to various configurations, the computer 2100 can operate in a networked environment using logical connections to remote computers through a network such as the network 2120. The computer 2100 can connect to the network 2120 through a network interface unit 2116 connected to the bus 2110. It should be appreciated that the network interface unit 2116 can also be utilized to connect to other types of networks and remote computer systems.
The computer 2100 can also include an input/output controller 2118 for receiving and processing input from a number of other devices, including a keyboard, mouse, touch input, a digital pen 202, or a physical sensor such as a video camera. Similarly, the input/output controller 2118 can provide output to one or more display screens, such as the first display region 102A and the second display region 102B. As discussed above, the first display region 102A and the second display region 102B are output devices configured to present information in a visual form. In particular, the first display region 102A and the second display region 102B can present graphical user interface (“GUI”) elements, text, images, video, notifications, virtual buttons, virtual keyboards, messaging data, Internet content, device status, time, date, calendar data, preferences, map information, location information, and any other information that is capable of being presented in a visual form. In some configurations, the first display region 102A and the second display region 102B are liquid crystal displays (“LCD”) utilizing any active or passive matrix technology and any backlighting technology (if used). In some configurations, the first display region 102A and the second display region 102B are organic light emitting diode (“OLED”) displays. Other display types are contemplated. As also discussed above, the second display region 102B can be a touch-sensitive display that is configured to detect the presence and location of a touch. Such a display can be a resistive touchscreen, a capacitive touchscreen, a surface acoustic wave touchscreen, an infrared touchscreen, an optical imaging touchscreen, a dispersive signal touchscreen, an acoustic pulse recognition touchscreen, or can utilize any other touchscreen technology.
In some configurations, the touchscreen is incorporated on top of a display as a transparent layer to enable a user to use one or more touches to interact with objects or other information presented on the display. The second display region 102B can be configured to detect multiple touches simultaneously. In some configurations, the second display region 102B is configured to detect discrete touches, single touch gestures, and/or multi-touch gestures. These are collectively referred to herein as “gestures” for convenience. Several gestures will now be described. It should be understood that these gestures are illustrative and are not intended to limit the scope of the appended claims. Moreover, the described gestures, additional gestures, and/or alternative gestures can be implemented in software for use with the second display region 102B. As such, a developer can create gestures that are specific to a particular application program. In some configurations, the second display region 102B supports a tap gesture in which a user taps the second display region 102B once on an item presented in the second display region 102B. In some configurations, the second display region 102B supports a double tap gesture in which a user taps the second display region 102B twice on an item presented in the second display region 102B. The double tap gesture can be used for various reasons including, but not limited to, zooming in or zooming out in stages. In some configurations, the second display region 102B supports a tap and hold gesture in which a user taps the second display region 102B and maintains contact for at least a pre-defined time. The tap and hold gesture can be used for various reasons including, but not limited to, opening a context-specific menu. 
In some configurations, the second display region 102B supports a pan gesture in which a user places a finger in the second display region 102B and maintains contact with the second display region 102B while moving the finger in the second display region 102B. The pan gesture can be used for various reasons including, but not limited to, moving through screens, images, or menus at a controlled rate. Multiple finger pan gestures are also contemplated. In some configurations, the second display region 102B supports a flick gesture in which a user swipes a finger in the direction the user wants the screen to move. The flick gesture can be used for various reasons including, but not limited to, scrolling horizontally or vertically through menus or pages. In some configurations, the second display region 102B supports a pinch and stretch gesture in which a user makes a pinching motion with two fingers (e.g., thumb and forefinger) in the second display region 102B or moves the two fingers apart. The pinch and stretch gesture can be used for various reasons including, but not limited to, zooming gradually in or out of a website, map, or picture. Although the gestures described above have been presented with reference to the use of one or more fingers for performing the gestures, other implements, such as digital pens, can be used to interact with the second display region 102B. As such, the above gestures should be understood as being illustrative and should not be construed as being limiting in any way. It should be appreciated that the software components described herein, when loaded into the CPU 2102 and executed, can transform the CPU 2102 and the overall computer 2100 from a general-purpose computing device into a special-purpose computing device customized to facilitate the functionality presented herein.
The CPU 2102 can be constructed from any number of transistors or other discrete circuit elements, which can individually or collectively assume any number of states. More specifically, the CPU 2102 can operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions can transform the CPU 2102 by specifying how the CPU 2102 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the CPU 2102. Encoding the software modules presented herein can also transform the physical structure of the computer readable media presented herein. The specific transformation of physical structure depends on various factors, in different implementations of this description. Examples of such factors include, but are not limited to, the technology used to implement the computer readable media, whether the computer readable media is characterized as primary or secondary storage, and the like. For example, if the computer readable media is implemented as semiconductor-based memory, the software disclosed herein can be encoded on the computer readable media by transforming the physical state of the semiconductor memory. For instance, the software can transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software can also transform the physical state of such components in order to store data thereupon. As another example, the computer readable media disclosed herein can be implemented using magnetic or optical technology. In such implementations, the software presented herein can transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations can include altering the magnetic characteristics of particular locations within given magnetic media. 
These transformations can also include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion. In light of the above, it should be appreciated that many types of physical transformations take place in the computer 2100 in order to store and execute the software components presented herein. It also should be appreciated that the architecture shown in FIG. 21 for the computer 2100, or a similar architecture, can be utilized to implement other types of computing devices, including hand-held computers, video game devices, embedded computer systems, mobile devices such as smartphones, tablets, and AR/VR devices, and other types of computing devices known to those skilled in the art. It is also contemplated that the computer 2100 might not include all of the components shown in FIG. 21, can include other components that are not explicitly shown in FIG. 21, or can utilize an architecture completely different than that shown in FIG. 21. It should be appreciated that the computing architecture shown in FIG. 21 has been simplified for ease of discussion. It should also be appreciated that the illustrated computing architecture can include and utilize many more computing components, devices, software programs, networking devices, and other components not specifically described herein. The disclosure presented herein also encompasses the subject matter set forth in the following clauses: Clause 1. 
A computing device, comprising: a processor; a first display device providing a first display region having an edge; a touch-sensitive display device providing a second display region having an edge parallel with and adjacent to the edge of the first display region; and a memory storing instructions executable by the processor to: execute a first application configured to present a user interface (UI) in a first portion of the first display region, the first portion of the first display region being spatially aligned with a first portion of the second display region; execute a second application configured to present a UI in a second portion of the first display region, the second portion of the first display region being spatially aligned with a second portion of the second display region; responsive to receiving first input in the first portion of the second display region, process the first input by way of an operating system in a context of the first application or provide the first input to the first application; and responsive to receiving second input in the second portion of the second display region, process the second input by way of an operating system in a context of the second application or provide the second input to the second application. Clause 2. The computing device of clause 1, wherein the first input or second input comprise a multi-touch gesture. Clause 3. The computing device of any of clauses 1-2, wherein the first input or second input comprise user input received from a digital pen. Clause 4. 
The computing device of any of clauses 1-3, wherein the memory stores further instructions executable by the processor to: detect a user input gesture originating at a location in a third portion of the second display region and terminating in the first portion of the second display region or the second portion of the second display region; and responsive to detecting the user input gesture, perform a command, the command selected based, at least in part, upon the location in the third portion at which the user input gesture originated. Clause 5. The computing device of any of clauses 1-4, wherein the third portion displays a keyboard having a plurality of keys, and wherein the gesture originates at one of the plurality of keys. Clause 6. The computing device of any of clauses 1-5, wherein the first portion of the first display region and the first portion of the second display region have a same width. Clause 7. The computing device of any of clauses 1-6, wherein the second portion of the first display region and the second portion of the second display region have a same width. Clause 8. 
A computer-implemented method, comprising: executing a first application configured to present a user interface (UI) in a first portion of a first display region, the first portion of the first display region being spatially aligned with a first portion of a second display region; executing a second application configured to present a UI in a second portion of the first display region, the second portion of the first display region being spatially aligned with a second portion of the second display region; responsive to receiving first input in the first portion of the second display region, processing the first input in a context of the first application or providing the first input to the first application; and responsive to receiving second input in the second portion of the second display region, processing the second input in a context of the second application or providing the second input to the second application. Clause 9. The computer-implemented method of clause 8, wherein the first input or second input comprise a multi-touch gesture. Clause 10. The computer-implemented method of any of clauses 8-9, wherein the first input or second input comprise user input received from a digital pen. Clause 11. The computer-implemented method of any of clauses 8-10, further comprising: detecting a user input gesture originating at a location in a third portion of the second display region and terminating in the first portion of the second display region or the second portion of the second display region; and responsive to detecting the user input gesture, performing a command, the command selected based, at least in part, upon the location in the third portion at which the user input gesture originated. Clause 12. The computer-implemented method of any of clauses 8-11, wherein the third portion displays a keyboard having a plurality of keys, and wherein the gesture originates at one of the plurality of keys. Clause 13. 
The computer-implemented method of any of clauses 8-12, wherein the first portion of the first display region and the first portion of the second display region have a same width. Clause 14. The computer-implemented method of any of clauses 8-13, wherein the second portion of the first display region and the second portion of the second display region have a same width. Clause 15. A computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by a computer, cause the computer to: execute a first application configured to present a user interface (UI) in a first portion of a first display region, the first portion of the first display region being spatially aligned with a first portion of a second display region; and responsive to receiving first input in the first portion of the second display region, process the first input in a context of the first application or provide the first input to the first application. Clause 16. The computer-readable storage medium of clause 15, having further computer-executable instructions stored thereupon to: execute a second application configured to present a UI in a second portion of the first display region, the second portion of the first display region being spatially aligned with a second portion of the second display region; and responsive to receiving second input in the second portion of the second display region, process the second input in a context of the second application or provide the second input to the second application. Clause 17. The computer-readable storage medium of any of clauses 15-16, wherein the first input or second input comprise a multi-touch gesture. Clause 18. The computer-readable storage medium of any of clauses 16-17, wherein the first input or second input comprise user input received from a digital pen. Clause 19. 
The computer-readable storage medium of any of clauses 16-18, having further computer-executable instructions stored thereupon to: detect a user input gesture originating at a location in a third portion of the second display region and terminating in the first portion of the second display region or the second portion of the second display region; and responsive to detecting the user input gesture, perform a command, the command selected based, at least in part, upon the location in the third portion at which the user input gesture originated. Clause 20. The computer-readable storage medium of any of clauses 16-19, wherein the first portion of the first display region and the first portion of the second display region have a first width and wherein the second portion of the first display region and the second portion of the second display region have a second width. Based on the foregoing, it should be appreciated that technologies for modelessly augmenting the functionality provided by virtual trackpads have been disclosed herein. Although the subject matter presented herein has been described in language specific to computer structural features, methodological and transformative acts, specific computing machinery, and computer readable media, it is to be understood that the subject matter set forth in the appended claims is not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts and mediums are disclosed as example forms of implementing the claimed subject matter. The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes can be made to the subject matter described herein without following the example configurations and applications illustrated and described, and without departing from the scope of the present disclosure, which is set forth in the following claims. 
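Clause 1 above routes input received in the second display region to the application whose portion of the first display region is spatially aligned with it. A minimal sketch of that routing, assuming portions are represented as horizontal x-ranges; the class, field, and function names here are all hypothetical, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class Portion:
    """A horizontal span of the second display region, spatially aligned
    with the portion of the first display region used by one application."""
    app: str
    x_start: int
    x_end: int  # exclusive

def route_input(portions, x):
    """Return the application that should receive input at horizontal
    position x in the second display region, or None for unclaimed space."""
    for p in portions:
        if p.x_start <= x < p.x_end:
            return p.app
    return None
```

For example, with a first portion spanning x = 0-960 for one application and a second spanning x = 960-1920 for another, a touch at x = 100 would be processed in the context of the first application.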
16207840 microsoft technology licensing, llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:00AM Apr 27th, 2022 09:00AM Technology Software & Computer Services Information Technology
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Sep 24th, 2019 12:00AM https://www.uspto.gov?id=US11314624-20220426 Reducing trace recording overheads with targeted recording via partial snapshots Performing a targeted partial recording of an executable entity includes executing the executable entity at a processor. While executing the executable entity, it is determined that a target chunk of executable instructions is to be executed as part of the execution of the executable entity. Each input to the target chunk of executable instructions is identified, including identifying at least one non-parameter input. A corresponding value for each identified input is recorded into a trace, along with information identifying the target chunk of executable instructions. 11314624 1. A method, implemented at a computer system that includes at least one processor, for performing a targeted partial recording of an executable entity, the method comprising: executing the executable entity at the at least one processor; while executing the executable entity, determining that a target chunk of executable instructions that is to be executed as part of the execution of the executable entity is to be recorded during the execution of the executable entity; prior to executing the target chunk of executable instructions, identifying one or more inputs of the target chunk of executable instructions, including identifying at least one non-parameter input of the target chunk of executable instructions; and recording a corresponding value for each identified input into a partial recording of the execution of the executable entity, along with information identifying the target chunk of executable instructions; and after recording the corresponding value for each identified input, executing the target chunk of executable instructions at the at least one processor. 2. 
The method of claim 1, further comprising recording information usable to validate whether a replayed execution of the target chunk of executable instructions deviated from execution of the target chunk of executable instructions as part of the execution of the executable entity, the information comprising at least one of: a corresponding value for each output of execution of the target chunk of executable instructions; one or more hashes based on the corresponding value for each output of the execution of the target chunk of executable instructions; or a partial trace of the execution of the target chunk of executable instructions, including at least one of: one or more snapshots of processor state, one or more hashes of processor state, a control flow trace, or one or more processor event based samples. 3. The method of claim 1, wherein identifying the one or more inputs of the target chunk of executable instructions is based on having performed at least one of: a static analysis of the target chunk of executable instructions; a static analysis of a recorded execution of the target chunk of executable instructions; an emulation of the target chunk of executable instructions; or an analysis of at least one of debugging symbols or code annotations. 4. The method of claim 1, further comprising: priming a cache with cache entries covering each identified input; and after priming the cache, emulating execution of the target chunk of executable instructions while recording the emulated execution of the target chunk of executable instructions into the partial recording of the execution of the executable entity. 5. The method of claim 4, further comprising, while emulating the execution of the target chunk of executable instructions, deferring validation of one or more primed cache entries with a backing memory until a synchronization event. 6. 
The method of claim 5, further comprising, in connection with reaching the synchronization event, performing at least one of: validating each of the one or more primed cache entries with the backing memory; or tagging each of the one or more primed cache entries for a lazy validation. 7. The method of claim 1, further comprising: executing the target chunk of executable instructions at the at least one processor; forking execution of the executable entity, and executing a forked target chunk of executable instructions; and comparing outputs of executing the target chunk of executable instructions with outputs of executing the forked target chunk of executable instructions, to determine if the execution of the forked target chunk of executable instructions deviated from the execution of the target chunk of executable instructions. 8. The method of claim 1, further comprising: marking one or more page table entries (PTEs) corresponding to the one or more inputs of the target chunk of executable instructions as invalid for an executable entity other than the executable entity; and based on marking the one or more PTEs, detecting an access by the executable entity other than the executable entity; and based on detecting the access, determining that the executable entity other than the executable entity interfered with an input of the target chunk of executable instructions. 9. The method of claim 1, further comprising: marking one or more page table entries (PTEs) not corresponding to the one or more inputs of the target chunk of executable instructions as invalid for the executable entity; and based on marking the one or more PTEs, detecting an access by the executable entity; and based on detecting the access, determining that the identified one or more inputs of the target chunk of executable instructions were incomplete for the target chunk of executable instructions. 10. 
A computer system comprising: a processor; and a computer-readable medium having stored thereon computer-executable instructions that are executable by the processor to cause the computer system to perform a targeted partial recording of an executable entity, the computer-executable instructions including instructions that are executable by the processor to cause the computer system to at least: execute the executable entity at the processor; while executing the executable entity, determine that a target chunk of executable instructions that is to be executed as part of the execution of the executable entity is to be recorded during the execution of the executable entity; prior to executing the target chunk of executable instructions, identify one or more inputs of the target chunk of executable instructions, including identifying at least one non-parameter input of the target chunk of executable instructions; and record a corresponding value for each identified input into a partial recording of the execution of the executable entity, along with information identifying the target chunk of executable instructions; and after recording the corresponding value for each identified input, execute the target chunk of executable instructions at the processor. 11. 
The computer system of claim 10, the computer-executable instructions also including instructions that are executable by the processor to cause the computer system to record information usable to validate whether a replayed execution of the target chunk of executable instructions deviated from execution of the target chunk of executable instructions as part of the execution of the executable entity, the information comprising at least one of: a corresponding value for each output of execution of the target chunk of executable instructions; one or more hashes based on the corresponding value for each output of the execution of the target chunk of executable instructions; or a partial trace of the execution of the target chunk of executable instructions, including at least one of: one or more snapshots of processor state, one or more hashes of processor state, a control flow trace, or one or more processor event based samples. 12. The computer system of claim 10, wherein identifying the one or more inputs of the target chunk of executable instructions is based on having performed at least one of: a static analysis of the target chunk of executable instructions; a static analysis of a recorded execution of the target chunk of executable instructions; an emulation of the target chunk of executable instructions; or an analysis of at least one of debugging symbols or code annotations. 13. The computer system of claim 10, the computer-executable instructions also including instructions that are executable by the processor to cause the computer system to: prime a cache with cache entries covering each identified input; and after priming the cache, emulate execution of the target chunk of executable instructions while recording the emulated execution of the target chunk of executable instructions into the partial recording of the execution of the executable entity. 14. 
The computer system of claim 13, the computer-executable instructions also including instructions that are executable by the processor to cause the computer system to, while emulating the execution of the target chunk of executable instructions, defer validation of one or more primed cache entries with a backing memory until a synchronization event. 15. The computer system of claim 10, the computer-executable instructions also including instructions that are executable by the processor to cause the computer system to: execute the target chunk of executable instructions at the processor; fork execution of the executable entity, and execute a forked target chunk of executable instructions; and compare outputs of executing the target chunk of executable instructions with outputs of executing the forked target chunk of executable instructions, to determine if the execution of the forked target chunk of executable instructions deviated from the execution of the target chunk of executable instructions. 16. The computer system of claim 10, the computer-executable instructions also including instructions that are executable by the processor to cause the computer system to: mark one or more page table entries (PTEs) corresponding to the one or more inputs of the target chunk of executable instructions as invalid for an executable entity other than the executable entity; and based on marking the one or more PTEs, detect an access by the executable entity other than the executable entity; and based on detecting the access, determine that the executable entity other than the executable entity interfered with an input of the target chunk of executable instructions. 17. 
The computer system of claim 10, the computer-executable instructions also including instructions that are executable by the processor to cause the computer system to: mark one or more page table entries (PTEs) not corresponding to the one or more inputs of the target chunk of executable instructions as invalid for the executable entity; and based on marking the one or more PTEs, detect an access by the executable entity; and based on detecting the access, determine that the identified one or more inputs of the target chunk of executable instructions were incomplete for the target chunk of executable instructions. 18. A computer program product comprising a hardware storage device having stored thereon computer-executable instructions that are executable by a processor to cause a computer system to perform a targeted partial recording of an executable entity, the computer-executable instructions including instructions that are executable by the processor to cause the computer system to at least: execute the executable entity at the processor; while executing the executable entity, determine that a target chunk of executable instructions that is to be executed as part of the execution of the executable entity is to be recorded during the execution of the executable entity; prior to executing the target chunk of executable instructions, identify one or more inputs of the target chunk of executable instructions, including identifying at least one non-parameter input of the target chunk of executable instructions; and record a corresponding value for each identified input into a partial recording of the execution of the executable entity, along with information identifying the target chunk of executable instructions; after recording the corresponding value for each identified input into the partial recording of the execution of the executable entity, execute the target chunk of executable instructions at the processor, and perform at least one of: recording a 
corresponding value for each output of execution of the target chunk of executable instructions; recording one or more hashes based on the corresponding value for each output of the execution of the target chunk of executable instructions; recording a partial trace of the execution of the target chunk of executable instructions, including at least one of: one or more snapshots of processor state, one or more hashes of processor state, a control flow trace, or one or more processor event based samples; or forking execution of the executable entity, execute a forked target chunk of executable instructions, and compare outputs of executing the target chunk of executable instructions with outputs of executing the forked target chunk of executable instructions, to determine if the execution of the forked target chunk of executable instructions deviated from the execution of the target chunk of executable instructions. 18 CROSS-REFERENCE TO RELATED APPLICATIONS Not Applicable. BACKGROUND Tracking down and correcting undesired software behaviors is a core activity in software development. Undesired software behaviors can include many things, such as execution crashes, runtime exceptions, slow execution performance, incorrect data results, data corruption, and the like. Undesired software behaviors might be triggered by a vast variety of factors such as data inputs, user inputs, race conditions (e.g., when accessing shared resources), etc. Given the variety of triggers, undesired software behaviors can be rare and seemingly random, and extremely difficult to reproduce. As such, it can be very time-consuming and difficult for a developer to identify a given undesired software behavior. Once an undesired software behavior has been identified, it can again be time-consuming and difficult to determine its root cause(s). 
Developers have conventionally used a variety of approaches to identify undesired software behaviors, and to then identify the location(s) in an application's code that cause the undesired software behavior. For example, a developer might test different portions of an application's code against different inputs (e.g., unit testing). As another example, a developer might reason about execution of an application's code in a debugger (e.g., by setting breakpoints/watchpoints, by stepping through lines of code, etc. as the code executes). As another example, a developer might observe code execution behaviors (e.g., timing, coverage) in a profiler. As another example, a developer might insert diagnostic code (e.g., trace statements) into the application's code. While conventional diagnostic tools (e.g., debuggers, profilers, etc.) have operated on “live” forward-executing code, an emerging form of diagnostic tools enable “historic” debugging (also referred to as “time travel” or “reverse” debugging), in which the execution of at least a portion of a program's thread(s) is recorded into one or more trace files (i.e., a recorded execution). Using some tracing techniques, a recorded execution can contain “bit-accurate” historic trace data, which enables the recorded portion(s) of the traced thread(s) to be virtually “replayed” (e.g., via emulation) down to the granularity of individual instructions (e.g., machine code instructions, intermediate language code instructions, etc.). Thus, using “bit-accurate” trace data, diagnostic tools can enable developers to reason about a recorded prior execution of subject code, as opposed to a “live” forward execution of that code. For example, a historic debugger might provide user experiences that enable both forward and reverse breakpoints/watchpoints, that enable code to be stepped through both forwards and backwards, etc. 
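The "bit-accurate" tracing described above records initial processor state plus the values a thread reads from memory, so that replay can later re-execute the same instructions by supplying the recorded reads. A toy sketch of that record/replay loop, with a one-line stand-in for a traced thread; all class and function names here are illustrative, not an actual tracer's API:

```python
class RecordingMemory:
    """Wraps live memory; logs every value read so replay can reproduce it."""
    def __init__(self, memory):
        self.memory = memory
        self.read_log = []
    def read(self, addr):
        value = self.memory[addr]
        self.read_log.append(value)
        return value

class ReplayMemory:
    """Supplies reads from the recorded log instead of live memory."""
    def __init__(self, read_log):
        self.reads = iter(read_log)
    def read(self, addr):
        return next(self.reads)

def run(mem, regs):
    """A stand-in 'thread': sums two memory cells into a register."""
    regs["r0"] = mem.read(0x10) + mem.read(0x14)
    return regs

# Record: execute against live memory, capturing initial registers and reads.
live = RecordingMemory({0x10: 3, 0x14: 4})
initial_regs = {"r0": 0}
recorded_regs = run(live, dict(initial_regs))

# Replay: re-execute from the initial registers, fed only the recorded reads.
replayed_regs = run(ReplayMemory(live.read_log), dict(initial_regs))
assert replayed_regs == recorded_regs
```

Because replay is driven entirely by the initial snapshot and the read log, the backing memory is not needed at replay time, which is what makes emulation-based replay possible from a trace file alone.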
A historic profiler, on the other hand, might be able to derive code execution behaviors (e.g., timing, coverage) from prior-executed code. As an example, a tracer might emulate execution of a subject thread, while recording information sufficient to reproduce initial processor state for at least one point in a thread's prior execution (e.g., by recording a snapshot of processor registers), along with the data values that were read by the thread's instructions as they executed after that point in time (e.g., the memory reads). This bit-accurate trace can then be used to replay execution of the thread's code instructions (starting with the initial processor state) based on supplying the instructions with the recorded reads. Such trace recording can introduce significant overheads on execution of the subject thread or threads. For instance, to accomplish recording of a thread, the thread may need to be executed via emulation, rather than directly on a processor, in order to observe and record the thread's reads from memory and/or registers. Additionally, significant challenges arise when recording multi-threaded applications in this manner, since those threads can interact with one another via shared memory. In order to overcome these challenges, many trace recorders emulate multiple threads of an application by executing each thread one-by-one in a linear, rather than a parallel, manner—essentially forcing these applications to execute single-threaded. While this eliminates many of the challenges arising from recording multi-threaded applications, it imposes a significant performance penalty on those applications—both in terms of executing them one thread at a time, and in terms of executing them via emulation rather than directly on a processor. BRIEF SUMMARY At least some embodiments described herein reduce the overheads of trace recording by performing a limited recording of an entity (e.g., a process, a thread, etc.) 
based on recording only targeted code portions of the entity. These targeted recording techniques rely on identifying all of the inputs to a targeted code portion, and using a snapshot of those inputs to record the targeted code portion. These targeted recording techniques can eliminate the need to emulate execution of any portion of the entity during trace recording, or reduce the overheads of emulation if it is performed, while still providing the ability to replay execution of the targeted portion(s) of the entity later. In embodiments, these targeted recording techniques balance a tradeoff between reducing tracing overheads and the absolute accuracy of the resulting trace. Some embodiments are directed to performing a targeted partial recording of an executable entity. These embodiments execute the executable entity at a processor. While executing the executable entity, these embodiments determine that a target chunk of executable instructions is to be executed as part of the execution of the executable entity. These embodiments identify each input to the target chunk of executable instructions, including identifying at least one non-parameter input, and then record a corresponding value for each identified input into a trace, along with information identifying the target chunk of executable instructions. This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. 
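The targeted partial recording summarized above can be sketched as: snapshot every identified input (parameters plus non-parameter state such as globals), log the snapshot with an identifier for the chunk, then let the chunk run normally; replay later restores the snapshot and re-invokes the chunk. The following is a hypothetical illustration under those assumptions, not the embodiments' actual implementation:

```python
GLOBAL_SCALE = 10  # a non-parameter input read by the target chunk

def target_chunk(x):
    # Reads both its parameter and module-level state.
    return x * GLOBAL_SCALE

def record_chunk(trace, chunk, args, non_parameter_inputs):
    """Snapshot every identified input, append it to the trace with an
    identifier for the chunk, then let the chunk execute normally."""
    trace.append({
        "chunk": chunk.__name__,
        "args": list(args),
        "snapshot": dict(non_parameter_inputs),  # partial snapshot of inputs
    })
    return chunk(*args)  # executed directly, not emulated

def replay_chunk(entry, chunk_table):
    """Re-run the chunk later by restoring its recorded inputs."""
    globals().update(entry["snapshot"])
    return chunk_table[entry["chunk"]](*entry["args"])

trace = []
live_result = record_chunk(trace, target_chunk, (4,),
                           {"GLOBAL_SCALE": GLOBAL_SCALE})
replayed = replay_chunk(trace[0], {"target_chunk": target_chunk})
assert replayed == live_result == 40
```

Note that recording happens entirely before the chunk executes, which is why the chunk itself can run at full speed on the processor rather than under emulation.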
BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

FIG. 1A illustrates an example computing environment that facilitates performing targeted partial recordings of executable entities;

FIG. 1B illustrates an example tracing component that performs a limited recording of an entity based on recording only targeted code portions of the entity;

FIG. 2 illustrates an example of forking a thread to verify if a data race occurred on an original thread;

FIG. 3 illustrates an example of priming a cache as part of performing an emulation of a target code portion, and of deferred cache entry validation; and

FIG. 4 illustrates a flow chart of an example method for performing a targeted partial recording of an executable entity.

DETAILED DESCRIPTION

At least some embodiments described herein reduce the overheads of trace recording by performing a limited recording of an entity (e.g., a process, a thread, etc.) based on recording only targeted code portions of the entity. These targeted recording techniques rely on identifying all of the inputs to a targeted code portion, and using a snapshot of those inputs to record the targeted code portion.
These targeted recording techniques can eliminate the need to emulate execution of any portion of the entity during trace recording, or reduce the overheads of emulation if it is performed, while still providing the ability to replay execution of the targeted portion(s) of the entity later. In embodiments, these targeted recording techniques balance a tradeoff between reducing tracing overheads and the absolute accuracy of the resulting trace. Some embodiments trace a targeted code portion by identifying each input consumed by the targeted code portion, and by recording a partial snapshot comprising all of those inputs. These embodiments let the targeted code portion execute normally—without emulating the targeted code portion—which greatly reduces the overheads of recording the code portion as compared to prior techniques. The traced code portion can be replayed via emulation later, based on supplying it with the recorded inputs. In multithreaded recording environments, a recording of a target code portion executing on a first thread that is recorded using these techniques could potentially miss recording of an interference by a second thread (e.g., if the second thread modifies one or more of the target code portion's inputs). To address these situations, embodiments might use one or more techniques to detect when a recording of a target code portion does not accurately capture the code portion's actual execution at recording time. For example, one technique might fork the executing entity during recording, causing a forked entity to execute a copy of the code portion using a memory space that is separate from a memory space used by the original entity and original code portion. Since the forked entity is executing in a separate memory space, any interference by other threads with the original code portion's inputs should not affect the forked copy of those inputs.
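The fork-and-compare idea can be sketched in Python on a POSIX system, where `os.fork` gives the child a separate copy of the parent's memory space. The function `code_portion` and the pipe-based result exchange are hypothetical stand-ins for the recorded target code portion, not the patent's actual mechanism:

```python
import os

def code_portion(inputs):
    # Hypothetical target code portion: consumes its inputs, produces an output.
    return sum(inputs) * 2

def record_with_fork_check(inputs):
    """Run the code portion in the original process and, in parallel, in a
    forked copy whose separate memory space other threads cannot touch.
    Returns (output, trustworthy), where trustworthy is True when the
    forked run produced the same output as the original run."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Forked entity: execute a copy of the code portion against the
        # forked copy of the inputs, and report the output to the parent.
        os.close(r)
        os.write(w, str(code_portion(inputs)).encode())
        os._exit(0)
    os.close(w)
    original_output = code_portion(inputs)  # may race with other threads
    forked_output = int(os.read(r, 64).decode())
    os.close(r)
    os.waitpid(pid, 0)
    # Same outputs => no interference (or none that mattered), so the
    # recording of this code portion might be deemed trustworthy.
    return original_output, original_output == forked_output
```

In this single-threaded sketch the outputs always match; in a real recorder, a mismatch would flag the snapshot of this code portion as untrustworthy.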
This technique can then compare the outputs of executing the original code portion with the outputs of executing the forked code portion. If the outputs are the same, there was likely no interference from other threads, or, if there was interference, that interference had no effect on the code portion's outputs. Thus, the recording of that code portion might be deemed trustworthy. If the outputs differ, there was likely an interference from other threads that affected the code portion's outputs, and the recording of that code portion might be deemed untrustworthy. Another technique might record, into the trace, information indicative of the target code portion's outputs. For instance, this technique might record the value of each output, record a hash of each output, record a hash over a plurality of outputs, etc. Then, after a later replay of the target code portion based on the recorded inputs snapshot, these techniques can compare the output(s) generated by the target code portion during replay with the recorded information indicative of the target code portion's outputs. If they are the same, then the replay might be deemed to reliably represent the original execution. If they are not, then the replay might be deemed unreliable. Another technique might record, into the trace, information at least partially indicative of processor state during execution of the target code portion. For example, information indicative of processor state could include at least a portion of a control flow trace (sometimes also referred to as a branch trace). For instance, a control flow trace could comprise a trace generated by INTEL Processor Trace (IPT) or similar technologies. Then, during replay of the target code portion, this control flow trace could be compared to control flow observed during the replay to determine whether or not the replayed code flow matches the original code flow (and, by extension, whether or not the replay is reliable).
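The output-hash variant above can be sketched as follows; the trace layout and the choice of hashing a sorted location-to-value mapping are illustrative assumptions, not the patent's actual trace format:

```python
import hashlib

def hash_outputs(outputs):
    """Record a hash over a plurality of outputs (here modeled as a
    dict mapping a named location to its value)."""
    h = hashlib.sha256()
    for location in sorted(outputs):
        h.update(f"{location}={outputs[location]};".encode())
    return h.hexdigest()

def record(trace, chunk_id, inputs, outputs):
    # Store the inputs snapshot plus information indicative of the outputs.
    trace[chunk_id] = {"inputs": dict(inputs),
                       "output_hash": hash_outputs(outputs)}

def replay_is_reliable(trace, chunk_id, replayed_outputs):
    # After replaying from the inputs snapshot, compare the replayed
    # outputs against the recorded output hash.
    return trace[chunk_id]["output_hash"] == hash_outputs(replayed_outputs)
```

A matching hash suggests the replay reliably represents the original execution; a mismatch suggests an interference affected the outputs at record time.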
As another example, information indicative of processor state could include occasional snapshot(s) of at least a portion of processor state (e.g., a copy or a hash of one or more processor registers). Then, during replay of the target code portion, these snapshot(s) can be compared to processor state generated during replay to determine whether or not the replayed processor state matches the original processor state (and, by extension, whether or not the replay is reliable). As another example, information indicative of processor state could include occasional processor event-based samples. For instance, an event sample could comprise a sample generated using INTEL Precise Event-Based Sampling (PEBS) or similar technologies. Then, during replay of the target code portion, these samples can be compared to processor samples generated during replay to determine whether or not the replayed samples match the original samples (and, by extension, whether or not the replay is reliable). Other embodiments might also emulate a targeted code portion to capture a bit-accurate trace of the targeted code portion, but leverage the identified input(s) to the targeted code portion to reduce the overheads of performing that emulation as compared to conventional emulation-based recording techniques. In embodiments, overheads of performing the emulation are reduced by using the identified input(s) to prime a cache (e.g., a processor cache, an emulator cache, etc.) with the inputs needed by the targeted code portion prior to emulating the code portion. In this way, cache misses on memory corresponding to the identified inputs are avoided during the emulation, increasing emulation performance. Additional embodiments might further defer validating these primed cache entries against a backing memory when the targeted code portion accesses a cache entry during emulation, further increasing emulation performance. To the accomplishment of the foregoing, FIG.
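A toy model of cache priming, assuming a dict-backed emulator cache; the class and its interface are invented for illustration only:

```python
class PrimedEmulatorCache:
    """Toy emulator-side cache. Entries primed from the identified inputs
    are served directly, so emulation of the target code portion takes
    no cache misses on memory corresponding to those inputs."""

    def __init__(self, backing_memory):
        self.backing = backing_memory   # models system memory
        self.entries = {}
        self.misses = 0

    def prime(self, identified_inputs):
        # Load each identified input into the cache before emulation
        # begins (validation against backing memory is deferred).
        for addr, value in identified_inputs.items():
            self.entries[addr] = value

    def read(self, addr):
        if addr not in self.entries:
            self.misses += 1            # cache miss: fetch from backing memory
            self.entries[addr] = self.backing[addr]
        return self.entries[addr]

memory = {0x100: 7, 0x104: 8}
cache = PrimedEmulatorCache(memory)
cache.prime({0x100: 7, 0x104: 8})       # inputs identified ahead of emulation
total = cache.read(0x100) + cache.read(0x104)   # no misses during "emulation"
```

Without the `prime` call, both reads would miss; with it, the emulated chunk's input reads hit the cache every time.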
1A illustrates an example computing environment 100a that facilitates performing targeted partial recordings of executable entities. As depicted, computing environment 100a may comprise or utilize a special-purpose or general-purpose computer system 101, which includes computer hardware, such as, for example, one or more processors 102, system memory 103, durable storage 104, and/or network device(s) 105, which are communicatively coupled using one or more communications buses 106. Embodiments within the scope of the present invention can include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general-purpose or special-purpose computer system. Computer-readable media that store computer-executable instructions and/or data structures are computer storage media. Computer-readable media that carry computer-executable instructions and/or data structures are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media. Computer storage media are physical storage media (e.g., system memory 103 and/or durable storage 104) that store computer-executable instructions and/or data structures. Physical storage media include computer hardware, such as RAM, ROM, EEPROM, solid state drives (“SSDs”), flash memory, phase-change memory (“PCM”), optical disk storage, magnetic disk storage or other magnetic storage devices, or any other hardware storage device(s) which can be used to store program code in the form of computer-executable instructions or data structures, which can be accessed and executed by a general-purpose or special-purpose computer system to implement the disclosed functionality of the invention. 
Transmission media can include a network and/or data links which can be used to carry program code in the form of computer-executable instructions or data structures, and which can be accessed by a general-purpose or special-purpose computer system. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired and wireless) to a computer system, the computer system may view the connection as transmission media. Combinations of the above should also be included within the scope of computer-readable media. Further, upon reaching various computer system components, program code in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., network device(s) 105), and then eventually transferred to computer system RAM (e.g., system memory 103) and/or to less volatile computer storage media (e.g., durable storage 104) at the computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media. Computer-executable instructions comprise, for example, instructions and data which, when executed at one or more processors, cause a general-purpose computer system, special-purpose computer system, or special-purpose processing device to perform a certain function or group of functions.
Computer-executable instructions may be, for example, machine code instructions (e.g., binaries), intermediate format instructions such as assembly language, or even source code. Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. As such, in a distributed system environment, a computer system may include a plurality of constituent computer systems. In a distributed system environment, program modules may be located in both local and remote memory storage devices. Those skilled in the art will also appreciate that the invention may be practiced in a cloud computing environment. Cloud computing environments may be distributed, although this is not required. When distributed, cloud computing environments may be distributed internationally within an organization and/or have components possessed across multiple organizations. In this description and the following claims, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). The definition of “cloud computing” is not limited to any of the other numerous advantages that can be obtained from such a model when properly deployed.
A cloud computing model can be composed of various characteristics, such as on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud computing model may also come in the form of various service models such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). The cloud computing model may also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. Some embodiments, such as a cloud computing environment, may comprise a system that includes one or more hosts that are each capable of running one or more virtual machines. During operation, virtual machines emulate an operational computing system, supporting an operating system and perhaps one or more other applications as well. In some embodiments, each host includes a hypervisor that emulates virtual resources for the virtual machines using physical resources that are abstracted from view of the virtual machines. The hypervisor also provides proper isolation between the virtual machines. Thus, from the perspective of any given virtual machine, the hypervisor provides the illusion that the virtual machine is interfacing with a physical resource, even though the virtual machine only interfaces with the appearance (e.g., a virtual resource) of a physical resource. Examples of physical resources include processing capacity, memory, disk space, network bandwidth, media drives, and so forth. As shown in FIG. 1A, each processor 102 can include (among other things) one or more processing units 107 (e.g., processor cores) and one or more caches 108. Each processing unit 107 loads and executes machine code instructions via the caches 108.
During execution of these machine code instructions at one or more execution units 107b, the instructions can use internal processor registers 107a as temporary storage locations and can read and write to various locations in system memory 103 via the caches 108. In general, the caches 108 temporarily cache portions of system memory 103; for example, caches 108 might include a “code” portion that caches portions of system memory 103 storing application code, and a “data” portion that caches portions of system memory 103 storing application runtime data. If a processing unit 107 requires data (e.g., code or application runtime data) not already stored in the caches 108, then the processing unit 107 can initiate a “cache miss,” causing the needed data to be fetched from system memory 103—while potentially “evicting” some other data from the caches 108 back to system memory 103. As illustrated, the durable storage 104 can store computer-executable instructions and/or data structures representing executable software components; correspondingly, during execution of this software at the processor(s) 102, one or more portions of these computer-executable instructions and/or data structures can be loaded into system memory 103. For example, the durable storage 104 is shown as potentially storing computer-executable instructions and/or data structures corresponding to a tracing component 109, a debugging component 110, an emulation component 111, and one or more application(s) 112. The durable storage 104 can also store data, such as one or more recorded execution(s) 113 (e.g., traces of application(s) 112 that are generated using historic debugging technologies). In general, the tracing component 109 records or “traces” execution of one or more of application(s) 112 into the recorded execution(s) 113.
The tracing component 109 can record execution of application(s) 112 whether that execution be a “live” execution on the processor(s) 102 directly, whether that execution be a “live” execution on the processor(s) 102 via a managed runtime, and/or whether that execution be an emulated execution via the emulation component 111. Thus, FIG. 1A also shows that the tracing component 109 is also loaded into system memory 103 (i.e., tracing component 109′). An arrow between tracing component 109′ and recorded execution(s) 113′ indicates that the tracing component 109′ can record trace data into recorded execution(s) 113′ (which might then be persisted to the durable storage 104 as recorded execution(s) 113). The tracing component 109 can correspond to any type of tool that records a recorded execution 113 as part of execution or emulation of an application 112. For instance, the tracing component 109 might be part of a hypervisor, an operating system kernel, a debugger, a profiler, etc. As will be explained in more detail in connection with FIG. 1B, in accordance with the embodiments herein the tracing component 109 can reduce the overheads of trace recording by performing a limited recording of an entity based on recording only targeted code portions of the entity. In general, the debugging component 110 leverages the emulation component 111 in order to emulate execution of code of executable entities, such as application(s) 112, based on execution state data obtained from one or more of the recorded execution(s) 113. Thus, FIG. 1A shows that the debugging component 110 and the emulation component 111 are loaded into system memory 103 (i.e., debugging component 110′ and emulation component 111′), and that the application(s) 112 are being emulated within the emulation component 111′ (i.e., application(s) 112′). The debugging component 110 can correspond to any type of tool that consumes a recorded execution 113 as part of analyzing a prior execution of an application 112.
For instance, the debugging component 110 might be a debugger, a profiler, a cloud service, etc. It is noted that, while the tracing component 109, the debugging component 110, and/or the emulation component 111 might each be independent components or applications, they might alternatively be integrated into the same application (such as a debugging suite), or might be integrated into another software component—such as an operating system component, a hypervisor, a cloud fabric, etc. As such, those skilled in the art will also appreciate that the invention may be practiced in a cloud computing environment of which computer system 101 is a part. For instance, while these components 109-111 might take the form of one or more software applications executed at a user's local computer, they might also take the form of a service provided by a cloud computing environment. It was mentioned previously that, in embodiments, the tracing component 109 can provide functionality for reducing the overheads of trace recording by performing a limited recording of an entity based on recording only targeted code portions of the entity. To demonstrate how the tracing component 109 might accomplish the foregoing embodiments, FIG. 1B illustrates an example 100b of a tracing component 109 that is configured to perform limited recordings of entities based on recording only targeted code portions of the entity using knowledge of inputs to those targeted code portions. The depicted tracing component 109 in FIG. 1B includes a variety of components (e.g., inputs/outputs identification 114, execution supervision 115, target recording 116, etc.) that represent various functions that the tracing component 109 might implement in accordance with various embodiments described herein.
It will be appreciated that the depicted components—including their identity, sub-components, and arrangement—are presented merely as an aid in describing various embodiments of the tracing component 109 described herein, and that these components are non-limiting to how software and/or hardware might implement various embodiments of the tracing component 109 described herein, or of the particular functionality thereof. In general, the tracing component 109 performs a limited recording of an entity based on an identification of inputs to targeted code portions of that entity. In embodiments, a targeted code portion comprises a chunk of a sequence of executable instructions that consume zero or more inputs and that produce one or more outputs. A targeted chunk of executable instructions could comprise, for example, one or more functions, one or more modules, one or more basic blocks, a sequence of instructions between thread synchronization events, a sequence of instructions between thread transitions, etc. In embodiments, it is possible for a targeted chunk of instructions to include sequences of instructions that have one or more gaps within their execution. For example, a chunk of instructions might include a sequence of instructions that make a kernel call in the middle of their execution. In this case, the gap might be dealt with by recording any side effects of having executed the kernel call (e.g., by recording memory locations and registers modified by the kernel call). Alternatively, the gap might be avoided by dividing the chunk into two different chunks—one including the instruction(s) before the gap and one including the instruction(s) after the gap. As shown, the tracing component 109 can include an inputs/outputs identification component 114 that can analyze an entity in order to identify all inputs to chunks of instructions that will, or could, be targeted for trace recording.
Optionally, the inputs/outputs identification component 114 might also identify all outputs from those chunks of instructions. As used herein, an “input” to a chunk of instructions is defined as any data location from which the chunk of instructions reads, and to which the chunk itself has not written prior to the read. These data locations could include, for example, registers as they existed at the time the chunk was entered, and/or any memory location from which the chunk reads and which it did not itself allocate. An edge case may arise if a chunk allocates memory and then reads from that memory prior to writing to it (i.e., a read from uninitialized memory). In these instances, embodiments might either treat the read from uninitialized memory as an input, or as a bug. It is noted that, by the foregoing definition, the “inputs” to a chunk of instructions are more expansive than just those parameters that are passed to that chunk. For instance, if a chunk of instructions corresponds to a function, the chunk's inputs would include each memory and/or register location corresponding to each parameter passed to the function (if any) and which are read by the function. However, in addition, the chunk's inputs would also include such things as memory and/or register locations corresponding to global variables that are read by the function, and memory and/or register locations derived from other inputs and which are read by the function. For example, if an input to a function includes a reference to the beginning of an array or a linked list, each element of that array or linked list that is read by the function is also an input to the function. As another example, if an input to a function comprises a pointer to a memory address, any memory location that is read by the function based on an offset from that memory address is also an input to the function.
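The definition above—any location the chunk reads before it has itself written to it—can be sketched over a toy event stream; the `('read', loc)` / `('write', loc)` event format is invented for illustration:

```python
def identify_inputs(ops):
    """Given a chunk as an ordered list of ('read', loc) and ('write', loc)
    events, return the set of locations that qualify as inputs: locations
    read by the chunk to which the chunk has not previously written."""
    written = set()
    inputs = set()
    for kind, loc in ops:
        if kind == "write":
            written.add(loc)
        elif kind == "read" and loc not in written:
            inputs.add(loc)   # read not preceded by the chunk's own write
    return inputs
```

Note that a read of `rbx` after the chunk's own write to `rbx` is not an input, while reads of global variables or pointer-derived memory locations are—matching the observation that inputs are more expansive than passed parameters.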
As used herein, an “output” is defined as any data location (e.g., register and/or memory location) to which the chunk of instructions writes that it does not later deallocate. As examples, outputs can include global variables written to by the chunk, memory locations written to by the chunk based on a pointer passed to the chunk, function return values (i.e., if the chunk corresponds to a function), and the like. Notably, a stack allocation at entry of the chunk, followed by a write by the chunk to the allocated area, followed by a stack deallocation at exit from the chunk, leaves nothing behind, and thus could be excluded as an output for the chunk, since that memory was deallocated by the chunk. In addition, if a chunk is delimited by application binary interface (ABI) boundaries (e.g., if the chunk corresponds to a function), then any volatile registers (i.e., registers not used to pass a return value) at exit are implicitly “deallocated” (i.e., they are discarded by the ABI)—and could be excluded as outputs for the chunk. Notably, the identified inputs and/or outputs could be more expansive than just the locations meeting the foregoing definitions. For example, implementations might treat all written named locations as outputs from a chunk (even if they are deallocated by the chunk) as these written-to locations would be a superset of the outputs meeting the foregoing definition of an output, or might treat all read named locations as inputs to a chunk as these read-from locations would be a superset of all the inputs meeting the foregoing definition of an input. It might be less computationally intensive to identify inputs/outputs when using broader definitions of inputs and/or outputs, with the tradeoff of needing to track more locations which might not strictly be inputs/outputs and which can result in larger snapshots.
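The complementary output definition—writes not later deallocated—can be sketched the same way, again over an invented event format:

```python
def identify_outputs(ops):
    """Given a chunk as an ordered list of ('write', loc) and
    ('dealloc', loc) events, return the set of locations that qualify
    as outputs: locations the chunk writes and does not later
    deallocate (e.g., stack space freed at chunk exit)."""
    written = set()
    for kind, loc in ops:
        if kind == "write":
            written.add(loc)
        elif kind == "dealloc":
            written.discard(loc)   # deallocated before exit: not an output
    return written
```

Here a stack slot written and then deallocated at exit drops out of the output set, while a written global variable remains, mirroring the examples above.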
In embodiments, the inputs/outputs identification component 114 might take as input an identity of one or more targeted chunk(s) of instructions (e.g., by a named reference such as a function or module name, by instruction address range, etc.) and identify inputs/outputs for only those identified chunk(s) of instructions. However, in embodiments the inputs/outputs identification component 114 might alternatively identify different chunk(s) of instructions in the entity automatically, and then identify inputs/outputs for each identified chunk of instructions. For example, the inputs/outputs identification component 114 might identify chunks corresponding to each function in an entity, and then identify inputs/outputs for each of those functions. As a potentially more granular example, the inputs/outputs identification component 114 might alternatively identify chunks corresponding to each basic block in an entity, and then identify inputs/outputs for each of those basic blocks. Of course, the inputs/outputs identification component 114 could use many other techniques to automatically identify chunks, and these examples are for illustrative purposes only. In embodiments, the inputs/outputs identification component 114 operates prior to, and separate from, a tracing session. Thus, the inputs/outputs identification component 114 can operate in an “offline” mode that is separate from any actual tracing process. However, it is also possible for the inputs/outputs identification component 114 to operate in an “online” mode during a tracing session. For instance, the inputs/outputs identification component 114 might operate at initiation of a tracing session, but prior to performing any tracing, to identify inputs/outputs for one or more portion(s) of an entity. Alternatively, the inputs/outputs identification component 114 might operate on-demand when a targeted chunk of instructions is identified for execution and tracing.
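Automatic chunk identification at basic-block granularity could be sketched as splitting an instruction list at branching instructions; the mnemonics and pair format are hypothetical, and a real implementation would also start new blocks at branch targets:

```python
def split_basic_blocks(instructions, branch_ops=("jmp", "je", "call", "ret")):
    """Split a straight-line list of (address, mnemonic) pairs into
    basic blocks, ending a block after each branching instruction.
    Splitting at branch *targets* is omitted for brevity."""
    blocks, current = [], []
    for addr, op in instructions:
        current.append((addr, op))
        if op in branch_ops:       # branch ends the current basic block
            blocks.append(current)
            current = []
    if current:                    # trailing fall-through block, if any
        blocks.append(current)
    return blocks
```

Each resulting block could then be handed to the inputs/outputs analysis individually, as in the basic-block example in the text.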
The inputs/outputs identification component 114 can perform one or more types of analysis to identify inputs to a given chunk of instructions and/or outputs from that chunk of instructions. In one type of analysis, the inputs/outputs identification component 114 might perform a static analysis of the instructions of the chunk (and of other chunks, if needed for context) to determine which memory and/or register locations the chunk can read from and/or which memory and/or register locations the chunk can write to. In another type of analysis, the inputs/outputs identification component 114 might perform a runtime analysis of the instructions of the chunk (and of other chunks, if needed for context). This type of runtime analysis might, for example, be based on emulating/replaying that chunk based on one or more prior recorded executions 113 of that chunk. In another type of analysis, the inputs/outputs identification component 114 might perform a static analysis of one or more recorded executions 113 of the chunk. As will be appreciated by one of ordinary skill in the art, the number and location of the inputs that a given chunk of instructions consumes, and the number and location of the outputs that a given chunk of instructions writes to, might vary from one instance to another based on the values of the inputs. As such, runtime and/or static analysis of recorded executions 113 might be particularly useful to produce a comprehensive list of inputs and outputs, particularly as the number of the recorded executions 113 analyzed increases. In embodiments, debugging symbols and/or code annotations might additionally, or alternatively, be used to identify inputs and/or outputs. For instance, embodiments might leverage NATVIS type descriptors, SAL annotations, code contracts, and the like.
In some implementations, the inputs/outputs identification component 114 might alternatively identify the inputs and/or outputs of a chunk's instructions by instrumenting those instructions (e.g., via instrumentation of source code from which the instructions were compiled, or via instrumentation of the executable instructions directly) so that the inputs and/or outputs are identified by the instrumented instructions during their execution (or emulation). For instance, instrumentations might cause each read and/or write by these instructions to be trapped, so that the locations that are subject to the read and/or write can be logged while handling the trap. In embodiments, the inputs/outputs identification component 114 can use varying levels of granularity when identifying inputs and outputs. For example, the inputs/outputs identification component 114 might granularly determine that the memory corresponding to a given input or output is a particular number of bits starting at a particular memory address. Alternatively, the inputs/outputs identification component 114 might less granularly determine that memory corresponding to a given input or output is the memory covered by a cache line that was accessed when reading the input (or writing to the output), that memory corresponding to a given input is a memory page that was accessed when reading the input (or writing to the output), etc. As will be appreciated, a more granular identification of inputs and/or outputs can result in smaller snapshots of a chunk's inputs than a less granular identification of inputs. However, performing a more granular identification of inputs and/or outputs can also take more compute time and resources than a less granular identification of inputs. As such, implementations of the inputs/outputs identification component 114 might choose a tradeoff between granularity and use of compute resources.
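The cache-line-granularity option can be sketched as rounding each accessed byte address down to its cache-line base; the 64-byte line size is an assumption typical of modern x86 processors:

```python
CACHE_LINE = 64   # assumed line size; varies by processor

def to_cache_lines(byte_addresses):
    """Less granular input identification: map each accessed byte
    address to its cache-line base address, collapsing nearby inputs
    into one tracked region per line."""
    return sorted({addr - (addr % CACHE_LINE) for addr in byte_addresses})
```

Four byte-granular inputs can thus collapse into two tracked cache-line regions: cheaper to identify and track, at the cost of snapshotting some bytes that are not strictly inputs.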
The execution supervision component 115 can observe and control a target entity's execution at the processor(s) 102. If the target entity comprises native code, the execution supervision component 115 might observe and interact with the entity directly. If the target entity comprises managed code, the execution supervision component 115 might additionally, or alternatively, interact with a managed runtime. In embodiments, the execution supervision component 115 might attach to a currently-running instance of the target entity in order to trace the entity, and/or might initiate execution of the target entity in order to trace the entity. The target recording component 116 operates to perform a limited, targeted recording of an entity that is being supervised by the execution supervision component 115, based on recording only targeted chunks of the entity. During a supervised execution of a target entity by the execution supervision component 115, the target identification component 117 can determine when a target chunk of executable instructions is to be executed as part of the execution of the executable entity. For example, the target identification component 117 might identify when a function, basic block, module, etc. of interest is to be executed as part of the entity. Then, based on the upcoming execution of the target chunk of executable instructions, the inputs identification component 118 identifies each input to that target chunk (e.g., input registers, memory locations, etc.). In some embodiments, the inputs identification component 118 might consult inputs data that was previously generated by the inputs/outputs identification component 114 (e.g., in an offline mode), while in other embodiments, the inputs identification component 118 might call the inputs/outputs identification component 114 to have those inputs identified for the identified target chunk of executable instructions (e.g., in an online mode). 
In some embodiments, the inputs identification component 118 relies on execution of the target chunk of executable instructions, themselves, for identification of the inputs. For example, the target chunk of executable instructions might have been previously instrumented so that the instrumentations identify the inputs as they are read by the instrumented instructions. Regardless of how the inputs were identified, the inputs recording component 119 can record a snapshot of those inputs into a recorded execution 113, as part of a trace of the target chunk of executable instructions. For example, the inputs recording component 119 can identify a value for each input (e.g., by consulting the memory location(s) or register(s) corresponding to each input), and store an identification of those memory location(s)/register(s), and their corresponding value(s), into a recorded execution 113 of the target entity. In embodiments, this snapshot data is stored along with information identifying the target chunk of executable instructions. For instance, this information could include a copy of the target chunk of executable instructions, themselves; it could include a memory address or memory address range corresponding to those instructions; it could include an identification of the instructions by function or module name; and the like. For most chunks of instructions, recording a snapshot of the inputs to the chunk is sufficient to fully and faithfully replay that chunk based on the snapshot. As such, in embodiments, the target recording component 116 forgoes recording additional data for the target chunk of executable instructions, and lets those instructions execute normally at the processor(s) 102. In these instances, the target recording component 116 has traced execution of that target chunk without actually emulating the instructions in the target chunk. 
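The snapshot described above can be sketched as follows (the dictionary layout and the `read_value` callback are illustrative assumptions; the text only requires that input locations, their values, and an identification of the chunk be stored together):

```python
def record_snapshot(chunk_id, inputs, read_value):
    """Capture each input's location and current value, keyed to the chunk.
    `read_value` stands in for consulting the memory location or register
    corresponding to an input; `chunk_id` could be a function name, module
    name, or address range identifying the target chunk."""
    return {
        "chunk": chunk_id,
        "inputs": {loc: read_value(loc) for loc in sorted(inputs)},
    }

# Toy state standing in for memory/registers at tracing time:
state = {"mem:0x110": 42, "rbx": 7}
snap = record_snapshot("my_function", {"mem:0x110", "rbx"}, state.__getitem__)
```

A replay can then restore exactly these locations and values before emulating the chunk, which is why, absent data races, the snapshot alone suffices to reproduce the chunk's execution.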
Thus, while a relatively small amount of compute and memory/storage resources may have been consumed to create and store the snapshot, the tracing has had no additional impact on execution of the instructions, themselves, at the processor(s) 102. Therefore, the overheads of recording those instructions have been greatly reduced, as compared to prior emulation-based tracing techniques. However, a snapshot of the inputs to a target chunk of instructions may sometimes not be sufficient to fully and faithfully replay that chunk if another entity (e.g., another thread in the same process) wrote to that chunk's inputs during the chunk's execution at the processor(s) 102. These situations are commonly referred to as data races. In embodiments, the tracing component 109 may also record, into the recorded execution 113, additional information usable to verify whether a data race may have affected execution of a target chunk of instructions in a manner that cannot be reproduced from the snapshot of its inputs. For example, the target recording component 116 is shown as potentially including an outputs recording component 120. If included, the outputs recording component 120 may also record, into the recorded execution 113, information indicative of the output(s) that were generated by the execution of the target chunk of executable instructions at the processor(s) 102. For example, the outputs recording component 120 might record a snapshot of the output(s) (e.g., each output's memory address or register name, and corresponding value), it might record a hash for each output (e.g., a hash over the output's address/name and/or its value), it might record a hash over an aggregation of different outputs, and the like. Then, during replay, this information indicative of the output(s) can be compared to output data that is generated by replay of the target chunk of executable instructions based on the inputs snapshot. 
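One of the options mentioned — a hash over an aggregation of outputs — can be sketched like this (SHA-256 and the serialization format are arbitrary choices for illustration):

```python
import hashlib

def outputs_digest(outputs):
    """Hash an aggregation of (location, value) outputs. Sorting makes the
    digest independent of the order outputs were collected in."""
    h = hashlib.sha256()
    for loc, val in sorted(outputs.items()):
        h.update(f"{loc}={val};".encode())
    return h.hexdigest()

# At tracing time, record a digest of the chunk's actual outputs:
traced = outputs_digest({"mem:0x1C0": 49, "rax": 49})
# At replay time, recompute the digest from the replayed outputs and compare:
replayed = outputs_digest({"mem:0x1C0": 49, "rax": 49})
```

A matching digest means the replay is reliable at least as to the outputs; a mismatch suggests a data race during tracing. Storing only the digest keeps the trace small at the cost of not identifying which output differed.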
If the data does not match, then a data race probably occurred during tracing of the target chunk, and the replay does not accurately represent what actually occurred during tracing. If the data does match, then the replay can be deemed reliable/trustworthy, at least as to the outputs. Notably, matching output data may not conclusively indicate that a data race did not occur during tracing, since a data race might have actually occurred during the execution of the chunk at the processor(s) 102, but that data race may have had no effect on the outputs of the chunk. As another example, the target recording component 116 is shown as potentially including a processor state recording component 121. If included, the processor state recording component 121 may also record, into the recorded execution 113, information indicative of at least a portion of processor state while the target chunk executed at the processor(s) 102. For example, the processor state recording component 121 might record all, or part, of a processor control flow trace. Also referred to as a branch trace, a control flow trace is generally generated by a processor 102, itself, and records which control flow instructions resulted in a branch being taken or not taken. While many processor architectures support generation of control flow traces, one example is INTEL's IPT. A recorded control flow trace for a given chunk of instructions can be compared to the control flow taken by those instructions during replay to determine if the replay accurately reproduces the original execution. Additionally, or alternatively, the processor state recording component 121 might record occasional snapshots of processor state. For instance, during execution of a target chunk, the processor state recording component 121 might store occasional hashes based on processor registers at a given point in execution, or record the actual values of those registers. 
Then, during replay of the target chunk, this recorded processor state can be compared to emulated processor state to determine if the replay accurately reproduces the original execution. Additionally, or alternatively, the processor state recording component 121 might record occasional processor event-based samples in connection with execution of a target chunk, such as those generated by technologies such as INTEL's PEBS. Then, during replay of the target chunk, these recorded samples can be compared to emulated samples to determine if the replay accurately reproduces the original execution. Notably, any of the foregoing recorded processor state might be usable, at replay time, to help estimate when/where a memory race occurred. For instance, if outputs generated during replay don't match the outputs generated during tracing, embodiments might identify which output(s) are different and work backwards through the chunk of instructions to identify those instructions whose execution affected (or could affect) each of those outputs. Using recorded processor state can reduce the search space of this analysis, by helping to pinpoint where execution of those instructions diverged. In other embodiments, the tracing component 109 actually verifies, during recording, whether a data race affected execution of a target chunk of instructions in a manner that cannot be reproduced from the snapshot of its inputs alone. For instance, the target recording component 116 might take the outputs verification concept above even further by verifying outputs itself during trace recording. For example, the target recording component 116 is shown as potentially including an execution validation component 122. In embodiments, the execution validation component 122 creates a fork of the entity that is under observation prior to executing the target chunk of instructions. 
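The pinpointing step can be sketched as a comparison of the recorded branch trace against the replay's branch trace — represented here as simple taken/not-taken lists, a heavy simplification of a real packetized trace such as IPT:

```python
def first_divergence(recorded_branches, replayed_branches):
    """Compare a recorded branch (taken/not-taken) trace against the replay's
    trace; return the index of the first divergent branch, or None if the
    replay reproduced the recorded control flow exactly."""
    for i, (rec, rep) in enumerate(zip(recorded_branches, replayed_branches)):
        if rec != rep:
            return i
    if len(recorded_branches) != len(replayed_branches):
        # One execution took extra branches: diverged where the shorter ended.
        return min(len(recorded_branches), len(replayed_branches))
    return None

# Identical traces: replay reproduces the original execution.
same = first_divergence([True, True, False], [True, True, False])
# Traces differ at the second branch: narrows the search for the race.
diff = first_divergence([True, True, False], [True, False, False])
```

Knowing the first divergent branch bounds the backwards search described above: only instructions up to that branch could have consumed the raced-on value.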
Then, both the target entity and the fork of the target entity are permitted to execute their respective copy of the target chunk of instructions. FIG. 2, for example, illustrates an example 200 representing a timeline of execution of related threads 201. In particular, thread 201a represents an entity that is under observation by the execution supervision component 115. At execution time point 203a, example 200 shows that execution of thread 201a is forked (e.g., by the execution validation component 122) to initiate execution of thread 201a′. Execution time point 203a might correspond, for example, to the beginning of execution of a target chunk of instructions that is being traced by the target recording component 116. In embodiments, creating a fork of the entity creates a separate memory space for the forked entity. Thus, thread 201a′ executes its copy of the target chunk of instructions using a different memory space than thread 201a, and is therefore isolated from data races occurring on thread 201a. For example, arrow 202 shows that, during execution of the target chunk of instructions on forked thread 201a′, thread 201b performs a write to memory used by thread 201a. However, as shown, this write does not affect forked thread 201a′. After execution of the target chunk of instructions, the execution validation component 122 can compare the outputs of the target chunk of instructions generated by thread 201a with the outputs of the target chunk of instructions generated by forked thread 201a′. For example, a line at execution time point 203b represents a comparison (e.g., by the execution validation component 122) of the outputs of executing the target instructions on each of original thread 201a and forked thread 201a′. 
If these outputs match, then the inputs snapshot generated by the inputs recording component 119 can be deemed a reliable representation of execution of the target chunk on thread 201a. If they do not match, however, then a data race probably occurred on thread 201a (i.e., the write by thread 201b at arrow 202), and the inputs snapshot cannot be deemed a reliable representation of execution of the target chunk on thread 201a. In this latter case, the target recording component 116 might record an indication that the recorded snapshot is unreliable, might choose not to record the snapshot, might raise an alert, etc. In embodiments, the execution validation component 122 can support execution of a chunk of instructions in a forked thread, even when that chunk of instructions makes a kernel call or a call to other non-traced code. For instance, the execution validation component 122 might determine whether or not the call is idempotent (e.g., based on whether it writes to any of the chunk's inputs and/or outputs). If the call is idempotent, the execution validation component 122 might simply allow the forked thread to make the call. If the call is non-idempotent, however, the execution validation component 122 might treat the chunk of instructions as two different chunks—a first leading up to the call, and a second after the call. Thus, the inputs identification component 118 can identify inputs and outputs for each of these chunks. Then, the execution validation component 122 can compare the outputs of executing the first chunk in a forked thread with the outputs of executing the first chunk in the original thread, and also compare the outputs of executing the second chunk in a forked thread with the outputs of executing the second chunk in the original thread. In embodiments, the first and second chunks might be executed in different forked threads. 
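The fork-and-compare idea can be sketched without a real `fork()` by giving the "fork" a deep copy of memory, so a racing write to the original memory space cannot reach it (the toy chunk and the `racing_write` parameter are purely illustrative):

```python
import copy

def run_chunk(memory):
    """Toy 'target chunk': reads its inputs from memory, writes an output."""
    memory["out"] = memory["a"] + memory["b"]
    return {"out": memory["out"]}

def validate_by_forking(memory, racing_write=None):
    """Execute the chunk in both the original memory space and an isolated
    copy (standing in for fork()'s separate memory space), then compare
    outputs. A mismatch indicates a probable data race on the original."""
    forked = copy.deepcopy(memory)
    if racing_write is not None:
        # Simulate another thread writing to the chunk's inputs mid-execution;
        # the write lands only in the original memory space.
        loc, val = racing_write
        memory[loc] = val
    return run_chunk(memory) == run_chunk(forked)
```

With no racing write the outputs match and the inputs snapshot is deemed reliable; with a racing write to input `b` the outputs differ, flagging the snapshot as unreliable.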
However, it might also be possible to execute them in the same forked thread if the call takes the same inputs in both the original and forked threads, by applying the side effects of executing the call on the original thread (i.e., its writes) to the memory space of the forked thread prior to the second chunk's execution on the forked thread. If the precise set of inputs to the call cannot be determined, they might be proxied as the inputs to the first chunk plus the set of outputs produced by the first chunk in both forks. In other embodiments, the execution validation component 122 additionally, or alternatively, relies on use of page table entries (PTEs) to determine, during recording, if another thread interferes with the thread being traced. In these embodiments, the execution validation component 122 might even be able to identify the particular input(s) that were interfered with. In particular, the execution validation component 122 can modify the PTEs for any pages corresponding to inputs to the target chunk of instructions as being protected—e.g., as being valid for the subject thread and invalid for other threads. Then, if any other thread attempts to write to memory in those pages during execution of the target chunk of instructions on the subject thread, a page fault will occur and a potential interference can be noted. In embodiments, the execution validation component 122 could even use PTEs to determine if the subject thread tried to access a memory location that was not included in its list of identified inputs. For example, the execution validation component 122 can modify the PTEs for any pages not corresponding to inputs to the target chunk of instructions as being protected—e.g., as being invalid for the subject thread. Then, if the subject thread attempts to access memory in those pages during execution of the target chunk of instructions, a page fault will occur and an access to memory not identified as an input can be noted. 
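Both PTE-based checks can be modeled together in a small sketch — input pages are "valid" only for the subject thread, and the subject thread faults on any page outside its identified inputs (the class, fault labels, and 4 KiB page size are assumptions for illustration):

```python
class PageProtector:
    """Toy model of the PTE technique: pages covering the chunk's inputs are
    valid for the subject thread and invalid (for writes) for other threads;
    all other pages are invalid for the subject thread."""
    PAGE = 4096

    def __init__(self, input_addresses):
        self.input_pages = {a & ~(self.PAGE - 1) for a in input_addresses}
        self.faults = []   # stands in for page-fault handling that logs events

    def access(self, thread, address, is_write):
        page = address & ~(self.PAGE - 1)
        if thread == "subject" and page not in self.input_pages:
            # Subject touched memory not in its inputs list: list incomplete.
            self.faults.append(("unlisted-input", address))
        elif thread != "subject" and is_write and page in self.input_pages:
            # Another thread wrote an input page: potential interference.
            self.faults.append(("interference", address))
```

Note the page granularity: a fault only says some location on the page was touched, which is why the text says the component "might even be able to" identify the particular input interfered with, rather than always pinpointing it.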
In embodiments, the target recording component 116 might choose to actually emulate the target chunk of executable instructions. As such, the target recording component 116 is shown as potentially including an emulation component 123. In connection with emulation by the emulation component 123, the target recording component 116 can record a detailed trace of those instructions' execution. For example, a user might have provided an indication that the target chunk of executable instructions should be recorded with a greater level of detail (e.g., due to a particular interest in the behaviors of that target chunk). As another example, the target recording component 116 might determine (e.g., based on the execution validation component 122) that a prior recording of the target chunk exhibited signs of a data race, triggering a more granular trace of the current instance of the target chunk. While emulation and recording of the target chunk will incur additional overheads versus simply letting the chunk execute normally at the processor(s) 102, the emulation component 123 can leverage knowledge of the chunk's inputs (i.e., as identified by the inputs identification component 118) to improve the performance of that emulation. In embodiments, prior to emulating the target chunk, the emulation component 123 primes a cache (e.g., cache(s) 108) with cache entries covering memory addresses/values stored in the snapshot recorded by the inputs recording component 119. For example, FIG. 3 illustrates an example 300 of priming a cache with inputs. In particular, example 300 shows a portion of a memory 301 (e.g., system memory 103) and a portion of a cache 302 (e.g., cache(s) 108). In FIG. 3, memory locations 0x110 through 0x160 and memory locations 0x1C0 through 0x1D0 have been identified by the emulation component 123 as corresponding to inputs to a target chunk of instructions. 
For example, these inputs might correspond to four 32-bit variables, stored beginning at memory addresses 0x110, 0x130, 0x150, and 0x1C0. As shown, prior to emulating the target chunk of instructions, the emulation component 123 can prime the cache 302 with cache entries that cover these memory locations. Then, when the emulation component 123 emulates the target chunk of instructions, cache misses can be avoided when the target chunk of instructions accesses these memory locations, greatly speeding up the emulation. When a cache utilizes an existing cache entry, the cache may validate the value stored in the cache entry against the cache's backing store (e.g., system memory 103) to ensure that the data in the cache entry is current. In embodiments, after priming a cache, the emulation component 123 causes these validations to be deferred—again, greatly speeding up the emulation. In embodiments, these deferrals are based on an assumption that the entity being recorded does not perform cross-thread coordination (e.g., via shared memory with other threads) without first properly using cross-thread synchronization techniques (e.g., mutexes, semaphores, etc.). Thus, the emulation component 123 might cause these validations to be deferred until the next cross-thread synchronization event. At a cross-thread synchronization event, the emulation component 123 might cause the cache entries to be fully validated (e.g., by confirming each one with system memory 103). Alternatively, the emulation component 123 might cause the cache entries to be lazily validated. For example, in FIG. 3, the cache 302 is shown as including a flag for each cache entry. In embodiments, this flag can be used to indicate if the corresponding cache entry should be validated against the backing store the next time it is accessed. Thus, for example, the emulation component 123 might cause these flags to be cleared when a cache entry is primed. 
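The priming, miss-detection, and lazy-validation behaviors can be sketched in one small class (the interface and flag representation are assumptions; the patent describes an emulated hardware cache, not a Python object):

```python
class PrimedCache:
    """Sketch of priming an emulation cache from the inputs snapshot, with
    per-entry flags that defer backing-store validation until after a
    cross-thread synchronization event."""

    def __init__(self, backing):
        self.backing = backing   # stands in for system memory
        self.entries = {}        # address -> [value, needs_validation]

    def prime(self, snapshot):
        # Primed entries start with the validation flag cleared.
        for addr, value in snapshot.items():
            self.entries[addr] = [value, False]

    def read(self, addr):
        """Return (value, missed). A miss means the inputs list was
        incomplete; it is reported so the trace and inputs list can be fixed."""
        entry = self.entries.get(addr)
        if entry is None:
            value = self.backing[addr]
            self.entries[addr] = [value, False]
            return value, True
        if entry[1]:  # lazy validation: refresh from backing on next access
            entry[0], entry[1] = self.backing[addr], False
        return entry[0], False

    def synchronization_event(self):
        # Tag every entry for lazy validation rather than validating eagerly.
        for entry in self.entries.values():
            entry[1] = True
```

Between synchronization events, reads hit primed entries with no backing-store traffic; after an event, each entry is refreshed only when next touched.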
Then, at a cross-thread synchronization event, the emulation component 123 might cause these flags to be set (at least for the primed cache entries). Later, if one of these cache entries is accessed, it can be validated against the backing store, updated if needed, and its flag can be cleared. Notably, by priming a cache, the emulation component 123 can determine, at tracing time, if the identification of inputs to a given chunk of instructions actually included all of the inputs. For instance, after priming the cache for a given chunk of instructions, if execution of the target instructions results in a cache miss, the emulation component 123 can determine that the memory accessed as part of the cache miss should have been included as an input. This cache miss can be recorded to ensure a complete trace, and the identification of inputs for that chunk of instructions can be updated to include this memory address. In embodiments, the target recording component 116 might additionally, or alternatively, record a more granular execution of a chunk of instructions based on instrumentation of those instructions. For example, just as instructions might be instrumented to generate an identification of their inputs (and/or outputs), they may additionally, or alternatively, be instrumented to generate a record of their reads (and the value read) and/or their writes (and the value written). As such, execution of an instrumented target chunk of instructions could result in generation of trace data that is then recorded into a recorded execution 113. FIG. 4 illustrates a flowchart of an example method 400 for performing a targeted partial recording of an executable entity. Method 400 will now be described within the context of FIGS. 1-3. While, for ease in description, the acts of method 400 are shown in a sequential linear order, it will be appreciated that some of these acts might be implemented in different orders, and/or in parallel. As shown in FIG. 
4, method 400 can include an act 401 of pre-processing inputs. In some embodiments, act 401 comprises pre-processing an executable entity to identify each input to one or more target chunks of executable instructions. For example, the inputs/outputs identification component 114 can analyze a chunk of executable instructions of an application 112 to identify the chunk's inputs and/or its outputs. Act 401 is shown in broken lines, since the act might be performed as part of a partial recording session (e.g., an “online” mode), or it might be performed prior to that session (e.g., an “offline” mode). As discussed when describing the inputs/outputs identification component 114, identifying each input to the target chunk of executable instructions might be based on having performed at least one of (i) a static analysis of the target chunk of executable instructions, (ii) a static analysis of a recorded execution of the target chunk of executable instructions, (iii) an emulation of the target chunk of executable instructions, (iv) an instrumentation of the target chunk of executable instructions, and/or (v) an analysis of at least one of debugging symbols or code annotations. Method 400 also includes an act 402 of executing a subject entity. In some embodiments, act 402 comprises executing the executable entity at the at least one processor. For example, the execution supervision component 115 can supervise execution of one or more threads of an application 112 at processor(s) 102. In embodiments, the execution supervision component 115 might initiate execution of application 112 as part of method 400, or attach to an existing instance of an application 112 already executing at processor(s) 102. Method 400 also includes an act 403 of identifying a target chunk of instructions in the entity. 
In some embodiments, act 403 comprises, while executing the executable entity, determining that a target chunk of executable instructions is to be executed as part of the execution of the executable entity. For example, based on the supervision by the execution supervision component 115, the target identification component 117 can identify when a target chunk of instructions is to be executed. For instance, the target identification component 117 might identify when a particular function is to be executed, when an instruction at a particular address is to be executed, etc. Method 400 also includes an act 404 of identifying each input to the target chunk. In some embodiments, act 404 comprises identifying each input to the target chunk of executable instructions, including identifying at least one non-parameter input. For example, the inputs identification component 118 can identify one or more inputs based on data identified by the inputs/outputs identification component 114, either during operation of method 400, or prior to operation of method 400. Method 400 also includes an act 405 of recording a snapshot of the input(s). In some embodiments, act 405 comprises recording a corresponding value for each identified input into a trace, along with information identifying the target chunk of executable instructions. For example, for each identified input, the inputs recording component 119 can obtain a value for the input (e.g., from memory, from a register, etc.) and store into a recorded execution 113 an identification of the input (e.g., memory address, register name, etc.) and a value for the input. In addition, this snapshot can be stored in the recorded execution 113 in a manner that associates it with the appropriate target chunk of instructions. For instance, the recorded execution 113 could include the instructions themselves, or a reference to the instructions (e.g., by instruction address, by function or module name, etc.). 
As discussed, after recording an inputs snapshot, the target chunk might be executed directly. Thus, method 400 might proceed to execute the target chunk at the processor(s) 102 (act 406). Alternatively, however, detailed tracing information for the target chunk might be obtained by emulating the target chunk. Thus, method 400 might alternatively proceed to emulate and record the target chunk (act 407). If method 400 proceeds to act 406 for executing the target chunk, method 400 might perform one or more validation and/or recording actions. For example, executing the target chunk in act 406 might include one or more of recording information indicative of output(s) (act 406a), recording information indicative of processor state (act 406b), validating via forking (act 406c), and/or validating via PTEs (act 406d). In act 406a, the outputs recording component 120 could record information about the output(s) of having executed the target chunk of instructions at the processor(s) 102. For instance, the outputs recording component 120 might record a corresponding value for each output of execution of the target chunk of executable instructions, or record one or more hashes based on the corresponding value for each output of the execution of the target chunk of executable instructions. This output information is then usable to validate whether a replayed execution of the target chunk of executable instructions deviated from execution of the target chunk of executable instructions as part of the execution of the executable entity. In act 406b, the processor state recording component 121 could record processor state information usable as a partial trace of the execution of the target chunk of executable instructions. 
For instance, the processor state recording component 121 might record one or more of: one or more snapshots of processor state, one or more hashes of processor state, a control flow trace (e.g., INTEL's IPT), or one or more processor event-based samples (e.g., INTEL's PEBS). This processor state is then usable to validate whether a replayed execution of the target chunk of executable instructions deviated from execution of the target chunk of executable instructions as part of the execution of the executable entity. In act 406c, the execution validation component 122 could use forking to validate, at record time, whether or not a replay based on the inputs snapshot would produce the same outputs as the execution of the target chunk at the processor(s) 102. For example, act 406c could include forking execution of the executable entity, and executing a forked target chunk of executable instructions. Then, act 406c could include comparing outputs of executing the target chunk of executable instructions with outputs of executing the forked target chunk of executable instructions, to determine if the execution of the forked target chunk of executable instructions deviated from the execution of the target chunk of executable instructions. In act 406d, the execution validation component 122 could use PTEs to validate whether another entity interfered with the subject entity. For example, the execution validation component 122 could mark one or more PTEs corresponding to each input to the target chunk of executable instructions as valid for the executable entity and invalid for other entities. Then, based on marking the one or more PTEs, the execution validation component 122 could detect a write by another entity and determine from the write that the other entity potentially interfered with execution of the target chunk of executable instructions. 
In act 406d, the execution validation component 122 could additionally, or alternatively, use PTEs to detect if the list of identified inputs was incomplete. For example, the execution validation component 122 could mark one or more PTEs not corresponding to each input to the target chunk of executable instructions as invalid for the executable entity. Then, based on marking the one or more PTEs, the execution validation component 122 could detect an access by the executable entity and then determine that the identified inputs to the target chunk of executable instructions were incomplete for the target chunk of executable instructions. Alternatively, if method 400 proceeds to act 407 for emulating and recording the target chunk, method 400 might include one or more of priming a cache with the input(s) (act 407a) and/or deferring cache entry validation (act 407b). In act 407a, the emulation component 123 could prime a cache with cache entries covering each identified input. Then, after priming the cache, the emulation component 123 could emulate execution of the target chunk of executable instructions while recording the emulated execution of the target chunk of executable instructions into the trace. After priming the cache, while emulating the execution of the target chunk of executable instructions, in act 407b the emulation component 123 could defer validation of one or more primed cache entries with a backing memory until a synchronization event. Then, in connection with reaching the synchronization event, the emulation component 123 might fully validate each of the primed cache entries with the backing memory, or tag each of the primed cache entries for a lazy validation. In embodiments, the executable entity might be instrumented to record each input to the target chunk of executable instructions. Thus, in method 400, the executable entity might generate trace data during execution of the target chunk of executable instructions. 
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above, or to the order of the acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. When introducing elements in the appended claims, the articles “a,” “an,” “the,” and “said” are intended to mean there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. 16581570 microsoft technology licensing, llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:00AM Apr 27th, 2022 09:00AM Technology Software & Computer Services Information Technology
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Feb 24th, 2017 12:00AM https://www.uspto.gov?id=US11314317-20220426 Supervisory control of power management A supervisory control system provides power management in an electronic device by providing timeout periods for a hardware component to lower levels of the operating system such as a power management arbitrator and/or a hardware interface controller. The power management arbitrator and/or hardware interface controller transition at least a portion of a hardware component to a lower-power state based on monitored activity information of the hardware component. The supervisory control system may further provide wakeup periods to the power management arbitrator and/or a hardware interface controller to determine whether the hardware component should be transitioned to a higher-power state at the end of the wakeup period if the hardware component satisfies a transition condition. 11314317 1. A supervisory control method for power management of an electronic device, the supervisory control method comprising: monitoring activity information of a hardware component of the electronic device, the activity information including information about use of the hardware component of the electronic device by one or more applications registered to use the hardware component during the monitoring operation; setting a timeout period associated with the hardware component of the electronic device based on the activity information, while continuing the monitoring operation and while at least one of the one or more applications remains registered to use the hardware component, the timeout period defining a minimum amount of time before a power state of the hardware component may be transitioned from a higher-power state to a lower-power state; and transitioning at least a portion of the hardware component from the higher-power state to the lower-power state after expiration of the timeout period while none of the at least one of the one 
or more applications are detected by the monitoring operation as using the hardware component during the timeout period and the at least one of the one or more applications remains registered to use the hardware component during the timeout period, responsive to detection that the hardware component satisfies a transition condition, the transition condition being dependent at least in part on the activity information and the timeout period. 2. The supervisory control method of claim 1, further comprising: maintaining the hardware component in the higher-power state after expiration of the timeout period responsive to detection that the hardware component does not satisfy the transition condition during the timeout period. 3. The supervisory control method of claim 1, further comprising: setting a wakeup period associated with the hardware component of the electronic device based on the activity information, the wakeup period defining a maximum amount of time before a power status of the hardware component may be transitioned from the lower-power state to a different power state; and transitioning the portion of the hardware component from the lower-power state to the different power state after expiration of the wakeup period responsive to detection that the hardware component satisfies a wakeup condition. 4. The supervisory control method of claim 3, wherein the transitioning operation includes determining whether demands on the hardware component justify a transition to the different power state. 5. The supervisory control method of claim 1, wherein the activity information includes information communicated between the hardware component and one or more applications executing on the electronic device. 6. The supervisory control method of claim 1, wherein the transitioning operation transitions a sub-component of the hardware component to a low-power state. 7. 
The supervisory control method of claim 1, wherein the transition condition is satisfied when there is no monitored activity information during the timeout period and the hardware component is idle at an end of the timeout period. 8. The supervisory control method of claim 1, further comprising: setting an initial value for the timeout period; and adjusting the initial value of the timeout period for the hardware component of the electronic device from the initial value to a different value based on at least one of the monitored activity information and a type of application using the hardware component. 9. A supervisory power management control system for an electronic device, the supervisory power management control system configured to: monitor activity information regarding a hardware component of the electronic device, the activity information including information about use of the hardware component of the electronic device by one or more applications registered to use the hardware component during the monitoring operation; set a timeout period associated with the hardware component, while continuing the monitoring operation and while at least one of the one or more applications remains registered to use the hardware component, the timeout period defining a minimum amount of time before a power state of the hardware component may be transitioned from a higher-power state to a lower-power state; transition at least a portion of the hardware component to a lower-power state, while none of the at least one of the one or more applications are detected by the monitoring operation as using the hardware component during the timeout period and the at least one of the one or more applications remains registered to use the hardware component during the timeout period, responsive to detection that the hardware component satisfies a transition condition, the transition condition being dependent at least in part on the activity information and the timeout period; and 
maintain the hardware component in the higher-power state after expiration of the timeout period responsive to detection that the hardware component does not satisfy the transition condition during the timeout period. 10. The supervisory power management control system of claim 9, further including a power management supervisory controller configured to determine one or more second timeout periods, the one or more second timeout periods based at least in part on applications executing on the electronic device, the power management supervisory controller further configured to communicate the one or more second timeout periods to a power management arbitrator. 11. The supervisory power management control system of claim 9, wherein the power management supervisory controller is further configured to: set another timeout period after the expiration of the timeout period responsive to detection that the hardware component does not satisfy the transition condition during the timeout period. 12. The supervisory power management control system of claim 9, wherein the power management supervisory controller is further configured to: set a wakeup period associated with the hardware component of the electronic device based on the activity information, the wakeup period defining a minimum amount of time before the power status of the hardware component may be transitioned from the lower-power state to a different power state; and transition the portion of the hardware component from the lower-power state to the different power state after expiration of the wakeup period responsive to detection that the hardware component satisfies a wakeup condition. 13. The supervisory power management control system of claim 9, further comprising: a hardware interface controller configured to set a timer at a beginning of the timeout period, the hardware interface controller further configured to transmit a signal to an arbitrator at the expiration of the timer. 14. 
The supervisory power management control system of claim 12, wherein the power management supervisory controller is further configured to: adjust the wakeup period for the hardware component to a different value after expiration of the wakeup period based on at least one of the activity information monitored during the wakeup period and a type of application using the hardware component. 15. One or more tangible processor-readable storage media embodied with instructions for executing on one or more processors and circuits of a device a process for supervisory power management of an electronic device, the process comprising: monitoring activity information of a hardware component of the electronic device, the activity information including information about use of the hardware component of the electronic device by one or more applications registered to use the hardware component during the monitoring operation; setting a timeout period for the hardware component of the electronic device based on the activity information, while continuing the monitoring operation and while at least one of the one or more applications remains registered to use the hardware component, the timeout period having an initial value based on the activity information; and transitioning at least a portion of the hardware component from a higher-power state to a lower-power state after expiration of the timeout period while none of the at least one of the one or more applications are detected by the monitoring operation as using the hardware component during the timeout period and the at least one of the one or more applications remains registered to use the hardware component during the timeout period, responsive to detection that the hardware component satisfies a transition condition, the transition condition being dependent at least in part on the activity information and the timeout period. 16. 
The one or more tangible processor-readable storage media of claim 15, wherein the process further comprises: maintaining the hardware component in the higher-power state after expiration of the timeout period responsive to determination that the hardware component does not satisfy the transition condition during the timeout period. 17. The one or more tangible processor-readable storage media of claim 15, wherein the process further comprises: setting a wakeup period associated with the hardware component of the electronic device based on the activity information, the wakeup period defining a minimum amount of time before a power status of the hardware component may be transitioned from the lower-power state to a different power state; and transitioning the portion of the hardware component from the lower-power state to the different power state after expiration of the wakeup period responsive to detection that the hardware component satisfies a wakeup condition. 18. The one or more tangible processor-readable storage media of claim 15, wherein the activity information includes information communicated between the hardware component and one or more applications executing on the electronic device. 19. The one or more tangible processor-readable storage media of claim 17, wherein the setting operation comprises setting a wakeup timer on the hardware component, the process further comprising receiving a wakeup signal from the hardware component at an end of the wakeup period responsive to detection that the activity information received while the hardware component is in the low-power state satisfies an activity condition. 20. The one or more tangible processor-readable storage media of claim 15, wherein the transitioning operation transitions a sub-component of the hardware component to the low-power state. 20 CROSS-REFERENCE TO RELATED APPLICATIONS The present application claims benefit of priority to U.S. Provisional Patent Application No. 
62/415,142, entitled “USAGE PATTERN INFLUENCE ON POWER MANAGEMENT” and filed on Oct. 31, 2016, U.S. Provisional Patent Application No. 62/415,183, entitled “DEVICE-AGGREGATING ADAPTIVE POWER MANAGEMENT” and filed on Oct. 31, 2016, and U.S. Provisional Patent Application No. 62/415,228, entitled “SUPERVISORY CONTROL OF POWER MANAGEMENT” and filed on Oct. 31, 2016, which are specifically incorporated by reference for all that they disclose and teach. The present application is also related to U.S. patent application Ser. No. 14/441,693, entitled “USAGE PATTERN BASED SUPERVISORY CONTROL OF POWER MANAGEMENT,” and U.S. patent application Ser. No. 15/441,778, entitled “AGGREGATED ELECTRONIC DEVICE POWER MANAGEMENT,” both of which are filed concurrently herewith and are specifically incorporated by reference for all that they disclose and teach. BACKGROUND Power consumption management of the various system hardware components in a computing device, such as network adapters, display interfaces, input/output devices, cameras, etc., becomes more challenging as more computing devices become mobile and depend on battery power as a primary power source. Various modules in an operating system (OS) contribute to the management of power, such as deciding when a component may transition to a low-power mode. Applications and OS modules, however, can block system hardware components from going to low-power. For example, some computing systems rely on applications to register their use of a system device when the device is needed by the application and release the device when the application no longer needs to use the device. This arrangement can block the hardware component from going to low-power mode, even for a brief period, when the device is not in use because applications often do not reliably register and release their requests to use the hardware component. 
Often the device could have transitioned to a lower power state, thus drawing less power from the battery than if the device had not transitioned to a low-power state. SUMMARY The described technology provides supervisory control of power management in a computing device to determine when a system hardware component should transition to a low-power state. Supervisory control of power management by an OS includes a power management arbitrator, which may be at the lowest level of the operating system, to control transitions of the hardware devices between the various power states available to each of the devices. The operating system, in cooperation with hardware components (e.g., a network adapter), can detect when the system hardware component is not using a resource (e.g., network is idle) and transition to low-power, regardless of the level of active state of the rest of the system or of the registration status for the hardware component of applications executing on the device. Effectively, this supervisory control seeks to increase the probability that the power provided to the device is only used when the device is working (e.g., network adapter is transmitting or receiving network traffic), although some level of non-optimal power management may be expected. This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Other implementations are also described and recited herein. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 illustrates an example environment for supervisory control of power management of a device with a power usage plot. FIG. 2 illustrates an example environment for supervisory control of power management on a device. FIG. 
3 is a plot of example power usage of a hardware component and a signal diagram illustrating interaction between the components of a device under a supervisory control of power management environment. FIG. 4 is a plot of example power usage of a hardware component and a signal diagram illustrating interaction between the components of a device under a supervisory control of power management environment. FIG. 5 is a plot of example power usage of a hardware component and a signal diagram illustrating interaction between the components of a device under a supervisory control of power management environment. FIG. 6 illustrates plots of example power status of an application, hardware device, and network traffic of a device under a supervisory control of power management environment. FIG. 7 illustrates a flow diagram showing example operations for providing supervisory control of power management in a computing device. FIG. 8 illustrates an example processing system for use in supervisory control of power management in a computing device. DETAILED DESCRIPTION Supervisory control of power management allows a system hardware component (e.g., a network adapter, video adapter, input/output device, etc.) to transition to a low-power mode after all applications have relinquished use of the system hardware component (e.g., transmitting or receiving network communications, displaying video, audio output, etc.) for the duration of a timeout period, even if some applications are still registered to use the hardware component. In one implementation, for example, supervisory control of power management can set (e.g., on a per component basis, per application basis, per OS module basis) variable timeout periods, designating a minimum period of time a system hardware component is inactive before triggering a transition of that system hardware component to a low-power state. 
In another implementation, supervisory control of power management can set (e.g., on a per component basis, per application basis, per OS module basis) variable wakeup periods, designating a maximum period of time a system waits before transitioning the system hardware component into a high-power state (or checking to determine whether such a transition should be triggered). This disclosure describes systems and methods for effectively managing power for various system hardware components, including networking components, and for controlling the power state of network adapters. A result of the described technology is a reduction in power consumption, achieved by automatically lowering system hardware component power states during those periods when the system hardware component is not needed (e.g., transitioning a network adapter to a low-power or lower-power state when network traffic is not actually flowing), even if only for a very brief period of time, without relying on applications executing on the device to release their use of the hardware component by notifying the operating system. Various examples are described below with respect to the figures. These examples are merely exemplary, and implementations of the present disclosure are not limited to the examples described below and illustrated in the figures. FIG. 1 illustrates an example environment 100 for supervisory control of power management of a device 102 with a power usage plot 104. The device 102 is an electronic device with an OS and one or more system hardware components (not shown) that are capable of transitioning between one or more high-power modes and low-power modes. In at least some implementations, the one or more system hardware components of device 102 are capable of multiple power modes, including intermediate power modes between a high-power mode and a low-power mode. 
For example, a system hardware component may have one or more on-board processors that may be turned off in a low-power mode, limited to a speed below full capacity in an intermediate power mode, and allowed to run at full speed in a high-power mode. As used herein, “power mode” and “power state” refer to the energy consumption of a system hardware component relative to the maximum amount of power the system hardware component is capable of consuming. Power states may be referred to as “higher-power” and/or “lower-power” states to indicate that a power state is higher or lower than another power state without necessarily implying that the power state is the highest or lowest power state that the system hardware component is capable of. Other implementations may include additional power modes to provide a finer control over supervisory power management. Device 102 includes a power management supervisory controller and a power management arbitrator. In an implementation, the power management supervisory controller is associated with a higher level of the OS on the device 102 and the power management arbitrator is associated with a lower level of the OS on the device 102. The power management arbitrator may transition system hardware components between various power modes by controlling a hardware sublayer, a hardware driver, and/or a component of the data link layer of the OS associated with the hardware system device. In an implementation, the power management arbitrator need not communicate with the power management supervisory controller or other components of the OS on electronic device 102 to effect power state transitions of system hardware components. The OS of device 102 manages the applications in the system that utilize system services, including, for example, network services, video display, audio output, etc. 
Network adapters will be a focus of the description herein, but the supervisory power management features described may apply to any other system hardware components as well. Hence, the OS is aware whether the network will be needed by monitoring applications and OS modules that depend on the network adapter. This management represents a supervisory state that the OS of device 102 can apply to individual network adapters, based on the need for each rather than relying on registration of applications for the network adapters. As applications begin and end execution, the OS tracks the hardware component's overall network need, which network devices (or media) are needed, and the load status of the network devices. The OS of device 102 informs a component such as a low-level OS arbitrator, an NDIS API, and/or a miniport driver whether the network is needed by the applications. If the network is not needed, the network adapter is transitioned to low-power. When the OS of device 102 indicates that the network is needed by applications or services, the OS still does not know the actual network demand. The system applications and operating modules calling for network usage therefore benefit from a separation of higher-level components of the OS of device 102 from the low-level OS components such as the network adapter, arbitrator, etc. Accordingly, network adapter power management can be controlled at the lowest possible layer in the network stack. If the upper-level OS indicates that network services are not needed, the network adapter can be transitioned to low-power. However, when the OS indicates that applications or services do require the network, it is still possible, or even likely, that there are periods where the network is not actually needed because applications often register with the OS for control of a system hardware component even if it is not needed for periods of time by the application. 
During these periods of network inactivity, the network adapter can be transitioned to low-power. Usually, these periods of inactivity are quite short, often several such periods per second. The sum of these inactive periods is typically larger than the periods of actual network use. Transitioning to low-power for these periods saves a significant portion of power the network adapter would otherwise be using. Decoupling the management of network power state and actual network adapter power state is controllable on both a system-wide and a per-device basis. In devices that respond poorly to idle detection power transitions of system hardware components, a system-wide override will mitigate poor idle detections. Similarly, individual device manufacturers may deem it risky to allow the duty cycle of power transitions to negatively impact their devices. Other reasons also exist for not wanting a particular device to respond to idle detection power transitions. Hence, a mechanism exists for device vendors to override idle detection operating against their devices. Determining whether a system hardware component of device 102 is idle may be done according to one or more algorithms that can be used separately or in various combinations. In one implementation, a fixed or variable timeout period may be used to determine whether a system hardware component is idle. In the example of a network device, the trigger to transition to a low-power state is when the network is idle for a timeout period. Typically, the length of the timeout period is dependent on the latency of the network adapter startup and the power consumed during transitions versus steady state operation. 
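The timeout-based idle detection described above can be sketched as follows. This is a minimal illustration only, not the patented implementation; the class name, state strings, and polling structure are all invented for the example.

```python
import time

class IdleTimeoutMonitor:
    """Hypothetical sketch of timeout-based idle detection: transition a
    component to a low-power state once no activity has been observed for
    at least the timeout period."""

    def __init__(self, timeout_period_s: float):
        self.timeout_period_s = timeout_period_s
        self.last_activity = time.monotonic()
        self.state = "high-power"

    def record_activity(self) -> None:
        # Any observed use (e.g., packets sent or received) resets the
        # idle clock and restores the high-power state.
        self.last_activity = time.monotonic()
        self.state = "high-power"

    def poll(self) -> str:
        # Trigger the low-power transition once the component has been
        # idle for at least the timeout period.
        idle_for = time.monotonic() - self.last_activity
        if self.state == "high-power" and idle_for >= self.timeout_period_s:
            self.state = "low-power"
        return self.state
```

In practice the timeout would be armed by a low-level arbitrator or hardware interface controller rather than polled from application code, but the triggering logic is the same.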
For example, a system hardware component with a shorter startup time that consumes relatively little power when transitioning between a low-power and a high-power state compared to remaining in a high-power state may weigh in favor of a shorter timeout period to avoid leaving the system hardware component in a high-power state when it is not needed. As used herein, an inactivity condition may refer to a level of activity of a hardware component that is zero, near-zero, or below a threshold level of activity. For example, a network adapter may satisfy an inactivity condition if it transmits or receives no packets to or from a host during a time period even if the network adapter is performing other functions such as running a timeout and/or wakeup timer, “listening” for certain types of packets, and/or communicating with a host. In some implementations, a network adapter may send or receive packets on behalf of a host during a timeout period and still satisfy the inactivity condition if the number and/or type of packets are below a threshold value. As used herein, a transition condition may depend on a timeout period and the activity of a hardware component during that period. For example, a transition condition may be satisfied if an inactivity condition is satisfied during a timeout period. In some implementations, a transition condition may be satisfied if a hardware component satisfies an inactivity condition during a timeout period and there are no applications that have indicated a need to use the hardware component in a time period after the timeout period. 
In another implementation, a transition condition may be satisfied even if a hardware component does not satisfy an inactivity condition during a timeout period, where user preferences indicate an aggressive power saving preference; in that case, the hardware component is deemed to have satisfied the transition condition if the hardware component's activity during the timeout period is below a predetermined level or if the hardware component's activity is of a certain type during the timeout period. In another implementation, a transition condition is satisfied when there is no monitored activity information during the timeout period and the hardware component is idle at the end of the timeout period. In another implementation, a transition condition is satisfied when the hardware component has no more than a maximum amount of activity during the timeout period. Other elements of the transition condition may also be used. In another implementation, a fixed or variable wakeup period may be used to determine whether a system hardware component of device 102 that has been in a low-power state may be transitioned to a high-power state or to re-check whether operating conditions have changed sufficiently to justify a transition to a high-power state. The length of the wakeup period may be dependent on the latency of the network adapter startup and the power consumed during transition of the system hardware component compared to steady state operation. For example, a system hardware component with a longer startup time that consumes significantly more power when transitioning between a low-power and high-power state compared to remaining in a high-power state may weigh in favor of a longer wakeup period to avoid excessively waking the device. 
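The transition-condition variants above can be combined into a single predicate, sketched below under stated assumptions: the function names, the use of a packet count as the activity measure, and the numeric thresholds are all illustrative, not taken from the patent.

```python
def inactivity_satisfied(packet_count: int, packet_threshold: int = 0) -> bool:
    """Hypothetical inactivity condition: activity at or below a threshold
    (zero by default) counts as inactive."""
    return packet_count <= packet_threshold

def transition_condition(packets_during_timeout: int,
                         idle_at_end: bool,
                         aggressive_power_saving: bool = False,
                         aggressive_threshold: int = 5) -> bool:
    """Sketch of the transition-condition variants described in the text:
    satisfied when there is no activity during the timeout period and the
    component is idle at the end of it, or when an aggressive power saving
    preference relaxes the activity bound (thresholds are assumptions)."""
    if inactivity_satisfied(packets_during_timeout) and idle_at_end:
        return True
    if aggressive_power_saving and packets_during_timeout <= aggressive_threshold:
        return True
    return False
```

A real arbitrator would also weigh the type of activity and whether any application has signaled an upcoming need for the component, as the surrounding text notes.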
In another implementation, the length of the timeout period to trigger the transition of a system hardware component of device 102 (e.g., a network adapter) to a low-power state when the network is idle and/or the length of the wakeup period to trigger the transition of the system hardware component from a low-power state to a high-power state may be determined by the class/type of applications running on the device. For example, video streaming applications that burst short portions of network traffic may be optimized for different timeout periods and wakeup periods than audio applications that download one song at a time and then remain idle until the next song downloads. In at least one implementation, a low-level arbitrator may determine the relevant time periods without resort to higher levels of the operating system. In another implementation, the length of the timeout period to trigger the transition of a system hardware component of device 102 (e.g., a network adapter) to a low-power state when the network is idle and/or the length of the wakeup period to trigger the transition of the system hardware component from low-power to a high-power state may be determined based on the durations of recent periods, in the case of a network adapter, of network transmission compared to periods of network idle. If a set of recent periods of network transmission are cached, then a weighted algorithm or a decaying value algorithm may be used to determine the length of upcoming expected periods of network transmission for which the system hardware component (e.g., the network adapter) should be in a high-power state. 
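One possible form of the decaying value algorithm mentioned above is an exponentially weighted average over the cached transmission-period durations, where more recent periods weigh more heavily. The sketch below is an assumption about what such an algorithm could look like; the decay factor and data layout are not specified by the patent.

```python
def decayed_estimate(durations_s: list[float], decay: float = 0.5) -> float:
    """Hypothetical decaying-value estimate of the length of upcoming
    transmission periods. `durations_s` holds recent transmission-period
    durations, oldest first; newer samples receive larger weights."""
    weight, total, norm = 1.0, 0.0, 0.0
    for duration in reversed(durations_s):  # newest sample last in the list
        total += weight * duration
        norm += weight
        weight *= decay  # each older sample counts half as much (assumed)
    return total / norm if norm else 0.0
```

The resulting estimate could then seed the timeout or wakeup period: a component expected to transmit in long bursts would be given a longer high-power window than one with brief, sporadic bursts.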
In another implementation, the length of the timeout period to trigger the transition of a system hardware component (e.g., a network adapter) of device 102 to a low-power state when the network is idle and/or the length of the wakeup period to trigger the transition of the system hardware component from low-power to a high-power state may be determined based on an available battery state. As such, the length of the timeout and/or wakeup periods may become more aggressive as the remaining battery power declines. The aggressiveness of the lengths of the timeout period and wakeup period is limited by the baseline power consumed by transitions. In another implementation, the length of the timeout period to trigger the transition of a system hardware component (e.g., a network adapter) of device 102 to a low-power state when the network is idle and/or the length of the wakeup period to trigger the transition of the system hardware component from low-power to a high-power state may be determined based on feedback from a system hardware component. In the case of a network adapter, the responsiveness of the network adapter to transitions between low-power and high-power states and the power consumed by the network adapter in transitioning between those states may be communicated through the network driver to the OS to determine an optimal length of time for the timeout period and/or the wakeup period. In another implementation, an OS on the electronic device may measure other characteristics of the hardware component to determine an optimal length of time for the timeout period and/or wakeup period, such as, for example, measuring the speed of a network adapter. 
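The battery-dependent scaling described above, with its floor set by the baseline cost of transitions, might be sketched as follows. The linear scaling and the numeric floor are assumptions chosen for illustration.

```python
def scaled_timeout(base_timeout_s: float,
                   battery_fraction: float,
                   min_timeout_s: float = 0.05) -> float:
    """Hypothetical battery-aware timeout: shrink the timeout (more
    aggressive power saving) as the battery drains, floored by a minimum
    that reflects the baseline power consumed by state transitions."""
    candidate = base_timeout_s * battery_fraction  # linear scaling (assumed)
    return max(candidate, min_timeout_s)
```

With a 1-second base timeout, a half-charged battery yields a 0.5-second timeout, while a nearly empty battery bottoms out at the 50-millisecond floor rather than thrashing the component with transitions that cost more power than they save.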
In another implementation, the length of the timeout period to trigger the transition of a system hardware component (e.g., a network adapter) of device 102 to a low-power state when the network is idle and/or the length of the wakeup period to trigger the transition of the system hardware component from low-power to a high-power state may be determined based on user input. An OS may allow the user to input information to the OS regarding the length of the timeout period to be used. If the available battery power of the device reaches a threshold, the OS may present a user interface to the user to collect a “hint” from the user regarding the user's power management preferences. For example, if a user wishes for a system hardware component to have good performance, the user may select a less aggressive timeout and/or wakeup period to preserve performance at the cost of remaining battery life. On the other hand, a user may select a more aggressive timeout and/or wakeup period if the user wishes to maximize battery life at the cost of performance due to more frequent transitions between high and low-power states of a system hardware component. In another implementation, the length of the timeout period to trigger the transition of a system hardware component (e.g., a network adapter) of device 102 to a low-power state when the network is idle and/or the length of the wakeup period to trigger the transition of the system hardware component from low-power to a high-power state may be determined based on the input of an arbitrator. The power management arbitrator is a low-level OS module that determines the contribution and effect of the various available algorithms for determining the length of the timeout and/or wakeup periods. 
In another implementation, the length of the timeout period to trigger the transition of a system hardware component (e.g., a network adapter) of device 102 to a low-power state when the network is idle and/or the length of the wakeup period to trigger the transition of the system hardware component from low-power to a high-power state may be determined based on application type usage and/or the state of the application. If an application has registered with the OS to use the system hardware component, then the timeout period may be set based on the expected needs of the application with respect to the system hardware component. For example, if a multi-player online game application has registered with the OS to use the network adapter, then a longer timeout period may be set because network latency in an online gaming application is likely to degrade the user experience due to lag between the user and other players in the game. If the state of the online gaming application changes, however, then a different timeout period may be more appropriate. For example, if the user becomes inactive in the game and appears to be taking a break from play, then the lag issue is not as important as when the player was actively engaged, and a shorter timeout period may be acceptable. Other examples of application type and application state affecting the timeout and/or wakeup period include a video application that is displaying video at a requested framerate to the user. If the video application is playing the video, it may be assumed that the user is watching the video and the video adapter should not be transitioned to a low-power state. On the other hand, if the user pauses the video and/or selects the windows of other applications executing on the device (e.g., the video application window is partially or completely obscured by other windows), then a shorter timeout period may be appropriate. 
In an implementation, the video adapter has multiple intermediate power modes such that the video adapter may be transitioned into an intermediate power state to support video applications that do not demand a high frame rate. Implementations of the present application include a network power module, integrated within an operating system of device 102, which controls the use of networking hardware during various power states of a device. Applications register with the network power module. The network power module accesses a policy store to determine a priority of the application. The application's priority may be static or configurable. The network power module also determines a power state of the device, which may be defined in various ways without departing from the scope of implementations. The network power module—based on power state, application priority, or other factors such as remaining battery power, application state, device user interface (UI) state—enables the network communication hardware to be powered up to receive and transmit data. In some implementations, the network power module sets a timer in certain power states—such as in high-power states. The timer establishes a timeout period, which may be adaptable, during which the device and/or the network communication hardware remains powered up and traffic may be sent. After the timeout timer expires, the device and/or the network communication hardware revert into a low-power state. The arrival of additional high-priority traffic may cause a currently running timer to increase remaining time or to be reset, to give more time for the traffic to be transmitted. Low-priority traffic may be transmitted or received during the timer period. Low-priority traffic may be queued for transmission until the communication hardware is activated. 
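The timer and traffic-priority behavior of the network power module described above can be simulated in a few lines; the class, the queue threshold, and the integer clock are illustrative assumptions, not a real OS interface.

```python
class NetworkPowerModule:
    """Toy simulation of the timeout timer and low-priority queueing."""

    def __init__(self, timeout: int, low_prio_threshold: int):
        self.timeout = timeout
        self.low_prio_threshold = low_prio_threshold  # assumed queue limit
        self.state = "high"
        self.deadline = timeout          # expiry time on a simulated clock
        self.low_prio_queue = []

    def on_high_priority_traffic(self, now: int) -> None:
        # High-priority traffic powers the hardware up and resets the
        # timer, giving more time for the traffic to be transmitted.
        self.state = "high"
        self.deadline = now + self.timeout

    def on_low_priority_traffic(self, now: int, pkt: bytes) -> None:
        if self.state == "high":
            return  # transmitted immediately during the timer period
        self.low_prio_queue.append(pkt)  # queued until hardware activates
        if len(self.low_prio_queue) >= self.low_prio_threshold:
            self.on_high_priority_traffic(now)
            self.low_prio_queue.clear()

    def tick(self, now: int) -> None:
        # When the timeout timer expires, revert to the low-power state.
        if self.state == "high" and now >= self.deadline:
            self.state = "low"
```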
In some implementations, lower-priority traffic may cause the network power module to activate the network communications hardware if a threshold amount of lower-priority traffic queues for transmission. The timeout period may be adjustable based on application operational requirements, upper-level OS operational requirements, hardware component activity, and other factors. In some implementations, the network power module sets a timer in certain power states—such as in low-power states. The timer establishes a wakeup period, which may be adaptable, during which the device and/or the network communication hardware remains in a low-power state. After the wakeup timer expires, the device and/or the network communication hardware revert into a high-power state or re-evaluate operational information and hardware device activity to determine whether to transition to a high-power state, to adjust a timeout period, or to adjust the wakeup period. Detected hardware device activity or high-priority demand (e.g., by an application or OS module) may trigger a transition to a high-power state independent of the state of the wakeup period timer. The wakeup period may be adjustable based on application operational requirements, upper-level OS operational requirements, hardware component activity, and other factors. Plot 104 illustrates an overall power usage level of the device 102 during various phases of operation. The y-axis of plot 104 indicates overall power usage level of the device 102 and the x-axis of plot 104 indicates time. Point 106 illustrates a period of time during which the device 102 is consuming a higher level of power. Point 108 illustrates a period of time during which the device 102 is consuming a low level of power due to the transitioning of one or more system hardware components to a low-power mode, such as, for example, after the expiration of a timeout period. 
Point 110 illustrates a return to a higher level of power consumption by device 102, such as, for example, at the end of a wakeup period of one or more system hardware components. Point 112 illustrates a medium level of power consumption by device 102, such as when one or more system hardware components have been transitioned to a low-power mode, but not as many system hardware components as were transitioned at point 108. Alternatively, point 112 may illustrate a power consumption level represented by one or more system hardware components in an intermediate power state that is between a high-power state and a low-power state. Point 114 illustrates a return to a lower level of power consumption by device 102, such as when a user has indicated a preference for longer battery time over performance and an arbitrator has transitioned one or more system hardware components into low-power states. Point 116 illustrates an increase in power consumption by device 102, such as when a wakeup period has concluded and one or more system hardware components transition into higher-power modes. FIG. 2 illustrates an example environment 200 for supervisory control of power management on a device 202. The device 202 includes an OS 203 and a variety of applications 204 executing thereon. The applications 204 may include components of the OS as well as other executable applications. Some of the applications 204 seek to use resources provided by a hardware component 214 that is part of the device 202. Requests from the applications 204 to use hardware component 214 are managed by a power management supervisory controller 206. In an implementation, the power management supervisory controller is a part of the OS. If there are no pending requests to use hardware component 214 from the applications 204, then the power management supervisory controller 206 can direct the hardware component 214 to transition to a low-power state. 
A command to transition the hardware component 214 may be communicated from the power management supervisory controller 206 to the hardware component via one or more intermediate levels. In an implementation, the intermediate levels include a power management arbitrator 208, a hardware sublayer 210, and a hardware driver 212. The hardware sublayer and hardware driver may be referred to herein as the hardware interface controller 211, which may include a combination of hardware and software components. If there are pending requests from the applications 204 to use the hardware component 214, then the power management supervisory controller may not be aware of the true demand for the hardware component 214 because applications 204 may register with the power management supervisory controller 206 to use the hardware component 214 without necessarily using the hardware component 214 or may only use the hardware component sporadically or with a usage pattern that leaves the hardware component 214 idle for periods of time. Accordingly, the power management supervisory controller 206 may delegate power management decisions to a lower level of the OS (e.g., the power management arbitrator 208). The power management arbitrator 208 monitors the load on the hardware component 214 and the applications 204 that are using the hardware component 214. The power management arbitrator 208 may independently transition the hardware component 214 to a low-power state (or an intermediate lower power state) for periods of time during which the hardware component 214 would not have been transitioned to a low-power state if the transitioning decision had been based on registrations of applications 204 to use the hardware component 214. In at least one implementation, the power management supervisory controller is in communication with a data store 207. 
The data store 207 contains a variety of information regarding timeout periods that are available for use by the power management supervisory controller 206, which may be implemented by the power management supervisory controller 206 directly or delegated to the power management arbitrator 208 or other lower levels of the OS 203. The data store 207 may include tables of timeout periods with timeout periods associated with particular applications 204. In one implementation, the power management supervisory controller 206 selects a timeout period from a plurality of timeout periods in data store 207 to pass to the power management arbitrator 208. For example, the power management supervisory controller 206 may select the longest timeout period from a set of timeout periods corresponding to the applications 204 running on OS 203 as the timeout period the power management arbitrator 208 should use for a hardware component 214. In at least one implementation, the power management supervisory controller 206 queries one or more applications 204 for preferred timeout periods associated with the hardware component 214 and stores the preferred timeout periods associated with the hardware component 214 in the data store 207. In an implementation, the power management arbitrator 208 sets a timeout period for the hardware component 214. If there is no activity on the hardware component 214 during the timeout period, then the power management arbitrator 208 causes the hardware component 214 to transition to a low-power (or lower power) mode. The length of the timeout period for hardware component 214 may be chosen according to the algorithms disclosed herein. Control of the power state of the hardware component 214 and setting the timeout period of the hardware component 214 (and other hardware components that are part of the device 202) may be on a per device basis, a per application or application type basis, on a battery status basis, on a user preference basis, on a device feedback basis, on a recent history basis, etc. The power management supervisory controller 206 may further set a wakeup period for the hardware component 214 according to the algorithms disclosed herein. Control of the power state of the hardware component 214 and setting the wakeup period of the hardware component 214 (and other hardware components that are part of the device 202) may also be on a per device basis, a per application or application type basis, on a battery status basis, on a user preference basis, on a device feedback basis, on a recent history basis, etc. 
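Selecting the longest timeout period from the per-application table in data store 207 might be sketched as below; the table layout and the default value are assumptions for illustration.

```python
def arbitrator_timeout(preferred_timeouts: dict, running_apps: list,
                       default_ms: int = 5_000) -> int:
    """Pick the longest preferred timeout among the running applications,
    so that no application's expected use of the hardware component is
    cut short by another application's shorter preference."""
    candidates = [preferred_timeouts[app]
                  for app in running_apps if app in preferred_timeouts]
    # Fall back to an assumed default when no running app has a preference.
    return max(candidates, default=default_ms)
```

For example, with per-application preferences of 2,000 ms and 30,000 ms, the arbitrator would be handed 30,000 ms.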
In at least one implementation, the power management arbitrator 208 causes the hardware component 214 to transition between power modes by communicating with a hardware sublayer 210 and/or a hardware driver 212. In the case where the hardware component 214 is a network adapter, the hardware sublayer 210 may be an NDIS API and the hardware driver 212 may be a miniport driver. Other hardware sublayers 210 and hardware drivers 212 may be appropriate for other types of hardware components 214. In an implementation, the power management arbitrator may set a timer in one of the lower levels (e.g., the hardware sublayer 210 and/or the hardware driver 212) to manage transitioning the hardware component 214 to a low-power mode such that the hardware sublayer 210 and/or the hardware driver 212 will transition the hardware component 214 to the low-power mode without action by the power management arbitrator 208, the power management supervisory controller 206, or other parts of the OS 203. In at least one implementation, the power management supervisory controller 206 communicates with a policy store that stores priority information for one or more applications executing on OS 203. The priority information in the policy store may be static or configurable. Where configurable, the priority of an application may be configured by a user (such as during or after install), by a group policy, or by another mechanism. Some applications may have a statically assigned priority. Based on the application priority in the policy store and the power state of the device 202, the power management supervisory controller 206 may set timeout and wakeup periods, responsive to the requests or attempts from the applications executing on OS 203 to use the hardware component 214. FIG. 3 is a plot of example power usage of a hardware component 316 and a signal diagram illustrating interaction between the components of a device under a supervisory control of power management environment 300. 
The plot includes a line 302 indicating whether the hardware component 316 is in a high-power mode or a low-power mode. In other implementations, the hardware component 316 has more than two power modes. The two power modes of the hardware component 316 used herein are exemplary. The x-axis of the plot indicates time, and the time markers T0-T4 are on the same scale as the communications shown in the signal diagram below the plot. At a time T0, an application 310 executing on the computing device communicates a request to use the hardware component 316 to the OS of the device. The request to use hardware component 316 is received in communication 318 by the power management supervisory controller 312. Shortly after time T0, the power management supervisory controller 312 informs the arbitrator 314 of information regarding the request from application 310 to use hardware component 316 in communication 320. Communication 320 may include a variety of information regarding application 310 and the request to use hardware component 316 such as application 310's type, information regarding power management user preferences, historical power management information regarding the device and/or application 310, current battery status, etc. Communication 320 may also include a timeout period for hardware component 316. In another implementation, communication 320 does not include a timeout period for hardware component 316. At time T1, the timeout period 304 for hardware component 316 begins. In an implementation, during the timeout period 304, the arbitrator 314 monitors usage of the hardware component 316 at operation 322. If the hardware component 316 is not used during the timeout period 304, then the arbitrator 314 transitions the hardware component 316 into a low-power mode at time T2 via communication 324. At a time shortly after time T2, indicated by arrow 305, the hardware component 316 is in a low-power mode as indicated by line 302. 
Transitioning the hardware component 316 to a low-power mode at time T2 is an improvement over other methods of power management that rely on application 310 to de-register its use of hardware component 316. The additional savings of supervisory power management in the form of an earlier transition of hardware component 316 is shown on the plot as area 306. Dashed line 308 indicates what the power status of hardware component 316 would have been if the device had relied on the application 310 to announce its relinquishment of the hardware component 316 instead of using the methods of supervisory power management disclosed herein. In an implementation, the hardware component 316 may “listen” for an event that can trigger the hardware component 316 to transition to a different power mode. For example, if the hardware component 316 is a network adapter, the network adapter may scan only the headers of network packets for an indication that the hardware component 316 should wake up even though the hardware component 316 is in a low-power mode. Such a packet may be referred to as a “wake packet” received in communication 326. Upon receipt of the wake packet in communication 326, the hardware component 316 informs the arbitrator 314 in communication 328 at time T3. In an implementation, the arbitrator may itself transition the hardware component 316 into a higher-power state in communication 330 at time T4 without relaying the hardware component 316's receipt of the wake packet to higher levels of the OS, e.g., the power management supervisory controller 312, the application 310, or other parts of the OS. Upon receipt of communication 330 from the arbitrator 314 at time T4, the hardware component 316 transitions to a higher-power mode and communicates network traffic to the application 310 in communication 332. 
FIG. 4 is a plot of example power usage of a hardware component 416 and a signal diagram illustrating interaction between the components of a device under a supervisory control of power management environment 400. The plot includes a line 402 indicating whether the hardware component 416 is in a high-power mode or a low-power mode. In other implementations, the hardware component 416 has more than two power modes. The two power modes of the hardware component 416 used herein are exemplary. The x-axis of the plot indicates time, and the time markers T0-T3 are on the same scale as the communications shown in the signal diagram below the plot. At a time T0, an application 410 executing on the computing device communicates a request to use the hardware component 416 to the OS of the device. The request to use hardware component 416 is received in communication 418 by the power management supervisory controller 412. Shortly after time T0, the power management supervisory controller 412 informs the arbitrator 414 of information regarding the request from application 410 to use the hardware component 416 in communication 420. Communication 420 may include a variety of information regarding application 410 and the request to use hardware component 416 such as application 410's type, information regarding power management user preferences, historical power management information regarding the device and/or application 410, current battery status, etc. Communication 420 may also include a timeout period for hardware component 416. In another implementation, communication 420 does not include a timeout period for hardware component 416. At time T1, the timeout period 404 for hardware component 416 begins. In an implementation, during the timeout period 404, the arbitrator 414 monitors usage of the hardware component 416 at operation 422. 
If the hardware component 416 is not used during the timeout period 404, then the arbitrator 414 transitions the hardware component 416 into a low-power mode at time T2 via communication 424. At a time shortly after time T2, indicated by arrow 405, the hardware component 416 is in a low-power mode as indicated by line 402. After the hardware component 416 has been transitioned to a low-power mode at time T2, the arbitrator 414 sets a wakeup period 406 for the hardware component 416, represented by area 406 on the plot. The wakeup period 406 ends at time T3 and arbitrator 414 sends a wakeup signal to hardware component 416 in communication 426. In one implementation, communication 426 transitions the hardware component 416 from the low-power mode to a high-power mode as indicated by line 402. In another implementation, communication 426 simply transitions hardware component 416 into an intermediate power mode (not shown on the plot) wherein the hardware component 416 checks to see whether there is a pending request from application 410, any other applications, the OS, or other components of the computing device to use the hardware component 416. If there is such a request, then hardware component 416 may remain in the high-power mode as shown by line 402 in the plot. If there is not such a request, then hardware component 416 may return to the low-power mode. If the hardware component 416 remains in the high-power mode, it may communicate with application 410 at communication 428. FIG. 5 is a plot of example power usage of a hardware component 516 and a signal diagram illustrating interaction between the components of a device under a supervisory control of power management environment 500. The plot includes a line 502 indicating whether the hardware component 516 is in a high-power mode or a low-power mode. In other implementations, the hardware component 516 has more than two power modes. The two power modes of the hardware component 516 used herein are exemplary. 
The x-axis of the plot indicates time, and the time markers T0-T4 are on the same scale as the communications shown in the signal diagram below the plot. At a time T0, an application 510 executing on the computing device communicates a request to use the hardware component 516 to the OS of the device. The request to use hardware component 516 is received in communication 518 by the power management supervisory controller 512. Shortly after time T0, the power management supervisory controller 512 informs the arbitrator 514 of information regarding the request from application 510 to use hardware component 516 in communication 520. Communication 520 may include a variety of information regarding application 510 and the request to use hardware component 516 such as application 510's type, information regarding power management user preferences, historical power management information regarding the device and/or application 510, current battery status, etc. Communication 520 may also include a timeout period for hardware component 516. In another implementation, communication 520 does not include a timeout period for hardware component 516. At time T1, a first timeout period 504 for hardware component 516 begins. In an implementation, during the first timeout period 504, the arbitrator 514 monitors usage of the hardware component 516 at operation 522. During operation 522, the hardware component 516 becomes active at time T2 and notifies the arbitrator 514 of the activity in communication 524. Receipt of communication 524 triggers the arbitrator 514 to reset the timeout period at time T2 to a second timeout period 506. During the second timeout period 506, the arbitrator 514 monitors usage of the hardware component 516 at operation 526. If the hardware component 516 is not used during the timeout period 506, then the arbitrator 514 transitions the hardware component 516 into a low-power mode at time T3 via communication 528. 
At a time shortly after time T3, as indicated by arrow 505, the hardware component 516 is in a low-power mode as indicated by line 502. Transitioning the hardware component 516 to a low-power mode at time T3 is an improvement over other methods of power management that rely on application 510 to de-register its use of hardware component 516. The additional savings of supervisory power management in the form of an earlier transition of hardware component 516 is shown on the plot as area 507. Dashed line 508 indicates what the power status of hardware component 516 would have been if the device had relied on the application 510 to announce its relinquishment of the hardware component 516 instead of using the methods of supervisory power management disclosed herein. In an implementation, the hardware component 516 may “listen” for an event that can trigger the hardware component 516 to transition to a different power mode. For example, if the hardware component 516 is a network adapter, the network adapter may scan only the headers of network packets for an indication that the hardware component 516 should wake up even though the hardware component 516 is in a low-power mode. Such a packet may be referred to as a “wake packet” received in communication 530. Upon receipt of the wake packet in communication 530, the hardware component 516 informs the arbitrator 514 in communication 532 at time T4. In an implementation, the arbitrator may itself transition the hardware component 516 into a higher-power state in communication 533 at time T4 without relaying the hardware component 516's receipt of the wake packet to higher levels of the OS, e.g., the power management supervisory controller 512, the application 510, or other parts of the OS. Upon receipt of communication 533 from the arbitrator 514 at time T4, the hardware component 516 transitions to a higher-power mode and communicates network traffic to the application 510 in communication 534. 
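The wake-packet path described for FIG. 3 and FIG. 5, in which the arbitrator transitions the adapter without involving higher OS levels, can be sketched as follows; the header flag and class interfaces are purely illustrative assumptions, not a real driver API.

```python
WAKE_FLAG = b"\x01"  # assumed wake indication in a packet header

class Arbitrator:
    """Handles wake packets locally, without relaying the event to the
    power management supervisory controller or the application."""

    def __init__(self, component):
        self.component = component

    def on_wake_packet(self) -> None:
        # Transition the component itself, as in communications 532/533.
        self.component.power_mode = "high"

class NetworkAdapter:
    def __init__(self):
        self.power_mode = "low"
        self.arbitrator = Arbitrator(self)

    def receive(self, packet: bytes):
        if self.power_mode == "low":
            # In the low-power mode, scan only the header for a wake
            # indication; the payload is not processed.
            if packet[:1] == WAKE_FLAG:
                self.arbitrator.on_wake_packet()
            return None
        return packet  # delivered onward in the high-power mode
```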
In at least one implementation, the first timeout period 504 may be superseded by the second timeout period 506 when a component of the system receives a signal indicating that the hardware component 516 is active, would not satisfy an inactivity condition, and/or would not satisfy a transition condition at time T2. As such, the first timeout period 504 is “cut off” by the second timeout period 506 once it is known that the hardware component 516 will not transition to the low-power state at the end of the first timeout period 504. In another implementation, the first timeout period 504 may be allowed to run until its expiration even if it is known that the hardware component 516 did not satisfy a transition condition and/or an inactivity condition during the first timeout period 504. Instead, the first timeout period 504 is allowed to persist until it expires, and then the second timeout period 506 begins after the first timeout period 504 has completed. FIG. 6 illustrates plots of example power status of an application, hardware device, and network traffic of a device under supervisory control of power management environment 600. In a first plot 602, an application is inactive at the beginning of the plot, becomes active at time T0, and remains active for the remainder of the plot. Plot 604 indicates the power mode status of a hardware component, in this case a network adapter. The network adapter is in a low-power mode until time T0 when the application becomes active and requests use of the network adapter. Plot 606 shows network traffic over the network adapter. Network traffic is present starting at time T1 until time T2. At time T2, network traffic ceases and a timeout period begins for the network adapter. At time T3, the timeout period ends, and the network adapter transitions to a low-power state even though the application is still active and may not have de-registered its use of the network adapter. 
Later, between times T4 and T5, the network becomes active again and the network adapter transitions to a high-power state to service the network traffic. After time T5, the network traffic again ceases and a timeout period begins for the network adapter, which transitions to a low-power mode again at time T6 even though the application is still active and may not have de-registered its use of the network adapter. FIG. 7 illustrates a flow diagram 700 showing example operations for providing supervisory control of power management in a computing device. A monitoring operation 702 monitors activity information of a hardware component of an electronic device. Activity information of a hardware component may include various types of information regarding the hardware component. For example, activity information may include load information on the hardware component, a state of the hardware component, and/or a usage level of the hardware component. In an implementation, the hardware component is a network adapter and activity information regarding the network adapter includes at least transmission information, receive information, and control information suitable to change a state of the network adapter such as the network adapter's power state and/or parameters of the network adapter such as MAC address, multicast address, data rate, etc. The monitoring operation 702 may be performed by a variety of components in an electronic device. In one implementation, a high-level component of an OS such as a power management supervisory controller performs the monitoring operation 702. In another implementation, a low-level component of an OS such as a power management arbitrator performs the monitoring operation 702 without recourse to higher levels of the OS. 
In yet another implementation, other low level components of an OS perform the monitoring operation 702, such as a hardware sublayer, a data link layer component, and/or a hardware driver associated with a hardware component. The monitoring operation 702 may include monitoring activity of the hardware component (e.g., network traffic on a network adapter, frame rate on a video adapter, etc.) as well as monitoring the nature of the activity of the hardware component. For example, the monitoring operation 702 may include monitoring the type of application requesting usage of a hardware component, historical usage data regarding the usage of the hardware component, available battery status on the device, power consumption of the hardware component, feedback from the hardware component itself, user preference data regarding power management preferences, etc. The setting operation 704 sets a timeout period associated with the hardware component of the electronic device based on the activity information. The setting operation 704 may be performed by a variety of components in an electronic device. In one implementation, a high-level component of an OS such as a power management supervisory controller performs the setting operation 704. In another implementation, a low-level component of an OS such as a power management arbitrator performs the setting operation 704 without recourse to higher levels of the OS. In yet another implementation, other low level components of an OS perform the setting operation 704, such as a hardware sublayer and/or a hardware driver associated with a hardware component. The timeout period set by the setting operation 704 may be shorter if excessive timeouts would degrade the user experience of applications requesting use of the hardware device. For example, a Voice over IP (VoIP) application may miss an incoming call if the incoming call cannot get through a network adapter in a low-power mode. 
If a VoIP program has requested use of a network adapter hardware component, the timeout period may therefore be set as a relatively short period in the setting operation 704. On the other hand, if a music application has requested use of a network adapter to download songs, then a longer timeout period may be set in the setting operation 704 because the music program is unlikely to need network access after it has downloaded a song and before the next song must be downloaded.

A decision operation 706 decides whether the hardware component satisfies an inactivity condition during the timeout period. In an implementation, satisfying the inactivity condition includes receiving no requests from applications on the computing device to use the hardware component during the timeout period. In another implementation, satisfying the inactivity condition includes performing only certain tasks during the timeout period, such as executing a heartbeat function, polling other devices for availability, etc. In yet another implementation, satisfying the inactivity condition includes a level of activity of the hardware device that consumes no more than a limited amount of power. Other standards of hardware component activity or power consumption may also be used to satisfy the inactivity condition in the decision operation 706.

If the hardware component does not satisfy the inactivity condition in the decision operation 706, the method returns to operation 704 to set a new timeout period. If the hardware component does satisfy the inactivity condition in the decision operation 706, then the method 700 proceeds to transition operation 708. In at least one implementation, the decision operation 706 resets the timeout period when there is activity on the hardware component during the timeout period. Resetting the timeout period on activity causes the timeout period to expire only when there has been no activity during the entire timeout period. 
Alternatively, the decision operation 706 may let the timeout period run to expiration before checking whether there has been activity on the hardware component during the timeout period. Letting the timeout period run to expiration requires less frequent resetting of timeout periods but may cause the hardware component to enter the low-power state less frequently than the reset-on-activity approach would.

The transition operation 708 transitions a portion of the hardware component from a high-power state to a low-power state after expiration of the timeout period. In an implementation, the hardware component is capable of multiple power states (or power modes). The transition operation 708 may therefore include transitioning the hardware component from a high-power state to an intermediate power state or transitioning the hardware component from an intermediate power state to a low-power state.

The setting operation 710 sets a wakeup period associated with the hardware component of the electronic device based on the activity information. The setting operation 710 may be performed by a variety of components in an electronic device. In one implementation, a high-level component of an OS, such as a power management supervisory controller, performs the setting operation 710. In another implementation, a low-level component of an OS, such as a power management arbitrator, performs the setting operation 710 without recourse to higher levels of the OS. In yet another implementation, other low-level components of an OS perform the setting operation 710, such as a hardware sublayer, a component of the data link layer, and/or a hardware driver associated with a hardware component.

The wakeup period set by the setting operation 710 may be shorter if an excessive stay in a low-power mode by the hardware component would degrade the user experience of applications requesting use of the hardware device. 
For example, a Voice over IP (VoIP) application may miss an incoming call if the incoming call cannot get through a network adapter in a low-power mode. If a VoIP program has requested use of a network adapter hardware component, the wakeup period may therefore be set as a relatively short period in the setting operation 710. On the other hand, if a music application has requested use of a network adapter to download songs, then a longer wakeup period may be set in the setting operation 710. The music program is unlikely to need network access after it has downloaded a song and before the next song must be downloaded, and a low-power state of the network adapter at the moment the music application first requests it is unlikely to degrade the user experience, because the music program does not need continuous network access to play continuous music to the user.

A decision operation 712 determines whether the wakeup period has ended. If the wakeup period has ended at the decision operation 712, then the method proceeds to the transitioning operation 714. The transitioning operation 714 transitions the portion of the hardware component from a low-power state to a high-power state. The transitioning operation 714 may be performed by a variety of components in an electronic device. In one implementation, a high-level component of an OS, such as a power management supervisory controller, performs the transitioning operation 714. In another implementation, a low-level component of an OS, such as a power management arbitrator, performs the transitioning operation 714 without recourse to higher levels of the OS. In yet another implementation, other low-level components of an OS perform the transitioning operation 714, such as a hardware sublayer, a component of the data link layer, and/or a hardware driver associated with a hardware component. 
In at least one implementation, the transition operation 714 does not completely transition the portion of the hardware component from a low-power state to a high-power state. Instead, the transition operation 714 may merely check whether there are any demands from the applications or OS to use the hardware component. If there are no demands to use the hardware component, then the transition operation 714 may allow the hardware component to remain in the low-power state.

The order of the steps in method 700 is not limited to the order presented herein. For example, setting operation 704 may set the timeout period independently of the wakeup period in setting operation 710. In another implementation, setting operations 704 and 710 may operate simultaneously.

An example supervisory control method for power management of an electronic device includes monitoring activity information of a hardware component of the electronic device; setting a timeout period associated with the hardware component of the electronic device based on the activity information, the timeout period defining a minimum amount of time before a power state of the hardware component may be transitioned from a higher-power state to a lower-power state; and transitioning at least a portion of the hardware component from the higher-power state to the lower-power state after expiration of the timeout period if the hardware component satisfies a transition condition, the transition condition being dependent at least in part on the activity information and the timeout period.

Another example method of any preceding method includes maintaining the hardware component in the higher-power state after expiration of the timeout period if the hardware component does not satisfy the transition condition during the timeout period. 
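The timeout-side behavior of method 700 (setting operation 704 through transition operation 708), including the reset-on-activity variant of decision operation 706, can be sketched in a few lines of code. This is a minimal illustrative sketch, not an implementation from the patent; the class and function names are invented, and the clock is passed in explicitly so the behavior is deterministic.

```python
class InactivityTimer:
    """Sketch of the reset-on-activity timeout described for decision
    operation 706: the timeout expires only after a full timeout period
    passes with no activity on the hardware component.
    (Illustrative only; the clock is injected for determinism.)"""

    def __init__(self, timeout: float, now: float = 0.0):
        self.timeout = timeout
        self.deadline = now + timeout

    def record_activity(self, now: float) -> None:
        # Any activity on the hardware component restarts the quiet period.
        self.deadline = now + self.timeout

    def expired(self, now: float) -> bool:
        # True only if no activity occurred for a full timeout period.
        return now >= self.deadline


def next_state(current: str, timer: InactivityTimer, now: float) -> str:
    """Sketch of transition operation 708: power down once the inactivity
    condition has held for the whole timeout period."""
    if current == "high-power" and timer.expired(now):
        return "low-power"
    return current
```

Under this sketch, a component with a 5-second timeout that sees activity at t=3 does not power down until t=8; letting the timer run to expiration instead (the alternative the text describes) would simply check for recorded activity once at the original deadline.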
Another example method of any preceding method includes setting a wakeup period associated with the hardware component of the electronic device based on the activity information, the wakeup period defining a maximum amount of time before a power status of the hardware component may be transitioned from the lower-power state to a different power state, and transitioning the portion of the hardware component from the lower-power state to the different power state after expiration of the wakeup period if the hardware component satisfies a wakeup condition. Another example method of any preceding method includes wherein the transitioning operation includes determining whether demands on the hardware component justify a transition to the different power state. Another example method of any preceding method includes wherein the activity information includes information communicated between the hardware component and one or more applications executing on the electronic device. Another example method of any preceding method includes wherein the transitioning operation transitions a sub-component of the hardware component to a low-power state. Another example method of any preceding method includes wherein the transition condition is satisfied when there is no monitored activity information during the timeout period and the hardware component is idle at an end of the timeout period. Another example method of any preceding method includes setting an initial value for the timeout period, and adjusting the initial value of the timeout period for the hardware component of the electronic device from the initial value to a different value based on at least one of the monitored activity information and a type of application using the hardware component. 
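The last example above, adjusting an initial timeout value based on monitored activity and the type of application using the component, can be illustrated with a small sketch. The application categories, numeric values, and doubling heuristic below are hypothetical assumptions for illustration, not values from the patent text.

```python
# Hypothetical tuning table: latency-sensitive apps get short timeouts,
# bursty apps get long ones. None of these values come from the patent.
APP_TYPE_TIMEOUTS_S = {"voip": 0.1, "music": 30.0}

def adjust_timeout(initial_s: float, app_type: str, recent_activity: int) -> float:
    """Adjust an initial timeout value based on the type of application
    using the hardware component and on monitored activity: a busier
    component earns a longer quiet period before powering down."""
    timeout = APP_TYPE_TIMEOUTS_S.get(app_type, initial_s)
    if recent_activity > 0:
        timeout *= 2  # illustrative heuristic, not from the patent
    return timeout
```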
An example apparatus includes a supervisory power management control system configured to monitor activity information regarding a hardware component of the electronic device, set a timeout period associated with the hardware component, the timeout period defining a minimum amount of time before a power state of the hardware component may be transitioned from a higher-power state to a lower-power state, transition at least a portion of the hardware component to a lower-power state if the hardware component satisfies a transition condition, the transition condition being dependent at least in part on the activity information and the timeout period, and maintain the hardware component in the higher-power state after expiration of the timeout period if the hardware component does not satisfy the transition condition during the timeout period. An example apparatus of any preceding apparatus includes a power management supervisory controller configured to determine one or more second timeout periods, the one or more second timeout periods based at least in part on applications executing on the electronic device, the power management supervisory controller further configured to communicate the one or more second timeout periods to a power management arbitrator. An example apparatus of any preceding apparatus includes wherein the power management supervisory controller is further configured to set another timeout period after the expiration of the timeout period if the hardware component does not satisfy the transition condition during the timeout period. 
An example apparatus of any preceding apparatus includes wherein the power management supervisory controller is further configured to set a wakeup period associated with the hardware component of the electronic device based on the activity information, the wakeup period defining a minimum amount of time before the power status of the hardware component may be transitioned from the lower-power state to a different power state, and transition the portion of the hardware component from the lower-power state to the different power state after expiration of the wakeup period if the hardware component satisfies a wakeup condition.

An example apparatus of any preceding apparatus includes a hardware interface controller configured to set a timer at a beginning of the timeout period, the hardware interface controller further configured to transmit a signal to an arbitrator at the expiration of the timer.

An example apparatus of any preceding apparatus includes wherein the power management supervisory controller is further configured to adjust the wakeup period for the hardware component to a different value after expiration of the wakeup period based on at least one of the activity information monitored during the wakeup period and a type of application using the hardware component. 
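The wakeup-side behavior described above (a wakeup period followed by the demand check of transition operation 714, which may leave the component asleep if nothing needs it) can be sketched as follows. The function name and the representation of pending demands as a list are illustrative assumptions.

```python
def wakeup_decision(wakeup_period_ended: bool, pending_demands: list) -> str:
    """Sketch of the demand check at the end of a wakeup period: wake the
    hardware component only if an application or the OS actually needs it;
    with no pending demands, the component may remain in low power."""
    if wakeup_period_ended and pending_demands:
        return "high-power"
    return "low-power"
```

For example, a sleeping network adapter whose wakeup period expires while a VoIP call is pending would transition to high power, while one with no pending demands would stay in the low-power state.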
An example apparatus includes one or more tangible processor-readable storage media embodied with instructions for executing on one or more processors and circuits of a device a process for supervisory power management of an electronic device, the process including monitoring activity information of a hardware component of the electronic device, setting a timeout period for the hardware component of the electronic device, the timeout period having an initial value based on the activity information, and transitioning at least a portion of the hardware component from a higher-power state to a lower-power state after expiration of the timeout period if the hardware component satisfies a transition condition, the transition condition being dependent at least in part on the activity information and the timeout period. An example apparatus of any preceding apparatus includes wherein the process further includes maintaining the hardware component in the higher-power state after expiration of the timeout period if the hardware component does not satisfy the transition condition during the timeout period. An example apparatus of any preceding apparatus includes wherein the process further includes setting a wakeup period associated with the hardware component of the electronic device based on the activity information, the wakeup period defining a minimum amount of time before a power status of the hardware component may be transitioned from the lower-power state to a different power state, and transitioning the portion of the hardware component from the lower-power state to the different power state after expiration of the wakeup period if the hardware component satisfies a wakeup condition. An example apparatus of any preceding apparatus includes wherein the activity information includes information communicated between the hardware component and one or more applications executing on the electronic device. 
An example apparatus of any preceding apparatus includes wherein the setting operation comprises setting a wakeup timer on the hardware component, the process further comprising receiving a wakeup signal from the hardware component at an end of the wakeup period if the activity information received while the hardware component is in the low-power state satisfies an activity condition. An example apparatus of any preceding apparatus includes wherein the transitioning operation transitions a sub-component of the hardware component to the low-power state.

FIG. 8 illustrates an example processing system 800 enabled to provide a supervisory power management service. The processing system 800 includes processors 802 and memory 804. The memory 804 stores the power management supervisory controller 206, the power management arbitrator 208, one or more hardware sublayers 210, and/or one or more hardware drivers 212. The processing system may include one or more hardware devices, boxes, or racks, and may be hosted in a network data center.

According to various non-limiting examples, the computing systems described herein include one or more devices, such as servers, storage devices, tablet computers, laptops, desktop computers, gaming consoles, media players, mobile phones, handheld computers, wearable devices, smart appliances, networking equipment, kiosk devices, and so forth. In one example configuration, the computing systems comprise at least one processor. The computing systems also contain communication connection(s) that allow communications with various other systems. The computing systems also include one or more input devices, such as a keyboard, mouse, pen, voice input device, touch input device, etc., and one or more output devices, such as a display (including a touch-screen display), speakers, printer, etc., coupled communicatively to the processor(s) and computer-readable media via connections such as a bus. 
The memory 804 is an example of computer-readable media. Computer-readable media stores computer-executable instructions that are loadable and executable by one or more processor(s), as well as data generated during execution of, and/or usable in conjunction with, these programs. In the illustrated example, computer-readable media stores OS instances, which provide basic system functionality to applications. One or more of these components, including the operating systems, may be instantiated as virtual machines, application containers, or as some other type of virtualized instantiation.

Processor(s) 802 may include one or more single-core processing unit(s), multi-core processing unit(s), central processing units (CPUs), graphics processing units (GPUs), general-purpose graphics processing units (GPGPUs), or hardware logic components configured, e.g., via specialized programming from modules or application program interfaces (APIs), to perform functions described herein. In alternative examples, one or more functions of the present disclosure may be performed or executed by, without limitation, hardware logic components including Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), Digital Signal Processing unit(s) (DSPs), and other types of customized processing unit(s). For example, a processing unit configured to perform one or more of the functions described herein may represent a hybrid device that includes a CPU core embedded in an FPGA fabric. These or other hardware logic components may operate independently or, in some instances, may be driven by a CPU. In some examples, the computing systems may include a plurality of processing units of multiple types. For example, the processing units may be a combination of one or more GPGPUs and one or more FPGAs. 
Different processing units may have different execution models, e.g., as is the case for graphics processing units (GPUs) and central processing units (CPUs). Depending on the configuration and type of computing device used, computer-readable media (e.g., memory 804) include volatile memory (such as random access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, 3D XPoint, resistive RAM, etc.). The computer-readable media can also include additional removable storage and/or non-removable storage including, but not limited to, SSD (e.g., flash memory), HDD (Hard Disk Drive) storage or other type of magnetic storage, optical storage, and/or other storage that can provide non-volatile storage of computer-executable instructions, data structures, program modules, and other data for computing systems. Computer-readable media can, for example, represent computer memory, which is a form of computer storage media. Computer-readable media includes at least two types of computer-readable media, namely computer storage media and communications media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any process or technology for storage of information such as computer-executable instructions, data structures, programming modules, or other data. 
Computer storage media includes, but is not limited to, phase change memory (PRAM), resistive RAM, 3D XPoint non-volatile memory, static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store information for access and retrieval by a computing device. In contrast, communication media can embody computer-executable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media.

Various processes described herein are carried out as computing functions in conjunction with networking functions. For example, one computing device or system may cause transmission of a message to another computing device via network communication hardware. This may include, for example, a software module passing a pointer, argument, or other data to a networking module. The pointer, argument, or other data may identify data stored in memory or in a register that is to be transmitted to another computing device. The networking module may include a protocol stack and may read the data identified by the pointer, argument, or other data. The protocol stack may encapsulate the data in one or more frames, packets, cells, or other data networking protocol structures. 
The protocol stack (such as within the network power module or elsewhere) may call a network interface device driver to cause physical transmission of electrical, magnetic, or optical signals along a communication medium to a network element, such as a gateway, router, switch, hub, and so forth. An underlying network may route or switch the data to the destination. The destination computing device may receive the data via a network interface card, which results in an interrupt being presented to a device driver or network adapter. A processor of the destination computing device passes the device driver an execution thread, which causes a protocol stack to de-encapsulate the data in the packets, frames, and cells in which the data was received. The protocol stack causes the received data to be stored in a memory, a register, or other location. The protocol stack may pass a pointer, argument, or other data that identifies where the received data is stored to a destination software module executing on the destination computing device. The software module receives an execution thread along with the argument, pointer, or other data, and reads the data from the identified location.

The processing system 800 may also include a display 806 (e.g., a touchscreen display, an OLED display with photodetectors, etc.) and other interfaces 808 (e.g., a keyboard interface). The memory device 804 generally includes both volatile memory (e.g., RAM) and non-volatile memory (e.g., flash memory). An OS 810, such as one of the varieties of the Microsoft Windows® operating system, resides in the memory device 804 and is executed by at least one of the processor units 802, although it should be understood that other operating systems may be employed. Other features of the processing system 800 may include without limitation an image sensor, a sensing trigger (e.g., a pressure sensor or a proximity sensor), etc. 
One or more applications 812, such as power management software, power management user preference software, hardware power mode control software, etc., are loaded in the memory device 804 and executed on the OS 810 by at least one of the processor units 802. The processing system 800 includes a power supply 816, which is powered by one or more batteries and/or other power sources and which provides power to other components of the processing system 800. The power supply 816 may also be connected to an external power source that overrides or recharges the built-in batteries or other power sources.

The processing system 800 includes one or more communication transceivers 830 to provide network connectivity (e.g., mobile phone network, Wi-Fi®, BlueTooth®, etc.). The processing system 800 also includes various other components, such as a positioning system 820 (e.g., a global positioning satellite transceiver), one or more audio interfaces 834 (e.g., a microphone, an audio amplifier and speaker, and/or an audio jack), one or more antennas 832, and additional storage 828. Other configurations may also be employed.

In an example implementation, a mobile operating system, various applications, modules for power management and device control (e.g., drivers, communication stack layers and/or sublayers), and other modules and services may be embodied by instructions stored in the memory device 804 and/or storage devices 828 and processed by the processing unit 802. Timeout periods, wakeup periods, and other data may be stored in the memory device 804 and/or storage devices 828 as persistent datastores.

The processing system 800 may include a variety of tangible processor-readable storage media and intangible processor-readable communication signals. Tangible processor-readable storage can be embodied by any available media that can be accessed by the processing system 800 and includes both volatile and nonvolatile storage media, removable and non-removable storage media. 
Tangible processor-readable storage media excludes intangible communication signals and includes volatile and nonvolatile, removable and non-removable storage media implemented in any method or technology for storage of information such as processor-readable instructions, data structures, program modules or other data. Tangible processor-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information and which can be accessed by the processing system 800. In contrast to tangible processor-readable storage media, intangible processor-readable communication signals may embody processor-readable instructions, data structures, program modules or other data resident in a modulated data signal, such as a carrier wave or other signal transport mechanism. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, intangible communication signals include signals traveling through wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Some implementations may comprise an article of manufacture. An article of manufacture may comprise a tangible storage medium to store logic. Examples of a storage medium may include one or more types of processor-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. 
Examples of the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, OS software, middleware, firmware, software modules, routines, subroutines, operation segments, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. In one implementation, for example, an article of manufacture may store executable computer program instructions that, when executed by a processor, cause the processor to perform methods and/or operations in accordance with the described implementations. The executable processor program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The executable processor program instructions may be implemented according to a predefined processor language, manner or syntax, for instructing a processor to perform a certain operation segment. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. The implementations described herein are implemented as logical steps in one or more processor systems. The logical operations may be implemented (1) as a sequence of processor-implemented steps executing in one or more processor systems and (2) as interconnected machine or circuit modules within one or more processor systems. The implementation is a matter of choice, dependent on the performance requirements of the processor system being utilized. Accordingly, the logical operations making up the implementations described herein are referred to variously as operations, steps, objects, or modules. 
Furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language. 15441865 microsoft technology licensing, llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:00AM Apr 27th, 2022 09:00AM Technology Software & Computer Services Information Technology
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Dec 6th, 2018 12:00AM https://www.uspto.gov?id=US11315256-20220426 Detecting motion in video using motion vectors Technology is disclosed herein for detecting motion in video using motion vectors. In an implementation, a frame of video is divided into regions and a vector score is identified for each of the regions. A selection is then made of a subset of the regions based on the identified vector scores, i.e. at least some of the regions may be excluded from further analysis based on their score. The selected subset is divided into or grouped in clusters. Motion may then be identified in response to at least one of the clusters appearing in at least one other frame of the video. 11315256 1. A method for detecting motion in video, the method comprising: dividing a frame of video into regions; assigning motion vectors from the video to at least some of the regions; determining a vector score for each one of the regions that comprises a representation of one or more motion vectors assigned to a given region; filtering out one or more of the regions based at least on the vector score determined for each of the regions, resulting in a subset of the regions to divide into clusters; dividing the subset of the regions into the clusters; identifying a compactness score for each of the clusters, wherein the compactness score comprises a value representative of a closeness of each region of a cluster to each other; filtering out one or more of the clusters based on the compactness score for each of the clusters, resulting in a subset of the clusters to track in at least one other frame of the video; and identifying the motion in the video in response to an appearance in at least the one other frame of the video of a cluster similar to at least one of the subset of the clusters. 2. The method of claim 1 wherein the vector score comprises a magnitude representation of the one or more motion vectors in the given region. 3. 
The method of claim 1 further comprising selecting the one or more of the clusters in response to the compactness score satisfying one or more criteria. 4. The method of claim 1 further comprising determining that the cluster is similar to the at least one of the subset of the clusters by comparing one or more of a location of the cluster, a size of the cluster, and a bounding box for the cluster to one or more of a location of the at least one of the subset of the clusters, a size of the at least one of the subset of the clusters, and a bounding box of the at least one of the subset of the clusters. 5. The method of claim 1 wherein the vector score for each of the regions comprises a sum of the one or more motion vectors. 6. The method of claim 5 further comprising calculating the sum of the one or more motion vectors for each of the regions using less than half of the one or more motion vectors in each region. 7. The method of claim 5 further comprising selecting the given region in response to an average motion vector for the given region satisfying one or more criteria. 8. 
A computing apparatus comprising: one or more computer readable storage media; a processing system operatively coupled to the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media for detecting motion in video that, when executed by the processing system, direct the computing apparatus to at least: divide a frame of video into regions; determine a vector score for each of the regions, wherein the vector score comprises a representation of zero or more motion vectors in each of the regions; exclude at least some of the regions from further analysis based on the vector score determined for each of the regions, resulting in a subset of the regions to divide into clusters; divide the subset of the regions into the clusters; identify a compactness score for each of the clusters, wherein the compactness score comprises a value representative of a closeness of each region of a cluster to each other; filter out one or more of the clusters based on the compactness score for each of the clusters, resulting in a subset of the clusters to track in at least one other frame of the video; and identify the motion in the video in response to an appearance in at least the one other frame of the video of a cluster similar to at least one of the subset of the clusters in the frame. 9. The computing apparatus of claim 8 wherein the program instructions further direct the computing apparatus to: determine that the cluster is similar to the at least one of the clusters by comparing one or more of a location of the cluster, a size of the cluster, and a bounding box for the cluster to one or more of a location of the at least one of the subset of the clusters, a size of the at least one of the subset of the clusters, and a bounding box of the at least one of the clusters. 10. 
The computing apparatus of claim 8 wherein the vector score for each of the regions comprises a magnitude representation of one or more motion vectors. 11. The computing apparatus of claim 10 wherein the program instructions further direct the computing apparatus to calculate the magnitude representation for each of the regions using only a fraction of all motion vectors in each region. 12. The computing apparatus of claim 10 wherein to exclude the at least some of the regions based on the vector score determined for each of the regions, the program instructions direct the computing apparatus to exclude a given region in response to the magnitude representation for the given region not satisfying one or more criteria. 13. The computing apparatus of claim 8 wherein the vector score for each of the regions comprises a sum of one or more motion vectors in a given region. 14. The computing apparatus of claim 9 wherein the program instructions direct the computing apparatus to select the one or more of the clusters in response to the compactness score satisfying one or more criteria. 15. 
One or more non-transitory computer readable storage media having program instructions stored thereon that, when executed by a processor, direct a computing apparatus to at least: divide a frame of video into regions; identify a vector score for each of the regions; filter out one or more of the regions based on the vector score identified for each of the regions, resulting in a subset of the regions to divide into clusters; divide the subset of the regions into the clusters; identify a cluster metric for each of the clusters, wherein the cluster metric comprises a value representative of a closeness of each region of a cluster to each other; filter out, based on the cluster metric identified for each of the clusters, one or more of the clusters, resulting in a subset of the clusters to track from the frame to at least one other frame of the video; and identify motion in the video in response to a cluster appearing in the at least one other frame of the video that is similar to at least one of the one or more of the clusters. 16. The one or more non-transitory computer readable storage media of claim 15 wherein the vector score for each of the regions comprises an average of one or more motion vectors in a given region. 17. The one or more non-transitory computer readable storage media of claim 16 wherein the program instructions further direct the computing apparatus to calculate the average of the one or more motion vectors for each of the regions using only a fraction of all motion vectors in each region. 18. The one or more non-transitory computer readable storage media of claim 17 wherein the fraction comprises one of approximately ¼ and approximately ⅛. 19. The one or more non-transitory computer readable storage media of claim 16 wherein the program instructions further direct the computing apparatus to select an average motion vector for the given region. 20. 
The one or more non-transitory computer readable storage media of claim 15 wherein the cluster metric comprises a compactness of each of the clusters. 20 TECHNICAL FIELD Aspects of the disclosure are related to the field of video analysis, and in particular, to detecting motion in video. BACKGROUND The automatic detection of motion is a useful feature of many video analysis systems. Motion detection may be used in the context of video surveillance, for example, to alert on movement in a particular scene. In other examples, motion detection may be used to determine when to record video, in autonomous vehicle applications, and in many other contexts. Frame-based motion detection is a technique that compares the background and foreground of one frame to those of one or more other frames. Differences in the background or foreground may indicate the presence of motion in a scene. Unfortunately, such motion detection is a slow process that typically takes 12-35 ms to perform on a 1080p stream by a single central processing unit (CPU). Thus, the cost to perform such analysis by a single CPU on multiple streams would be prohibitively expensive from a performance standpoint. OVERVIEW Technology is disclosed herein for detecting motion in video using motion vectors. In an implementation, a frame of video is divided into regions and a vector score is identified for each of the regions. A selection is then made of a subset of the regions based on the identified vector scores, i.e. at least some of the regions are excluded from further analysis based on their score. The selected subset is divided into or grouped in clusters. Motion may then be identified in response to at least one of the clusters appearing in at least one other frame of the video. This Overview is provided to introduce a selection of concepts in a simplified form that are further described below in the Technical Disclosure. 
It may be understood that this Overview is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. BRIEF DESCRIPTION OF THE DRAWINGS Many aspects of the disclosure may be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, like reference numerals in the drawings designate corresponding parts throughout the several views. While several embodiments are described in connection with these drawings, the disclosure is not limited to the embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents. FIG. 1 illustrates an operational environment in an implementation. FIG. 2 illustrates a motion detection process in an implementation. FIGS. 3A-3B illustrate region selection in an implementation. FIGS. 4A-4B illustrate cluster selection in an implementation. FIG. 5 illustrates cluster tracking in an implementation. FIGS. 6A-6D illustrate box bounding and tracking in an implementation. FIGS. 7A-7C illustrate box bounding and tracking in an implementation. FIG. 8 illustrates a computing system suitable for implementing the various operational environments, architectures, processes, scenarios, and sequences discussed below with respect to the Figures. DETAILED DESCRIPTION Motion vectors are used herein for the automatic detection of motion in video. In various implementations, motion vectors are produced when video of a scene is captured and encoded. Motion vectors typically describe the displacement of a macroblock from one frame to another and thus may be leveraged to make encoding and decoding more efficient. In an advance, the motion vectors are leveraged to detect when motion is occurring in the scene. 
Detecting motion includes dividing a frame of video into regions and assigning motion vectors to each region. Next, a vector score is calculated for each region. The vector score may be, for example, an average of all of the motion vectors of the frame that fall within a given region, a sum of the vectors, or any other scalar value that may be compared against criteria. In any case, all of the motion vectors in the region or only a subset of the vectors may be used. For instance, as many as ¾ to ⅞ of the motion vectors may be discarded and excluded from the calculation of the vector score. The regions are then filtered based on their vector scores. That is, at least some of the regions are excluded from further analysis based on whether their scores satisfy one or more criteria, such as a threshold value. Such filtering further reduces the amount of data needed to detect motion. The remaining regions are grouped into clusters based on their mean distance from each other using a k-means clustering algorithm. Each cluster is assigned a compactness score that represents how close the regions of the cluster are to each other. The clusters are then filtered to exclude those that fail to satisfy one or more criteria, such as a threshold compactness score, further reducing the amount of data. Finally, the remaining cluster(s) are tracked to determine if they appear in one or more other frames of the video. For instance, one or more clusters of motion vectors may be identified in subsequent frames of the video and compared against the subject frame. If the same (or similar) cluster is present in one or more of the subsequent frames, then motion may be determined to have occurred. The same cluster may be in the same location as the original cluster or it may be displaced from the original location. In some cases, a certain amount of displacement may be further indicative of motion. 
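The filtering steps just described can be sketched as follows. This is a minimal illustration, not the patented implementation: the `Region` type, the use of mean pairwise distance as the compactness measure, and the threshold conventions (keep scores at or above a value; keep clusters at or below a spread) are all assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Region:
    row: int
    col: int
    score: float  # scalar vector score, e.g. the summed motion-vector magnitude

def filter_regions(regions, threshold):
    """Keep only regions whose vector score satisfies the threshold."""
    return [r for r in regions if r.score >= threshold]

def compactness(cluster):
    """Mean pairwise distance between a cluster's regions; lower is more compact."""
    if len(cluster) < 2:
        return 0.0
    dists = [math.dist((a.row, a.col), (b.row, b.col))
             for i, a in enumerate(cluster) for b in cluster[i + 1:]]
    return sum(dists) / len(dists)

def filter_clusters(clusters, max_mean_distance):
    """Drop clusters whose regions are too spread out to likely be one object."""
    return [c for c in clusters if compactness(c) <= max_mean_distance]
```

In this sketch the clustering itself (e.g. k-means over region coordinates) is left to a standard library routine; only the data-reduction filters are shown.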
The amount of overlap of two clusters from one frame to the next may also be indicative of motion (or lack thereof). The motion detection technology disclosed herein has the technical effect of substantially speeding up motion detection. Filtering the regions reduces the amount of data to be considered, as does filtering the number of motion vectors used in the region filtering. Filtering the clusters further reduces the amount of data to consider. In trials, the motion detection algorithm took approximately 3 ms to execute on images from cameras with high digital noise, while exporting the motion vectors to the algorithm added 1-2 ms to the total time. In trials with low noise cameras, this total was reduced to less than 1 ms on average, including motion vector export time. This technique may be up to 10× faster to execute than an equivalent image comparison technique for cameras with low digital noise. FIG. 1 illustrates an operational environment 100 in an implementation to better describe the enhanced motion detection considered herein. Operational environment 100 includes motion detection application 101 (hereinafter application 101) and video capture device 105. Video capture device 105 captures video of a scene, encodes the video, and communicates the encoded video to application 101. Application 101 processes the video to detect motion that may be represented therein, such as the movement of a figure, an object, or the like. Application 101 may be implemented on one or more computing systems or devices, of which computing system 801 in FIG. 8 is representative. Examples of such computers include, but are not limited to, desktop computers, laptop computers, server computers, and other physical or virtual computing devices. In some cases, application 101 may be implemented in the context of an application-specific device, such as a video surveillance camera. 
In fact, application 101 may be implemented in video capture device 105 in some implementations and/or video capture device 105 may be integrated with the computing system on which application 101 is implemented. Application 101 may be implemented as a stand-alone application but may also be integrated in another application. Application 101 may be a native application, a browser-based application, a mobile application, or any other type of software application. Application 101 may be implemented in firmware or hardware in some cases. Application 101 employs a motion detection process 200 to detect motion in video. Motion detection process 200 may be implemented in program instructions in the context of any of the modules, components, or other such programming elements of application 101. The program instructions direct the underlying physical or virtual computing system(s) to operate as described for motion detection process 200, referring parenthetically to the steps in FIG. 2. In operation, application 101 receives the video and divides a given frame into regions. Motion vectors are extracted from the encoded video and assigned to the regions (step 201). In some implementations, motion vectors may be extracted for the p-frames and/or b-frames of the video, although they may not be available for the i-frames (key frames). Assigning the motion vectors may include mapping their pixel coordinates to the regions. The frame may be divided into a 32×32 grid, or any other suitable dimensions, with the cells of the grid encompassing or corresponding to certain pixel ranges in the video frame. As the frame itself was encoded with motion vectors, application 101 is able to identify those of the motion vectors located in each of the regions. Application 101 then identifies a vector score for each region (step 203). 
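The grid assignment of step 201 might be sketched as below; the (x, y, dx, dy) tuple layout for extracted vectors and the clamping of edge pixels into the last cell are assumptions, since the text specifies only that pixel coordinates are mapped to grid cells.

```python
from collections import defaultdict

def assign_vectors_to_regions(vectors, frame_w, frame_h, grid=32):
    """Map each motion vector's pixel origin to a cell of a grid x grid layout.

    vectors: iterable of (x, y, dx, dy) tuples taken from the encoded stream.
    Returns {(row, col): [(dx, dy), ...]} for the cells that contain vectors.
    """
    cell_w = frame_w / grid
    cell_h = frame_h / grid
    regions = defaultdict(list)
    for x, y, dx, dy in vectors:
        # Clamp so pixels on the right/bottom edge fall into the last cell.
        col = min(int(x // cell_w), grid - 1)
        row = min(int(y // cell_h), grid - 1)
        regions[(row, col)].append((dx, dy))
    return regions
```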
For instance, application 101 may calculate the average vector for each region, a sum of the vectors in each region, or some other magnitude that can be compared against a threshold value. The grid size chosen will dictate the granularity of objects that can be tracked, along with the associated cost in tracking them. Objects significantly smaller than the grid size are unlikely to be identified. A high-density grid will increase the runtime cost of the motion detection in proportion to the total number of grid regions, so the grid size should be determined based on the usage and location of the camera. In some implementations, the threshold value may be set programmatically to a static value. In other cases, the threshold value may be dynamic, such that it can change based on an analysis of the performance of the motion detection over time. The threshold value could also change based on time of day, environmental factors, or other dynamics that may influence the performance of the system. Application 101 next identifies a subset of the regions to include in clusters based on the identified vector scores. Non-qualifying regions are filtered out (step 205) and the remaining regions are organized into clusters (step 207). The clustering may be accomplished using a k-means clustering algorithm, for example, or any other suitable algorithm. Optionally, each cluster may be assigned a compactness score that represents the closeness of each region in the cluster to each other (step 209). When such scores are produced, non-qualifying clusters may be filtered out (step 211), further reducing the data set. Lastly, the remaining clusters of motion vectors are tracked across frames (step 213). The presence of the same cluster (or a different cluster positioned similarly) in one or more other frames may be considered indicative of motion in the scene. The cluster may be in the same position or may be displaced in the subsequent frame relative to the first frame. Referring back to FIG. 
1, operational scenario 110 presents an example of motion detection process 200. Operational scenario 110 includes frame 111, which is representative of a frame of video. Frame 111 has been divided into regions, each of which may be referred to by its alphabetical coordinates. The empty regions indicate regions of little to no motion vectors, whereas the pattern-filled regions indicate the presence of motion vectors. The darker pattern indicates a vector score that satisfies a threshold, whereas the lighter pattern indicates a vector score that fails to satisfy the threshold. Next, the regions are filtered based on their relative scores. As such, the regions with the lighter pattern are filtered out, while the regions with the darker pattern remain. The remaining regions are then organized into clusters, represented by cluster 121, cluster 122, and cluster 123. The clusters themselves may also be scored (optional) and filtered out (optional). In this example, cluster 122 is filtered out because its regions are spaced relatively far apart. Cluster 121 and cluster 123 remain, which may be compared against clusters that appear in subsequent video frames 113 to detect the presence of motion. FIG. 3A illustrates the selection of regions in an operational scenario. Step 300A of the operational scenario includes video frame 301 which has been divided into thirty-six regions (6×6). Region 303 is representative of the regions, each of which may be referred to in terms of its coordinates in the grid, where the x-axis is defined in terms of a-f, as is the y-axis. Various motion vectors, of which vector 305 is representative, are included in the video frame 301. The motion vectors are a byproduct of the encoding of the raw video. Application 101 may extract the motion vectors from the encoded video in order to perform the motion detection described herein. 
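One way to realize the region score described above is the magnitude of the vector sum, optionally computed from only a fraction of the vectors for speed. This is a sketch under those assumptions: the fixed-stride subsampling is illustrative, since the text only states that a fraction of the vectors may be used. A side effect of summing is that opposing vectors (common with sensor noise) cancel toward zero.

```python
import math

def vector_score(vectors, fraction=1.0):
    """Magnitude of the summed motion vectors in one region.

    vectors: list of (dx, dy) displacement pairs assigned to the region.
    With fraction < 1 (e.g. 1/4 or 1/8), only every n-th vector is summed,
    trading accuracy for speed.
    """
    step = max(1, round(1 / fraction))
    sample = vectors[::step]
    sx = sum(dx for dx, _ in sample)
    sy = sum(dy for _, dy in sample)
    return math.hypot(sx, sy)
```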
As part of the motion detection, application 101 identifies the motion vectors in each region and scores the regions based on their motion vectors. All of the motion vectors may be used for a given region, although less than all may also be used. For instance, as few as one-quarter or one-eighth of the motion vectors may be used to calculate the score for a region. FIG. 3B illustrates a second step 300B of the operational scenario. At step 300B, the vector scores have been calculated and are represented. It may be assumed for exemplary purposes that the average, the sum, or some other magnitude of the vectors in each region has been calculated. Several of the regions have resulted in a NULL score, since their motion vectors canceled each other out. What remains are eight regions where the scores satisfied a threshold. Moving forward, the data set is reduced to only these regions (a-d, b-d, b-f, d-b, d-c, d-f, e-b, and e-c). FIG. 4A illustrates a first step 400A in the clustering process. Here, three clusters of regions are identified: cluster 307, cluster 308, and cluster 309. Next, a compactness score is calculated for each. Both cluster 307 and cluster 308 are relatively compact. However, cluster 309 has a compactness that falls below an exemplary threshold. Accordingly, cluster 309 is removed from consideration, as illustrated in the second step 400B of the process in FIG. 4B. Here, only two clusters remain to be tracked. FIG. 5 illustrates the tracking process. In a first step 500A, the two clusters from FIG. 4B have been identified. The second step 500B of the process illustrates a subsequent frame 311 in the video. Frame 311 has also been divided into a grid of regions, of which region 313 is representative. In addition, its regions and clusters have been filtered. What remains is a single cluster comprised of the following regions: e-b; e-c; f-b; and f-c. The remaining cluster may be compared against the two clusters in frame 301. 
Indeed, the new cluster is the same as (or similar to) one of the clusters in the original frame. Accordingly, it may be concluded that motion is occurring in the scene. FIGS. 6A-6D illustrate a bounding box operation in an alternative example. In step 600A, a cluster 603 of regions has been identified in frame 601. Frame 601 has been divided into regions defined by a grid. At step 600B in FIG. 6B, a bounding box 605 is calculated for cluster 603 and may be visualized as drawn around it. The bounding box 605 may be tracked across frames, too. For instance, in FIG. 6C, which illustrates frame 606, bounding box 609 has been calculated for cluster 607. The dimensions of bounding box 609 may be compared to those of bounding box 605. Since the sizes of the two boxes are the same, a determination may be made that motion has occurred in the scene captured in the two frames. Alternatively, the bounding box comparison may serve as a confirmatory step subsequent to a conclusion that cluster 607 and cluster 603 are the same. Alternatively, or in addition to the analysis provided with respect to FIG. 6C, the overlap of one bounding box with respect to another may be analyzed to determine if there is motion. FIG. 6D illustrates bounding box 609 and its overlap with bounding box 605. A substantial amount of overlap (e.g. half or more) of one box with respect to another may indicate that motion has occurred. In contrast, FIGS. 7A-7C illustrate a scenario where little overlap has occurred and therefore a determination of no motion may be made. FIG. 7A in step 700A includes frame 701 in which a bounding box 705 is calculated for cluster 703. Step 700B in FIG. 7B shows a subsequent frame 707 in which a new cluster 711 has a bounding box 713 calculated for it. Two aspects may be appreciated: the new cluster 711 is substantially smaller than cluster 703; and bounding box 713 is substantially smaller than bounding box 705. 
Either or both facts may lead to a determination that motion has not occurred. Step 700C in FIG. 7C illustrates the overlap of the two bounding boxes, which may further support the determination of no motion. As can be seen, bounding box 713 overlaps only slightly with bounding box 705. Such little overlap from one frame to the next may indicate that the two clusters were created from artifacts in the video or other dynamics in the scene other than the motion of a single figure, an object, or the like. The amount of overlap that triggers a determination of motion may be set programmatically. In some cases, it may be dynamic such that it may change over time based on an analysis of the motion detection process. The amount of overlap required to trigger a determination of motion may also change based on the environment of a scene, the speed of a scene, or other factors. In scenarios where the bounding box is close or identical in position and size to that of a previous frame, motion may still have occurred, such as a small object rotating in place. Depending on whether this type of motion is useful, this may be accepted or rejected. In cases where a bounding box can be tracked over several frames, the position could be used to identify a path taken, including a direction. Examples might include crossing a property boundary or entering from—but not leaving—a restricted area. A trajectory could also be calculated and used for further calculations to predict object behavior. Object tracking and predictions derived from this data could be considered a first-pass approach, and validation of the subject within the tracked region should be performed to prevent false positives in potentially hazardous environments such as autonomous vehicles. It may be appreciated that the enhanced motion detection disclosed herein may be implemented with stationary cameras as well as pan-tilt-zoom (PTZ) cameras. 
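The bounding-box comparison and path tracking described above might be sketched as follows. Treating grid cells as inclusive box coordinates and measuring overlap as a fraction of the first box's area are assumptions; the text leaves the exact overlap metric open.

```python
def bounding_box(cells):
    """Axis-aligned box (min_row, min_col, max_row, max_col) enclosing a
    cluster of (row, col) grid cells."""
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return (min(rows), min(cols), max(rows), max(cols))

def overlap_fraction(box_a, box_b):
    """Fraction of box_a's cell area also covered by box_b (inclusive bounds)."""
    r0 = max(box_a[0], box_b[0])
    c0 = max(box_a[1], box_b[1])
    r1 = min(box_a[2], box_b[2])
    c1 = min(box_a[3], box_b[3])
    if r1 < r0 or c1 < c0:
        return 0.0  # disjoint boxes
    inter = (r1 - r0 + 1) * (c1 - c0 + 1)
    area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)
    return inter / area_a

def trajectory(boxes):
    """Per-frame displacement of a tracked box's center, from which a path
    or direction across frames could be derived."""
    centers = [((r0 + r1) / 2, (c0 + c1) / 2) for r0, c0, r1, c1 in boxes]
    return [(rb - ra, cb - ca)
            for (ra, ca), (rb, cb) in zip(centers, centers[1:])]
```

A caller might declare motion when `overlap_fraction` meets the (static or dynamic) threshold discussed above, e.g. half or more.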
However, the technique could also be applied to free-moving cameras, such as on an autonomous vehicle, by subtracting the average motion vector from the whole image. As mentioned, the technique may also be implemented with integrated cameras, such as those in laptops, televisions, or the like. FIG. 8 illustrates computing system 801 that is representative of any system or collection of systems in which the various processes, programs, services, and scenarios disclosed herein may be implemented. Examples of computing system 801 include, but are not limited to, server computers, cloud computing platforms, and data center equipment, as well as any other type of physical or virtual server machine, container, and any variation or combination thereof. Other examples include desktop computers, laptop computers, tablet computers, Internet of Things (IoT) devices, wearable devices, and any other physical or virtual combination or variation thereof. Computing system 801 may be implemented as a single apparatus, system, or device or may be implemented in a distributed manner as multiple apparatuses, systems, or devices. Computing system 801 includes, but is not limited to, processing system 802, storage system 803, software 805, communication interface system 807, and user interface system 809 (optional). Processing system 802 is operatively coupled with storage system 803, communication interface system 807, and user interface system 809. Processing system 802 loads and executes software 805 from storage system 803. Software 805 includes and implements motion detection process 806, which is representative of the motion detection processes discussed with respect to the preceding Figures. When executed by processing system 802 to provide motion detection, software 805 directs processing system 802 to operate as described herein for at least the various processes, operational scenarios, and sequences discussed in the foregoing implementations. 
Computing system 801 may optionally include additional devices, features, or functionality not discussed for purposes of brevity. Referring still to FIG. 8, processing system 802 may comprise a micro-processor and other circuitry that retrieves and executes software 805 from storage system 803. Processing system 802 may be implemented within a single processing device but may also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions. Examples of processing system 802 include general purpose central processing units, graphical processing units, application specific processors, and logic devices, as well as any other type of processing device, combinations, or variations thereof. Storage system 803 may comprise any computer readable storage media readable by processing system 802 and capable of storing software 805. Storage system 803 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of storage media include random access memory, read only memory, magnetic disks, optical disks, flash memory, virtual memory and non-virtual memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other suitable storage media. In no case is the computer readable storage media a propagated signal. In addition to computer readable storage media, in some implementations storage system 803 may also include computer readable communication media over which at least some of software 805 may be communicated internally or externally. Storage system 803 may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. 
Storage system 803 may comprise additional elements, such as a controller, capable of communicating with processing system 802 or possibly other systems. Software 805 (including motion detection process 806) may be implemented in program instructions and among other functions may, when executed by processing system 802, direct processing system 802 to operate as described with respect to the various operational scenarios, sequences, and processes illustrated herein. For example, software 805 may include program instructions for implementing a motion detection process to detect motion in video as described herein. In particular, the program instructions may include various components or modules that cooperate or otherwise interact to carry out the various processes and operational scenarios described herein. The various components or modules may be embodied in compiled or interpreted instructions, or in some other variation or combination of instructions. The various components or modules may be executed in a synchronous or asynchronous manner, serially or in parallel, in a single threaded environment or multi-threaded, or in accordance with any other suitable execution paradigm, variation, or combination thereof. Software 805 may include additional processes, programs, or components, such as operating system software, virtualization software, or other application software. Software 805 may also comprise firmware or some other form of machine-readable processing instructions executable by processing system 802. In general, software 805 may, when loaded into processing system 802 and executed, transform a suitable apparatus, system, or device (of which computing system 801 is representative) overall from a general-purpose computing system into a special-purpose computing system customized to provide motion detection. Indeed, encoding software 805 on storage system 803 may transform the physical structure of storage system 803. 
The specific transformation of the physical structure may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the storage media of storage system 803 and whether the computer-storage media are characterized as primary or secondary storage, as well as other factors. For example, if the computer readable storage media are implemented as semiconductor-based memory, software 805 may transform the physical state of the semiconductor memory when the program instructions are encoded therein, such as by transforming the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. A similar transformation may occur with respect to magnetic or optical media. Other transformations of physical media are possible without departing from the scope of the present description, with the foregoing examples provided only to facilitate the present discussion. Communication interface system 807 may include communication connections and devices that allow for communication with other computing systems (not shown) over communication networks (not shown). Examples of connections and devices that together allow for inter-system communication may include network interface cards, antennas, power amplifiers, RF circuitry, transceivers, and other communication circuitry. The connections and devices may communicate over communication media to exchange communications with other computing systems or networks of systems, such as metal, glass, air, or any other suitable communication media. The aforementioned media, connections, and devices are well known and need not be discussed at length here. Communication between computing system 801 and other computing systems (not shown), may occur over a communication network or networks and in accordance with various communication protocols, combinations of protocols, or variations thereof. 
Examples include intranets, internets, the Internet, local area networks, wide area networks, wireless networks, wired networks, virtual networks, software defined networks, data center buses and backplanes, or any other type of network, combination of network, or variation thereof. The aforementioned communication networks and protocols are well known and need not be discussed at length here. As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. The included descriptions and figures depict specific embodiments to teach those skilled in the art how to make and use the best mode. For the purpose of teaching inventive principles, some conventional aspects have been simplified or omitted. Those skilled in the art will appreciate variations from these embodiments that fall within the scope of the disclosure. Those skilled in the art will also appreciate that the features described above may be combined in various ways to form multiple embodiments. As a result, the invention is not limited to the specific embodiments described above, but only by the claims and their equivalents. 16212501 microsoft technology licensing, llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:00AM Apr 27th, 2022 09:00AM Technology Software & Computer Services Information Technology
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Jan 27th, 2017 12:00AM https://www.uspto.gov?id=US11314485-20220426 Lazy generation of templates Methods, systems, apparatuses, and computer program products are described herein that generate and assist in managing templates (pre-generated user-customizable automated workflows) that can be used to easily and efficiently develop automated workflows in an automated workflow development system. A plurality of workflow steps in a library of workflow steps is determined. One or more workflow templates are automatically generated. Each automatically generated workflow template includes a combination of at least two of the workflow steps in the library. The one or more workflow templates are stored in a library of templates. Furthermore, one or more workflow steps compatible with a first workflow step may be determined. The determined one or more workflow steps may be displayed in association with the first workflow step for selection. 11314485 1. A method in a computing device, comprising: determining a plurality of workflow steps in a library of workflow steps, each workflow step in the library configured with corresponding logic that operates on parameters and having an interface for at least one input parameter or output parameter; automatically generating one or more workflow templates, each automatically generated workflow template including a combination of at least two of the workflow steps in the library, and is selectable for inclusion in workflows; and storing the one or more workflow templates in a library of templates; wherein said automatically generating one or more workflow templates comprises: automatically generating text describing operations performed by a generated workflow template of the one or more workflow templates; and iterating through combinations of trigger steps in the workflow step library with action steps in the workflow step library to generate a plurality of workflow templates, including 
selecting a trigger step of the workflow steps in the workflow step library, selecting at least one action step of the workflow steps in the workflow step library, and automatically combining the selected trigger step and the selected at least one action step to generate a workflow template. 2. The method of claim 1, further comprising: enabling an administrator to curate the template library to eliminate one or more workflow templates from the template library. 3. The method of claim 1, wherein said automatically generating further comprises: analyzing statistics regarding workflows created by developers to determine a workflow created by the developers at a frequency greater than a predetermined threshold, and generating a workflow template corresponding to the determined workflow; and wherein said storing comprises: storing in the template library the workflow template corresponding to the determined workflow. 4. The method of claim 1, further comprising: displaying in a graphical user interface a template gallery including indications of the one or more workflow templates in the template library; and enabling developers to interact with the graphical user interface to select workflow templates from the template library for including in workflows. 5. The method of claim 1, wherein the generated text is displayed on an icon representing the generated workflow template of the one or more workflow templates. 6. The method of claim 1, wherein said determining a plurality of workflow steps in a library of workflow steps comprises: filtering out at least one particular type of workflow step of the library of workflow steps from inclusion in the determined plurality of workflow steps. 7. 
A system, comprising: one or more processors; and a memory that stores computer program logic for execution by the one or more processors, the computer program logic including: template generation logic configured to automatically generate one or more workflow templates, each automatically generated workflow template including a combination of at least two of workflow steps of a workflow step library, and to store the one or more workflow templates in a template library, wherein the template generation logic comprises: a workflow step combiner configured to select a trigger step of the workflow steps in the workflow step library, select at least one action step of the workflow steps in the workflow step library, and automatically combine the selected trigger step and the selected at least one action step to generate a workflow template, wherein the workflow step combiner is further configured to iterate through combinations of trigger steps in the workflow step library with action steps in the workflow step library to generate a plurality of workflow templates. 8. The system of claim 7, wherein the template generation logic further comprises: a description generator configured to automatically generate text describing operations performed by each automatically generated workflow template. 9. The system of claim 8, wherein the generated text is displayed on an icon representing the corresponding generated workflow template. 10. The system of claim 8, wherein the workflow step combiner is configured to filter out at least one particular type of workflow step of the workflow step library from the automatic generating of the one or more workflow templates. 11. The system of claim 7, wherein a user interface enables an administrator to curate the template library to eliminate one or more workflow templates from the template library. 12. 
The system of claim 7, wherein the template generation logic comprises: a usage analyzer configured to analyze statistics regarding workflows created by developers to determine a workflow created by the developers at a frequency greater than a predetermined threshold; and the workflow step combiner is further configured to generate a workflow template corresponding to the determined workflow, and to store in the template library the workflow template corresponding to the determined workflow. 13. The system of claim 7, the computer program logic further comprising: a template gallery generator configured to display in a graphical user interface a template gallery including indications of the one or more workflow templates in the template library, and to enable developers to interact with the graphical user interface to select workflow templates from the template library for including in workflows. 14. The system of claim 7, wherein the description generator is configured to automatically generate a name for each automatically generated workflow template. 15. 
A computing device, comprising: at least one processor circuit; and a memory that stores program code configured to be executed by the at least one processor circuit to perform operations, the operations including: determining a plurality of workflow steps in a library of workflow steps, each workflow step in the library configured with corresponding logic that operates on parameters and having an interface for at least one input parameter or output parameter; automatically generating one or more workflow templates, each automatically generated workflow template including a combination of at least two of the workflow steps in the library, and is selectable for inclusion in workflows; and storing the one or more workflow templates in a library of templates; wherein said automatically generating comprises: iterating through combinations of trigger steps in the workflow step library with action steps in the workflow step library to generate a plurality of workflow templates, including selecting a trigger step of the workflow steps in the workflow step library, selecting at least one action step of the workflow steps in the workflow step library, and automatically combining the selected trigger step and the selected at least one action step to generate a workflow template; analyzing statistics regarding workflows created by developers to determine a workflow created by the developers at a frequency greater than a predetermined threshold, and generating a workflow template corresponding to the determined workflow; and wherein said storing comprises: storing in the template library the workflow template corresponding to the determined workflow. 16. The computing device of claim 15, wherein said automatically generating further comprises: automatically generating text describing operations performed by the generated workflow template corresponding to the determined workflow. 17. 
The computing device of claim 16, wherein the generated text is displayed on an icon representing the generated workflow template corresponding to the determined workflow. 18. The computing device of claim 16, wherein said determining a plurality of workflow steps in a library of workflow steps comprises: filtering out at least one particular type of workflow step of the library of workflow steps from inclusion in the determined plurality of workflow steps. 19. The computing device of claim 15, wherein the operations further comprise: enabling an administrator to curate the template library to eliminate one or more workflow templates from the template library. 20. The computing device of claim 15, wherein the operations further comprise: displaying in a graphical user interface a template gallery including indications of the one or more workflow templates in the template library; and enabling developers to interact with the graphical user interface to select workflow templates from the template library for including in workflows. 20 CROSS REFERENCE TO RELATED APPLICATIONS This application claims the benefit of U.S. Provisional Application No. 62/329,016, filed on Apr. 28, 2016, titled “Template Generation and Management for Automated Workflow Development Application,” and of U.S. Provisional Application No. 62/328,913, filed on Apr. 28, 2016, titled “Simplified Access to and Sign-Up for Automated Workflow Development System,” which are both incorporated by reference herein in their entireties. BACKGROUND A business or enterprise application is a computer program used by business users to perform various business functions. Business applications are frequently developed when available off-the-shelf software does not completely address the desired functionality. Many business applications are interactive, having a graphical user interface (GUI) via which users can input data, submit data queries, perform operations, and view results. 
Consumer applications are less business-focused, instead being focused on the needs of the consumer. Business and consumer users tend to depend on information technology (IT) personnel to code their applications due to application complexity, and the programming expertise required. Merely designing an application to pull data from a remote source (e.g., a cloud service) is difficult, typically requiring an experienced software developer. SUMMARY This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Methods, systems, apparatuses, and computer program products are described herein that generate and assist in managing templates (pre-generated user-customizable automated workflows) that can be used to easily and efficiently develop automated workflows in an automated workflow development system. In particular, methods, systems, apparatuses, and computer program products are described herein for automatically generating templates (pre-generated user-customizable automated workflows) that can be presented to a user of an automated workflow development application and that can be used thereby to easily and efficiently develop automated workflows. Methods, systems, apparatuses, and computer program products are also described herein for automatically generating human-readable names for and descriptions of automated workflows (including templates) generated using an automated workflow development application. Methods, systems, apparatuses, and computer program products are further described herein that enable compatible workflow steps to be automatically suggested based on a selected workflow step. 
The selected workflow step and one or more of the compatible workflow steps may be combined to form a workflow template. Methods, systems, apparatuses, and computer program products are further described herein that automatically anonymize at least a portion of the parameters included in a workflow template to generate an anonymized automated workflow template. The anonymized automated workflow template may then be published. Further features and advantages of the invention, as well as the structure and operation of various embodiments of the invention, are described in detail below with reference to the accompanying drawings. It is noted that the invention is not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES The accompanying drawings, which are incorporated herein and form part of the specification, illustrate embodiments of the application and, together with the description, further serve to explain the principles of the embodiments and to enable a person skilled in the relevant art(s) to make and use the embodiments. FIG. 1 shows a workflow development system, according to an example embodiment. FIG. 2 shows a flowchart providing a process for development of workflows, according to an example embodiment. FIG. 3 shows a block diagram of a workflow designer application, according to an example embodiment. FIG. 4 shows a block diagram of a display screen showing a browser window displaying an exemplary workflow, according to an example embodiment. FIGS. 5-8 show views of an exemplary workflow in various phases of development using a development GUI, according to example embodiments. FIG. 9 shows a block diagram of a system for operating a workflow, according to an example embodiment. FIG. 
10 shows a flowchart providing a process for executing a user application that includes one or more workflows, according to an example embodiment. FIG. 11 depicts an example interactive display screen of an automated workflow development system via which one or more manually-generated or automatically-generated templates may be presented to a user in accordance with an embodiment. FIG. 12 shows a block diagram of a workflow designer configured to automatically generate workflow templates, according to an example embodiment. FIG. 13 shows a flowchart providing a process for automatically generating workflow templates, according to an example embodiment. FIG. 14 shows a flowchart providing a process for combining a trigger step with one or more action steps to generate a workflow template, according to an example embodiment. FIG. 15 shows a process for automatically generating all combinations of workflow templates based on a set of workflow steps, according to an example embodiment. FIG. 16 shows a process for reducing a number of automatically generated workflow templates, according to an example embodiment. FIG. 17 shows a flowchart providing a process for automatically generating workflow templates based on usage statistics, according to an example embodiment. FIG. 18 shows a block diagram of a workflow designer containing template generation logic configured to determine compatible workflow steps for use in a workflow template, according to an example embodiment. FIG. 19 shows a flowchart providing a process for automatically determining compatible workflow steps for use in a workflow template, according to an example embodiment. FIG. 20 shows a view of a graphical user interface for developing workflows where a set of workflow steps compatible with a first workflow step is displayed, according to an example embodiment. FIG. 
21 shows a flowchart providing a process for determining trigger steps and action steps that are compatible with each other, according to an example embodiment. FIG. 22 shows a flowchart providing a process for selecting a compatible workflow step for inclusion in a workflow template, according to an example embodiment. FIG. 23 depicts an example interactive display screen of an automated workflow development application that enables a user to publish a completed workflow application as a template, according to an example embodiment. FIG. 24 is a block diagram of a workflow designer configured to selectively anonymize templates for public and private sharing in accordance with an embodiment. FIG. 25 shows a flowchart providing a process for anonymizing parameters in a workflow template, according to an example embodiment. FIG. 26 shows a view of a graphical user interface for developing workflows where a workflow step includes parameters to be anonymized, according to an example embodiment. FIG. 27 is a block diagram of a selective anonymizer configured to selectively anonymize workflow templates in accordance with an embodiment. FIG. 28 shows a process for enabling a developer to select parameters for anonymization, in accordance with an embodiment. FIG. 29 shows a process for partially anonymizing parameters, in accordance with an embodiment. FIG. 30 is a block diagram of an example mobile device that may be used to implement various embodiments. FIG. 31 is a block diagram of an example processor-based computer system that may be used to implement various embodiments. The features and advantages of the embodiments described herein will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. 
The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number. DETAILED DESCRIPTION I. Introduction The following detailed description discloses numerous example embodiments. The scope of the present patent application is not limited to the disclosed embodiments, but also encompasses combinations of the disclosed embodiments, as well as modifications to the disclosed embodiments. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” or the like, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of persons skilled in the relevant art(s) to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. In the discussion, unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended. Numerous exemplary embodiments are described as follows. It is noted that any section/subsection headings provided herein are not intended to be limiting. Embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. 
Furthermore, embodiments disclosed in any section/subsection may be combined with any other embodiments described in the same section/subsection and/or a different section/subsection in any manner. II. Example Automated Workflow Development System Business applications and consumer applications typically are created when available off-the-shelf software does not completely address desired functionality. Many business and consumer applications are interactive, having a graphical user interface (GUI) via which users can input data, submit data queries, perform operations, and view results. Users tend to depend on information technology (IT) personnel to code their applications due to application complexity and the programming expertise required. For instance, configuring an application to pull data from a source of interest to enterprises or consumers (e.g., data from an SQL (structured query language) database, customer relationship information from Salesforce.com of San Francisco, Calif., social network information from Facebook® operated by Facebook, Inc. of Palo Alto, Calif., or Twitter® operated by Twitter, Inc. of San Francisco, Calif.) is a difficult process. Embodiments enable easier development of user applications, including business applications and consumer applications. Developers are enabled to develop user applications in the form of workflows without having to be expert programmers. Example embodiments are described in the following sections for development of user application workflows. In the following description, a person that develops a user application using the techniques described herein is referred to as a “developer,” to be distinguished from a person that uses the user application at runtime (a “user” or “end user”). It is noted, however, that a “developer,” as referred to herein, does not need to have expertise in computer programming. 
The embodiments described herein enable application development without special programming skills. A. Example Workflow Development Embodiments Development of workflows may be enabled in various ways in embodiments. For instance, FIG. 1 shows a workflow development system 100, according to an example embodiment. As shown in FIG. 1, system 100 includes a computing device 102, storage 104, a first network-based application 124A, a second network-based application 124B, and a server 134. Server 134 includes a workflow designer 106 and a workflow library 118 (e.g., in storage). Workflow designer 106 includes a UI generator 110 and a workflow logic generator 112. Computing device 102 includes a display screen 108 and a browser 136. Storage 104 stores a local application 122. System 100 is described as follows. Computing device 102 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., a Microsoft® Surface® device, a laptop computer, a notebook computer, a tablet computer such as an Apple iPad™, a netbook, etc.), a wearable computing device (e.g., a head-mounted device including smart glasses such as Google® Glass™, etc.), or a stationary computing device such as a desktop computer or PC (personal computer). Server 134 may include one or more server devices and/or other computing devices. Local application 122 in storage 104 is an example of an application accessible by computing device 102 without communicating over a network. Local application 122 may be configured to perform data processing and/or data hosting operations when executed by a processor of computing device 102, and may provide data 132 to workflows created by workflow designer 106 during runtime of those workflows. 
Local application 122 may be any type of local application/service, such as a database application (e.g., QuickBooks®, a Microsoft® Excel® spreadsheet), a messaging application (e.g., Microsoft® Outlook®), a productivity application (e.g., Microsoft® Word®, Microsoft® PowerPoint®, etc.), or another type of application. Although FIG. 1 shows a single local application, any number of local applications may be present at computing device 102, including numbers in the tens, hundreds, or greater numbers. First and second network-based applications 124A and 124B are examples of network-based applications, also referred to as “cloud” applications or services. Network-based applications 124A and 124B are accessible by computing device 102 over network 126, may be configured to perform data processing and/or data hosting operations, and may provide data 130A and 130B, respectively, to workflows created by workflow designer 106 during runtime of those workflows. Network-based applications 124A and 124B may each be any type of web-accessible application/service, such as database applications, social networking applications, messaging applications, financial services applications, news applications, search applications, web-accessible productivity applications, cloud storage and/or file hosting applications, etc. Examples of such applications include a web-accessible SQL (structured query language) database, Salesforce.com™, Facebook®, Twitter®, Instagram®, Yammer®, LinkedIn®, Yahoo! ® Finance, The New York Times® (at www.nytimes.com), Google search, Microsoft® Bing, Google Docs™, Microsoft® Office 365, Dropbox™, etc. Although FIG. 1 shows two network-based applications, any number of network-based applications may be accessible over network 126, including numbers in the tens, hundreds, thousands, or greater numbers. 
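The distinction drawn above, local applications reachable without a network versus network-based applications, both of which supply data to workflows at runtime, can be sketched as a minimal connector interface. This is an illustrative sketch in Python, not the patent's implementation; all names (`WorkflowDataSource`, `fetch`, the record layout) are hypothetical:

```python
from abc import ABC, abstractmethod

class WorkflowDataSource(ABC):
    """Hypothetical connector: any application (local or network-based)
    that can provide data to a running workflow."""

    @abstractmethod
    def fetch(self, query: str) -> dict:
        """Return data requested by a workflow step."""

class LocalApplication(WorkflowDataSource):
    """Accessible without communicating over a network (cf. local application 122)."""
    def __init__(self, records: dict):
        self._records = records

    def fetch(self, query: str) -> dict:
        return {"source": "local", "data": self._records.get(query, {})}

class NetworkBasedApplication(WorkflowDataSource):
    """A 'cloud' application reached over a network (cf. applications 124A/124B)."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def fetch(self, query: str) -> dict:
        # A real implementation would issue a request to self.endpoint;
        # here we return a stub payload for illustration.
        return {"source": self.endpoint, "data": {"query": query}}
```

At runtime, a workflow step would call `fetch()` on whichever source it was configured against, without caring whether the data crosses a network.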
Note that data 128, data 130A, data 130B, and data 132 may each include any type of data, including messages, notifications, calculated data, retrieved data, and/or any other type of information requested or usable by a workflow. Computing device 102 and server 134 may each include at least one network interface that enables communications with each other and with network-based applications 124A and 124B over network 126. Examples of such a network interface, wired or wireless, include an IEEE 802.11 wireless LAN (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (Wi-MAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth™ interface, a near field communication (NFC) interface, etc. Further examples of network interfaces are described elsewhere herein. Examples of network 126 include a local area network (LAN), a wide area network (WAN), a personal area network (PAN), and/or a combination of communication networks, such as the Internet. Workflow designer 106 (also referred to as an automated workflow development system) is configured to be operated/interacted with to create workflows. For instance, a developer may access workflow designer 106 by interacting with an application at computing device 102 capable of accessing a network-based application, such as browser 136. The developer may use browser 136 to traverse a network address (e.g., a uniform resource locator) to workflow designer 106, which invokes a workflow designer GUI 116 (e.g., a web page) in a browser window 114. The developer is enabled to interact with workflow designer GUI 116 to develop a workflow. As shown in FIG. 1, workflow designer 106 includes UI generator 110 and workflow logic generator 112. UI generator 110 is configured to transmit workflow GUI information 140 (e.g., one or more web pages, image content, etc.) 
to browser 136 to be displayed as workflow designer GUI 116 in display screen 108 in browser window 114. Workflow designer GUI 116 may be interacted with by the developer to select and configure workflow steps into a workflow. For example, the developer may insert and sequence a plurality of workflow steps in workflow designer GUI 116, with one or more of the steps being associated with a local or network-based application. Browser 136 stores the selected workflow steps, corresponding configuration information, and workflow step sequence information as constructed workflow information 138. Constructed workflow information 138 is transmitted to workflow logic generator 112 at server 134. Workflow logic generator 112 generates workflow logic 120 based on the assembled workflow represented by constructed workflow information 138. The workflow represented by workflow logic 120 may subsequently be invoked at runtime by an end user. During runtime of the workflow, workflow logic 120 may invoke operation of one or more local or network-based applications associated with the workflow steps of workflow logic 120. Each workflow step may receive input data 128 from workflow designer GUI 116, data 132 from local application 122, data 130A or data 130B from one or both of network-based applications 124A and 124B, and/or data from another workflow step of workflow logic 120. Workflow designer 106 may operate in various ways to enable development of a workflow. For instance, in embodiments, workflow designer 106 may operate according to FIG. 2. FIG. 2 shows a flowchart 200 providing a process for development of workflows, according to an example embodiment. Flowchart 200 and workflow designer 106 are described as follows with respect to FIGS. 3 and 4. FIG. 3 shows a block diagram of workflow designer 106, according to an example embodiment. As shown in FIG. 3, workflow designer 106 includes UI generator 110 and workflow logic generator 112. 
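Constructed workflow information 138, as described above, amounts to the selected steps, their configured parameter values, and their sequencing. A minimal data-model sketch in Python; the class and field names (and the step identifiers) are assumptions for illustration, not the patent's format:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    """One selected step: its identity plus its configured input parameters."""
    step_id: str                        # hypothetical id, e.g. "outlook.send_email"
    parameters: dict = field(default_factory=dict)

@dataclass
class ConstructedWorkflowInfo:
    """Selected steps, configuration, and sequencing (cf. constructed
    workflow information 138); list order encodes the step sequence."""
    name: str
    steps: list = field(default_factory=list)

    def add_step(self, step: WorkflowStep) -> None:
        self.steps.append(step)

# A two-step workflow as the browser might record it before transmission.
info = ConstructedWorkflowInfo(name="notify-on-new-row")
info.add_step(WorkflowStep("sql.row_inserted"))
info.add_step(WorkflowStep("outlook.send_email", {"to": "team@example.com"}))
```

The server-side workflow logic generator would consume such a structure to look up each step's logic and assemble executable workflow logic 120.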
UI generator 110 includes a workflow step gallery generator 302, a template gallery generator 304, a saved workflow selector 306, a step selector 308, and a step configuration UI generator 310. Workflow logic generator 112 includes a workflow definition generator 312 and an interface definition generator 314. FIG. 4 shows a block diagram of display screen 108, illustrating an example of workflow designer GUI 116 displayed in browser window 402 on display screen 108, according to an example embodiment. Flowchart 200 of FIG. 2 begins with step 202. In step 202, development of a workflow is initiated. For example, in an embodiment, workflow designer 106 may be invoked by a developer interacting with browser 136 at computing device 102. The developer may traverse a link or other network address directed to workflow designer 106 at server 134, to invoke workflow designer 106, causing workflow designer 106 to provide workflow GUI information 140 (e.g., one or more web pages, image content, etc.) to browser 136 to be displayed as workflow designer GUI 116 in display screen 108 in browser window 114. Once invoked, the developer may open an existing workflow for further development, or may begin a new workflow. For instance, a displayed page of workflow designer GUI 116 may display a gallery of workflow steps generated by workflow step gallery generator 302. The workflow step gallery includes a plurality of selectable workflow steps. The workflow steps may be stored in workflow library 118, and accessed for display by workflow designer GUI 116. The developer may select one of the workflow steps for inclusion in their workflow, and may proceed with configuring the contents of the workflow step, and/or may add additional workflow steps to continue generating their workflow. For example, as shown in FIG. 4, workflow step gallery generator 302 may enable steps 406A, 406B, and 406C to be selected for insertion into a workflow 404 being assembled in workflow designer GUI 116. 
Any number of workflow steps may be inserted. In another example, a displayed page of workflow designer GUI 116 may display a template gallery generated by template gallery generator 304. The template gallery includes a plurality of selectable workflow templates, which each include one or more workflow steps pre-connected for operation. The workflow templates may be stored in workflow library 118, and accessed for display by workflow designer GUI 116. The developer may select one of the workflow templates for inclusion in their workflow, and may proceed with configuring the contents of the workflow template, and/or may add additional workflow steps to the workflow steps of the workflow template to generate a more complex workflow. For instance, in the example of FIG. 4, steps 406A and 406B may have been included in a workflow template placed in workflow 404, and step 406C may have been subsequently added (e.g., from a workflow step gallery). In another example, saved workflow selector 306 may enable the developer to select an existing, saved workflow to be opened for further editing in a displayed page of workflow designer GUI 116. The saved workflows may be stored in workflow library 118 or elsewhere. For example, saved workflow selector 306 may display a list of saved workflows, may enable navigation to a saved workflow, and/or may provide another mechanism for selecting a saved workflow for editing. The developer may then proceed with further configuring the contents of the workflow, and/or may add additional workflow steps to the workflow steps of the workflow to generate a more complex workflow. In step 204, selection of one or more steps for inclusion in the workflow is enabled. When a developer is editing a workflow, step selector 308 may enable the developer to select further workflow steps for inclusion in the workflow, and to order the steps. The workflow steps may be accessed by step selector 308 in workflow library 118. 
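The automatic template generation that populates such a template gallery, iterating through combinations of trigger steps and action steps from the step library as recited in the claims, can be sketched as follows. The step records, labels, and description format here are hypothetical illustrations, not the patent's data model:

```python
from itertools import product

def generate_templates(trigger_steps, action_steps, excluded_types=()):
    """Combine each trigger step with each action step into a two-step
    template, filtering out excluded step types and auto-generating a
    human-readable description for each template."""
    triggers = [t for t in trigger_steps if t["type"] not in excluded_types]
    actions = [a for a in action_steps if a["type"] not in excluded_types]
    return [
        {
            "steps": [trigger["id"], action["id"]],
            "description": f"When {trigger['label']}, {action['label']}",
        }
        for trigger, action in product(triggers, actions)
    ]

# Hypothetical library entries.
triggers = [{"id": "sql.row_inserted", "type": "trigger",
             "label": "a row is inserted in SQL"}]
actions = [{"id": "outlook.send_email", "type": "action",
            "label": "send an email with Outlook"},
           {"id": "slack.post_message", "type": "action",
            "label": "post a message to a channel"}]
templates = generate_templates(triggers, actions)  # 1 trigger x 2 actions
```

The generated description doubles as display text for a template's icon in the gallery; a curation pass (or a usage-statistics filter) could then prune the combinatorial output down to useful templates.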
For instance, step selector 308 may display a pull-down menu of workflow steps, a scrollable and/or searchable list of available workflow steps, or may provide the workflow steps in another manner, and may enable the developer to select any number of workflow steps from the list for inclusion in the workflow. In one example, step selector 308 may enable a developer to select a step that is associated with a local application, such as Microsoft® Outlook®, or a network-based application, such as Facebook®. Step selector 308 enables the steps to be chained together in a sequence, optionally with conditional steps, for inclusion in workflow logic 120. In step 206, each of the selected steps in the workflow is enabled to be configured. In an embodiment, step configuration UI generator 310 enables configuration of each workflow step in a workflow. Step configuration UI generator 310 accesses each selected workflow step in workflow library 118 to determine the configuration of the workflow step, including all of its input parameters and any other selections or information that a user or developer needs to provide to the workflow step to configure it. For example, step configuration UI generator 310 may generate a UI that enables the developer to type, navigate to, use a pull-down menu, or otherwise enter input data into a text input box or other data input element (e.g., input parameter) of a workflow step. The developer may configure an output of a prior step to be input data for a workflow step. Step configuration UI generator 310 may enable data or other objects to be copied and pasted, dragged and dropped, or otherwise entered or copied from elsewhere into data input boxes of a workflow step. In step 208, workflow logic to implement the workflow is generated. 
In an embodiment, workflow logic generator 112 is configured to package and generate workflow logic 120 based on constructed workflow information 138 when the developer indicates the workflow is finished, such as when the developer interacts with workflow designer GUI 116 to save the workflow. As shown in FIG. 3, workflow logic generator 112 receives constructed workflow information 138. Constructed workflow information 138 indicates which workflow steps have been inserted into the workflow, their input parameter values, and their sequencing. Workflow logic generator 112 also receives selected workflow logic 320, which is the workflow logic for each workflow step of the workflow as indicated in constructed workflow information 138. In one example, workflow logic generator 112 retrieves workflow logic from workflow library 118 for each workflow step indicated in constructed workflow information 138, to receive selected workflow logic 320. Workflow logic generator 112 generates workflow logic 120 for the workflow based on constructed workflow information 138 and selected workflow logic 320. For example, workflow logic generator 112 may generate workflow logic 120 in the form of an executable file, a zip file, or other form, which may be executed in a standalone fashion, may be executed in a browser, or may be executed in another manner, depending on the particular type of workflow being generated. With reference to FIG. 3, workflow logic generator 112 may generate workflow logic 120 to include at least two components (e.g., files): workflow definition information 316 and interface definition information 318. Workflow definition information 316 includes information that defines the sequence and operation of the workflow of workflow logic (e.g., lists the workflow step operations and their ordering/sequencing) and includes the parameter values for the workflow. 
For example, workflow definition information 316 may be generated to contain information in the format of a JSON (JavaScript object notation) file or in another form. Interface definition information 318 includes information that defines the interfaces/parameters (e.g., inputs and outputs) of the workflow steps of the workflow. For example, interface definition information 318 may be generated to contain information in the format of a Swagger (a specification for REST (representational state transfer) web services) file or in another form. For instance, each workflow step may be represented in workflow library 118 as API (application programming interface) metadata in Swagger format, defining the necessary inputs and outputs (parameters) of the workflow step, such that a service may be accessed according to the API definition. In such an implementation, the operations in the workflow definition information 316 refer to the corresponding API metadata in the interface definition information 318 to give a complete structure of a generated workflow (e.g., each sequenced workflow step/operation defined with parameter values in the workflow definition information 316 has a corresponding API, which is defined in the interface definition information 318). Accordingly, flowchart 200 and workflow designer 106 enable a developer to create workflows. FIGS. 5-8 show views of an exemplary workflow in various phases of development using a development GUI, according to example embodiments. For example, each of FIGS. 5-8 shows browser window 402 displaying a corresponding view of workflow designer GUI 116 being used to develop a workflow. For instance, FIG. 5 shows browser window 402 including a workflow step 502 and an add interface 504. Workflow step 502 was selected by a developer to be a first step in a workflow. Add interface 504 (e.g., a button or other GUI control) may be interacted with by the developer to add further workflow steps to the workflow. 
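For illustration only, the relationship between the two components can be sketched in Python, with a workflow-definition structure whose operations refer, by operation identifier, to Swagger-style API metadata in an interface definition. All field names, paths, and operation names below are hypothetical stand-ins, not the schema of any actual embodiment:

```python
# Hypothetical sketch: a workflow definition (sequenced operations with
# parameter values) whose action refers, via an operationId, to API metadata
# held in a Swagger-style interface definition.
workflow_definition = {
    "triggers": {
        "file_created": {"type": "ApiConnection", "inputs": {"folder": "/docs"}},
    },
    "actions": {
        "upload_to_sharepoint": {
            "type": "ApiConnection",
            "runAfter": ["file_created"],
            "inputs": {"operationId": "UploadFile"},
        },
    },
}

interface_definition = {
    "swagger": "2.0",
    "paths": {
        "/files/upload": {
            "post": {
                "operationId": "UploadFile",
                "parameters": [{"name": "file", "in": "body", "required": True}],
            },
        },
    },
}

def resolve_operation(defn, iface, action_name):
    """Find the Swagger operation metadata that a workflow action refers to."""
    op_id = defn["actions"][action_name]["inputs"]["operationId"]
    for path, methods in iface["paths"].items():
        for method, meta in methods.items():
            if meta.get("operationId") == op_id:
                return path, method, meta
    raise KeyError(op_id)

path, method, meta = resolve_operation(
    workflow_definition, interface_definition, "upload_to_sharepoint"
)
print(path, method)  # /files/upload post
```

Together the two structures give the "complete structure" described above: each sequenced operation in the workflow definition resolves to one API description in the interface definition.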
As described above, a developer is enabled to select workflow step 502 from a list or library of steps, a gallery of workflow steps, a template gallery, or elsewhere. A list, library, or gallery may include any number of workflow steps. The workflow steps may be associated with network-based applications mentioned elsewhere herein or otherwise known (e.g., Dropbox™), and/or with local applications mentioned elsewhere herein or otherwise known (e.g., Microsoft® Outlook®). Each workflow step is configured for plug-and-place into the workflow. Each workflow step is configured with the appropriate logic and/or interface(s) to perform its respective function(s), which may include communicating with a local or remote application. For instance, a workflow step may be configured to transmit a query to an application (e.g., a search query to a search engine, a database query to a database, a request for data from a social networking application, etc.), being pre-configured how to properly transmit and format such a request to the application. The workflow step may be configured to receive a response to the request, being pre-configured how to parse the response for desired response data. As such, a developer of a workflow does not need to know how to write program code in a programming language, to interface with complex application interfaces (e.g., application programming interfaces (APIs)), or to understand network communication protocols, as the workflow steps are already set up. When a workflow step is plugged into workflow logic by a developer, the developer configures the inputs to the workflow step (as described below), and the otherwise pre-configured workflow step handles any communications with other applications. In FIG. 6, the developer has interacted with step 502 (e.g., by mouse click, etc.) to cause step configuration UI generator 310 to generate a UI for configuration of step 502. For instance, in the example of FIG. 
6, workflow step 502 is configured to monitor for a file to be created in a particular folder identified by the developer in a text input box (e.g., by typing, clicking on a navigator indicated by “ . . . ”, etc.). When workflow step 502 determines a file is added to the indicated folder, a workflow step following workflow step 502 is triggered. Thus, workflow step 502 may be considered a trigger step in this example. For instance, in FIG. 7, the developer interacted with add interface 504 to select a next workflow step 702. In an embodiment, interaction with add interface 504 invokes step selector 308 in FIG. 3, which enables the developer to select a workflow step. In the example of FIG. 7, workflow step 702 is a conditional step. In embodiments, logical elements may be selected for inclusion in a workflow, including arithmetic logic (e.g., summers, multipliers, etc.), conditional logic, etc., that operate based on variable values determined in earlier workflow steps. The condition of workflow step 702 enables the workflow to fork based on the determination of a condition (e.g., a variable value). The condition may include an object name, a relationship (e.g., a logical relationship, such as equal to, includes, not equal to, less than, greater than, etc.), and a value, which are all defined by the developer interacting with workflow step 702. Corresponding action steps may be performed depending on which way the workflow forks based on the condition. In one illustrative example of FIG. 7, the object name may be selected (e.g., from a list of possibilities) to be a name of the created file of workflow step 502, the relationship may be “contains” (e.g., selected by a pull-down menu) and the value may be “dummyfile” (e.g., typed in by the developer). 
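The condition triple configured here (object name, relationship, value) and the resulting fork can be sketched minimally in Python. The relationship table and the action callables are illustrative assumptions used only to show the mechanics:

```python
# Illustrative sketch: a developer-configured condition is an (object,
# relationship, value) triple; evaluating it selects one of two action steps.
RELATIONSHIPS = {
    "contains": lambda obj, val: val in obj,
    "equal to": lambda obj, val: obj == val,
    "not equal to": lambda obj, val: obj != val,
}

def evaluate_condition(obj, relationship, value):
    return RELATIONSHIPS[relationship](obj, value)

def run_condition_step(file_name, yes_action, no_action):
    # The condition from the example: file name "contains" the value "dummyfile".
    if evaluate_condition(file_name, "contains", "dummyfile"):
        return yes_action(file_name)
    return no_action(file_name)

result = run_condition_step(
    "dummyfile_v2.txt",
    yes_action=lambda f: f"first action on {f}",
    no_action=lambda f: f"second action on {f}",
)
print(result)  # first action on dummyfile_v2.txt
```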
The condition evaluates to a “yes” condition if the file name contains “dummyfile,” which invokes first action workflow step 704, and evaluates to a “no” condition if the file name does not contain “dummyfile,” which invokes second action workflow step 706. An action may be defined for one or both of the “yes” and “no” action workflow steps 704 and 706 by the developer, if desired. For example, in FIG. 8, the developer interacts with action workflow step 704 to define an action. In this example, the developer is defining action workflow step 704 by selecting a workflow step via step selector 308. As shown in FIG. 8, a list of workflow steps 802A, 802B, 802C is displayed, from which the developer can select a workflow step (e.g., by mouse click, etc.) to be performed for action workflow step 704. The workflow step can be a trigger step, an action step, or a condition step. After selecting the workflow step, the developer may configure the workflow step as described above. Furthermore, the developer may configure an action for workflow step 706, may add further workflow steps, etc., eventually being enabled to save the workflow. It is noted that in some embodiments, a workflow step, such as first workflow step 502, may require credentials (e.g., a login and password) to access indicated data (e.g., to access a file at the location indicated in the text input box in FIG. 6). As such, the developer may be requested to provide credential information in association with first workflow step 502 so that when first workflow step 502 is performed during runtime, the data may be accessed. Alternatively, the credentials may be requested of a user during runtime. B. Example Runtime Embodiments According to embodiments, end users may execute workflows developed as described herein. During operation, an end user may interact with a GUI of the workflow, which may lead to workflow logic being executed. 
The workflow logic may execute locally (e.g., in a browser) and/or at a remote service (in “the cloud”). The workflow logic may access data of one or more applications, local or network-accessible, as was configured by the developer. Accordingly, the workflow performs its intended functions. FIG. 9 shows a block diagram of a system 900 for operating a workflow that includes one or more workflow steps, according to an example embodiment. As shown in FIG. 9, system 900 includes a computing device 902, first network-based application 124A, second network-based application 124B, and server 134. Computing device 902 includes a workflow application 904. Server 134 includes a workflow execution engine 906. System 900 is described as follows. First and second network-based applications 124A and 124B are each optionally present, depending on the configuration of workflow logic 120. Further network-based applications may be present, depending on the configuration of workflow logic 120. Computing device 902 may be any type of stationary or mobile computing device described herein or otherwise known. Computing device 902 is configured to communicate with first and second network-based applications 124A and 124B and server 134 over network 126. In one embodiment, workflows are executed at server 134 by workflow execution engine 906, and workflow application 904 is a UI application that enables a user at computing device 902 to interact with the executing workflows, such as by selecting and invoking the workflows, receiving communications from the executing workflows (e.g., messages, alerts, output data, etc.), providing requested input data to executing workflows, etc. 
In such an embodiment, workflow application 904 may be a workflow UI application associated with workflow execution engine 906 (e.g., workflow application 904 may be an extension of workflow execution engine 906) that may operate separate from or within a browser at computing device 902, or may be configured in another way. As shown in FIG. 9, workflow execution engine 906 may load workflow logic 120 for a selected workflow (e.g., selected from a workflow library by a user), and may execute workflow logic 120 to execute the workflow. In another embodiment, workflow application 904 may be configured to execute workflows at computing device 902. For instance, an end user of computing device 902 may interact with a user interface of workflow application 904 to select and invoke a particular workflow (e.g., selected from a workflow library). In such embodiments, workflow logic 120 may operate separate from or in a browser at computing device 902, or may be configured in another way. As shown in FIG. 9, workflow application 904 may load workflow logic 120 for a selected workflow (e.g., selected from a workflow library by a user), and may execute workflow logic 120 to execute the workflow. In another embodiment, a first portion of workflow logic 120 may operate in workflow application 904 at computing device 902 and a second portion of workflow logic 120 may operate in workflow execution engine 906 at server 134 and/or elsewhere. FIG. 10 shows a flowchart 1000 providing a process for executing workflow logic 120 of a workflow, according to an example embodiment. Flowchart 1000 is described as follows with respect to system 900 of FIG. 9 for illustrative purposes. Flowchart 1000 begins with step 1002. In step 1002, the workflow is executed. 
In an embodiment, an end user at computing device 902 may cause workflow logic 120 to be executed, such as by command line, by clicking/tapping or otherwise interacting with an icon representing the application, by selection in a browser, or in another manner. As described above, workflow logic 120 may execute in workflow application 904 at computing device 902 and/or in workflow execution engine 906 at server 134. When executed, the workflow steps of workflow logic 120 are performed in the configured sequence. Accordingly, one or more of the workflow steps may make calls to corresponding applications/services to perform their functions, such as local application 122 (to return data 132), network-based application 124A (to return data 130A), network-based application 124B (to return data 130B), and/or other applications, local or network-based. In step 1004, the workflow GUI is displayed. Step 1004 is optional, as in some embodiments, a GUI is not displayed for a workflow. In an embodiment, the GUI may be displayed by workflow application 904 at computing device 902. When displayed, the user may interact with the GUI by reviewing displayed data (e.g., from a file, database record, spreadsheet, or other data structure read by the workflow), by entering data into the GUI (e.g., by typing, by voice, etc.), and/or by interacting with one or more controls displayed by the GUI. In step 1006, workflow logic is triggered based on an interaction with the workflow. Step 1006 is optional, occurring in cases where one or more workflow steps of a workflow require input from a user. In such cases, the user interacts with a control in a GUI of workflow application 904 associated with a workflow step of workflow logic 120 to provide information that triggers logic of the workflow step to operate. 
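The sequencing described above, in which workflow steps are performed in their configured order and individual steps call out to applications or services, can be sketched as follows. The three step functions are hypothetical stand-ins for calls to local and network-based applications:

```python
# Illustrative sketch: workflow steps executed in their configured sequence,
# each step's output passed as input to the next. Each function stands in for
# a hypothetical call to a local or network-based application.
def read_record(_):
    return {"order_id": 42}          # stands in for a local application call

def enrich(record):
    record["status"] = "processed"   # stands in for a network-based application call
    return record

def notify(record):
    return f"order {record['order_id']} {record['status']}"

def execute_workflow(steps, initial=None):
    data = initial
    for step in steps:               # performed in the configured sequence
        data = step(data)
    return data

print(execute_workflow([read_record, enrich, notify]))  # order 42 processed
```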
In this manner, workflow logic 120 performs its functions, such as processing orders, tracking information, generating messages, processing documents to generate tasks or information, collecting feedback, and/or any other functions. III. Automatic Generation of Templates in an Automated Workflow Development System As described above with respect to FIG. 3, template gallery generator 304 may enable display of a template gallery that includes a plurality of selectable workflow templates that each include one or more workflow steps pre-connected for operation. The workflow templates may be stored in workflow library 118 (FIG. 1), and accessed for display by workflow designer GUI 116. The developer may select one of the workflow templates for inclusion in their workflow, and may proceed with configuring the contents of the workflow template, and/or may add additional workflow steps to the workflow steps of the workflow template to generate a more complex workflow. FIG. 11 depicts an example interactive display screen 1102 of workflow designer GUI 116 via which one or more manually-generated or automatically-generated templates may be presented to a user in accordance with an embodiment. As shown in FIG. 11, a plurality of workflow template indications is shown, including workflow template indications 1104A-1104C. Each workflow template indication is represented by a rectangular icon and by a text description of the functionality of the workflow template. Thus, for example, the templates presented to the user in FIG. 11 include “When you save a file in Office 365, upload to SharePoint” (workflow template 1104A), “Post your Instagram to Twitter” (workflow template 1104C), “Simplify your workflow with work essentials” (workflow template 1104B), “Set up project essentials”, and “Start working on accepted estimates”. Each workflow template is defined as a combination of steps. 
For example, a template (e.g., workflow template 1104A) may be defined as a combination of a trigger (“When you save a file in Office 365”) and an action that is carried out when the trigger is activated (“upload to SharePoint”). Each step included in the template may include an interaction with a particular connector or service that is accessed using a suitable application programming interface (API). That is to say, each connector (e.g., DropBox, Outlook, Bing Search, SharePoint, etc.) may enable one or more operations to be performed with respect to that connector. When a user selects a particular template indicated on the screen (e.g., by touching it on a touch screen, by pointing to it with a mouse and clicking, or the like), the user is taken to a workflow designer screen (e.g., browser window 402 of FIG. 7) that provides a graphical representation of each of the steps of the template and that provides a means by which the user can customize various parameters referenced by each of the steps, as well as a means by which the user can add, modify or remove steps, test the automated workflow, and save the automated workflow. In an embodiment, each of the templates that are presented to the user via interactive display screen 1102 shown in FIG. 11 may represent an automated workflow that was developed manually by a person, stored, and then published for use by others. However, in an embodiment in which there are a large number of connectors each of which may have a large number of operations, the number of possible templates that may be generated by combining different connector operations can be extremely large and it may simply not be possible to manually generate (and optionally tailor) every possible template. To address this issue, template generation logic of workflow designer 106 may be configured to automatically combine different connector operations to generate new templates that may be presented to the user. 
Each new combination can be presented as a new template via an interactive display screen such as that shown in FIG. 11. Thus, for example, the template generation logic can automatically combine an Office 365 operation of saving a file with a SharePoint operation of uploading a file to automatically create a template that uploads a file to SharePoint whenever a file is saved in Office 365. In an embodiment, the template generation logic is also configured to automatically generate text describing the operations performed by the newly-generated template. Thus, in further accordance with the foregoing example, the template generation logic can automatically generate the text “When you save a file in Office 365, upload to SharePoint”. In a further embodiment, telemetry/usage statistics may be collected about which manually-generated and automatically-generated templates are utilized by users. Based on such usage information, a determination can be made about which templates are popular with users. This information can in turn be used to determine which templates should be highlighted for users on an interactive display screen, what type of templates should be automatically generated in the future, and/or what type of templates should be manually generated and/or tailored in the future. Accordingly, in embodiments, a workflow designer may be configured to generate workflow templates. For example, FIG. 12 shows a block diagram of workflow designer 106 configured to automatically generate workflow templates, according to an example embodiment. As shown in FIG. 12, workflow designer 106 includes template generation logic 1202. Template generation logic 1202 is configured to automatically generate templates (pre-generated user-customizable automated workflows) that can be presented to a user and that can be used thereby to easily and efficiently develop automated workflows. As shown in FIG. 
12, template generation logic 1202 includes a workflow step combiner 1206, a description generator 1208, and a usage analyzer 1210. Workflow step combiner 1206 is configured to combine workflow steps to generate workflow templates. Description generator 1208 is configured to generate labels for the generated workflow templates. Usage analyzer 1210 is optionally present. When present, usage analyzer 1210 is configured to monitor the usage of workflow steps and the creation of workflows from workflow steps by users, and usage statistics based thereon may be used to determine desired combinations of workflow steps to form into templates. Template generation logic 1202 is described in further detail as follows with respect to FIG. 13. FIG. 13 shows a flowchart 1300 providing a process for automatically generating workflow templates, according to an example embodiment. In an embodiment, template generation logic 1202 may operate according to flowchart 1300. Note that not all steps of flowchart 1300 need to be performed in all embodiments. Template generation logic 1202 and flowchart 1300 are described as follows. Flowchart 1300 begins with step 1302. In step 1302, a plurality of workflow steps in a library of workflow steps is determined. In an embodiment, workflow step combiner 1206 of FIG. 12 is configured to access workflow library 118 (and/or other library that contains workflow steps) to determine a plurality of workflow steps contained within. Workflow step combiner 1206 may determine all of the workflow steps in the library, or may determine a portion thereof based on filter criteria configured by a developer/administrator (e.g., filtering out particular types of workflow steps, such as messaging steps, file monitoring steps, alerting steps, etc.). Any number and combination of types of workflow steps may be determined, including one or more trigger steps and/or action steps. 
Generally, a trigger step of a workflow monitors for a condition, and when the condition is met, the trigger step triggers (activates or enables) one or more following workflow steps. An action step performs an action when reached in a workflow, such as following another action step, or when triggered by a trigger step. A condition step may be considered a trigger step or an action step, depending on the particular condition step and the situation. In step 1304, one or more workflow templates are automatically generated, each automatically generated workflow template including a combination of at least two of the workflow steps in the library. In an embodiment, workflow step combiner 1206 receives workflow steps 1214 from workflow library 118. Workflow steps 1214 includes the workflow steps determined in step 1302. Workflow step combiner 1206 is configured to automatically generate one or more workflow templates 1216 by combining workflow steps received in workflow steps 1214. Workflow step combiner 1206 may combine the workflow steps in any manner to generate any number of workflow templates 1216, in a similar manner as combining workflow steps to generate workflows as described further above with respect to FIGS. 1-8. FIG. 14 shows a flowchart 1400 providing a process for combining a trigger step with one or more action steps to generate a workflow template, according to an example embodiment. Workflow step combiner 1206 may operate according to flowchart 1400 in an embodiment. Flowchart 1400 is described as follows. In step 1402, a trigger step of the workflow steps in the workflow step library is selected. In an embodiment, workflow step combiner 1206 may select a trigger step of workflow steps 1214. The particular trigger step may be selected in any manner, including by being the next trigger step in a particular order (e.g., ordered by step identifiers assigned to workflow steps), randomly, according to a selection algorithm, or in any other manner. 
In one example, the selected trigger step may be a table monitoring workflow step that monitors when a row is added to a table. For example, the trigger step may be implemented in Google Sheets™. In step 1404, at least one action step of the workflow steps in the workflow step library is selected. In an embodiment, workflow step combiner 1206 may select one or more action steps of workflow steps 1214. The particular action step(s) may be selected in any manner, including by being the next action step(s) in a particular order (e.g., ordered by step identifiers assigned to workflow steps), randomly, according to a selection algorithm, or in any other manner. Continuing the example, the selected action step may be a messaging workflow step that transmits a message. For example, the action step may be implemented in a text messaging application. In step 1406, the selected trigger step and the selected at least one action step are automatically combined to generate a workflow template. In an embodiment, workflow step combiner 1206 may automatically (e.g., without human intervention) combine the selected trigger step with the one or more selected action steps into an interconnected workflow. The trigger step may be configured by workflow step combiner 1206 as the initiating step, having one or more outputs (e.g., a trigger signal, etc.) that flow into the one or more action steps, triggering the one or more action steps into action when the trigger step is triggered. When multiple action steps are present, the action steps may be configured to operate in series, parallel, or a combination thereof. Continuing the above example, workflow step combiner 1206 may combine the selected table monitoring workflow step with the selected messaging workflow step, in this particular example, combining the Google Sheets™ application workflow step with the text messaging application workflow step. 
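The combination performed in step 1406 can be sketched as follows: a trigger step is wired as the initiating step whose output flows into one or more action steps. The dataclass fields and the template dictionary layout are illustrative assumptions, not the structure of any actual embodiment:

```python
# Illustrative sketch: automatically combining a selected trigger step with
# selected action steps into an interconnected workflow template.
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    step_id: str
    kind: str                       # "trigger" or "action"
    params: dict = field(default_factory=dict)

def combine(trigger, actions):
    """Wire the trigger as the initiating step; its output flows to the actions."""
    assert trigger.kind == "trigger"
    return {
        "trigger": trigger.step_id,
        "actions": [a.step_id for a in actions],   # run when the trigger fires
        "params": {s.step_id: s.params for s in [trigger, *actions]},
    }

# Hypothetical steps echoing the example: table monitoring trigger + messaging action.
row_added = WorkflowStep("sheets_row_added", "trigger", {"sheet": "Orders"})
send_text = WorkflowStep("send_text_message", "action", {"recipient": "+15550100"})
template = combine(row_added, [send_text])
print(template["trigger"], template["actions"])  # sheets_row_added ['send_text_message']
```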
Workflow step combiner 1206 may configure the Google Sheets™ workflow step to trigger the text messaging workflow step, such that when a new row is added to a table in Google Sheets™, the text messaging workflow step is triggered to send a text message to notify a recipient of the row being added. Note that when generating a workflow template 1216, workflow step combiner 1206 may or may not generate the workflow template 1216 in a generic or anonymized manner. For instance, when anonymizing, workflow step combiner 1206 may generate workflow template 1216 to not include parameter values (or genericized parameter values) for at least some parameters. In the above example, workflow step combiner 1206 may erase or otherwise genericize parameter values for the row monitored by the Google Sheets™ workflow step and/or for the recipient(s) of the text message transmitted by the text messaging workflow step. Such parameter values may be later filled in by a user (as described further above) when such user selects (e.g., makes a copy of) workflow template 1216 to convert into an actual workflow. Anonymizing workflow templates is described further below in a later subsection. In an embodiment, workflow step combiner 1206 may access or be configured similarly to workflow logic generator 112 (FIG. 1) to generate workflow template 1216 in the form of workflow logic 120 (without at least some parameter values, as described above). For example, workflow step combiner 1206 may generate workflow template 1216 in various forms, such as in the form of a package that includes at least two components (e.g., files): workflow definition information 316 and interface definition information 318 (FIG. 3). 
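The anonymization just described, erasing or genericizing parameter values so a user can fill them in later, can be sketched minimally. The template shape here is an illustrative assumption:

```python
# Illustrative sketch: copy a workflow template while blanking out parameter
# values, so a user supplies them when converting the template into a workflow.
def anonymize(template, keep=()):
    """Erase parameter values except those explicitly kept."""
    anon = {**template, "params": {}}
    for step_id, params in template["params"].items():
        anon["params"][step_id] = {
            k: (v if k in keep else None) for k, v in params.items()
        }
    return anon

template = {
    "trigger": "sheets_row_added",
    "actions": ["send_text_message"],
    "params": {
        "sheets_row_added": {"sheet": "Orders"},
        "send_text_message": {"recipient": "+15550100"},
    },
}
anon = anonymize(template)
print(anon["params"]["send_text_message"])  # {'recipient': None}
```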
As described above, workflow definition information 316 includes information that defines the sequence and operation of the workflow of workflow logic (e.g., lists the workflow step operations and their ordering/sequencing) and includes the parameter values for the workflow. Furthermore, as described above, interface definition information 318 includes information that defines the interfaces/parameters (e.g., inputs and outputs) of the workflow steps of a workflow. When generating workflow template 1216, workflow step combiner 1206 generates workflow definition information 316 to include the information that defines the sequence and operation of the workflow template, and generates interface definition information 318 to include information that defines the interfaces/parameters (e.g., inputs and outputs) of the workflow steps of the workflow template, while leaving out or replacing parameter values for any anonymized parameters, as described above. Referring back to FIG. 14, in step 1408, text describing operations performed by the generated workflow template is automatically generated. In an embodiment, description generator 1208 is present, and receives workflow template 1216. Description generator 1208 is configured to automatically generate text describing the operations performed by workflow template 1216, and to include the generated text (e.g., as a parameter value for a workflow template “title” or “name” parameter) with the workflow template 1216, thereby generating labeled workflow template 1218. The generated text may be used in various ways, including being displayed on an icon representing a workflow template, such as shown in FIG. 11, for example. For example, description generator 1208 may automatically generate the text “When you save a file in Office 365, upload to SharePoint” for a workflow template that triggers a file upload to SharePoint when a file is saved in Office 365. 
Description generator 1208 may generate the text in any manner, including by determining names or labels for the workflow steps being combined into the workflow template (e.g., from “title” or “name” parameters of the workflow steps), and combining the determined names or labels of the workflow steps into the text. Continuing the example above, description generator 1208 may automatically generate the text “Send a text when new row is added in Google Sheets,” and save the generated text as a name or label for the workflow template. Note that template generation logic 1202 may generate workflow templates according to FIG. 12 or in other manners. Furthermore, template generation logic 1202 may generate numbers of workflow templates based on the available workflow steps in various ways. FIGS. 15-17 illustrate various techniques for generating workflow templates, and are described as follows. For instance, FIG. 15 shows a step 1502 for automatically generating all combinations of workflow templates based on a set of workflow steps, according to an example embodiment. In step 1502, all combinations of trigger steps with action steps in the workflow step library are iterated to generate a plurality of workflow templates. In an embodiment, workflow step combiner 1206 may be configured to generate a plurality of workflow templates by iterating through all combinations of trigger steps with action steps. For example, if there are ten different types of trigger steps in workflow library 118, and twelve different types of action steps in workflow library 118, workflow step combiner 1206 may generate one hundred and twenty (120) workflow templates, which includes all combinations of trigger steps and action steps. In a further embodiment, workflow step combiner 1206 may be configured to generate a plurality of workflow templates by iterating through all combinations of trigger steps with all combinations of pairs of action steps, or greater numbers of action steps. 
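The enumeration in step 1502 amounts to a Cartesian product of trigger steps and action steps: ten triggers paired with twelve actions yields 10 × 12 = 120 templates. A minimal sketch with hypothetical step names:

```python
# Illustrative sketch: iterating all trigger/action combinations to generate
# candidate workflow templates (10 triggers x 12 actions = 120 templates).
from itertools import product

triggers = [f"trigger_{i}" for i in range(10)]
actions = [f"action_{j}" for j in range(12)]

templates = [{"trigger": t, "action": a} for t, a in product(triggers, actions)]
print(len(templates))  # 120
```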
Thus, according to step 1502, large numbers of workflow templates may be generated, some of which may contain useful combinations of workflow steps, while others may not. In such manner, a workflow template gallery may potentially be cluttered with large numbers of workflow templates that are not useful, which may be undesirable to developers who access the gallery. FIG. 16 shows a step 1602 for reducing a number of automatically generated workflow templates, according to an example embodiment. Step 1602 may be used to pare down the number of workflow templates generated in step 1502 of FIG. 15. In step 1602, an administrator is enabled to curate the template library to eliminate one or more workflow templates from the template library. In an embodiment, an administrator may access template library 1212 to view and delete workflow templates that the administrator deems not useful. For example, interactive display screen 1102 (FIG. 11) provided by workflow designer GUI 116 may be displayed to the administrator to show the workflow templates in template library 1212. The administrator may have administrator privileges with respect to template library 1212, enabling the administrator to select and delete workflow templates. For instance, the administrator may read the labels of the workflow templates to inform the administrator of their contents, and may use this information to delete workflow templates the administrator deems to lack usefulness. In this manner, a displayed workflow template gallery will not be cluttered with workflow templates that are not likely to be desired by users. In other embodiments, workflow templates may be generated automatically in a more intelligent fashion. For instance, FIG. 17 shows a flowchart 1700 providing a process for automatically generating workflow templates based on usage statistics, according to an example embodiment. 
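A usage-statistics-based selection of this kind might look like the following sketch. The usage data, combination names, and threshold policy are hypothetical assumptions for illustration, not the actual behavior of usage analyzer 1210.

```python
from collections import Counter

def popular_combinations(workflows, threshold):
    """Return trigger/action combinations formed into workflows more than
    a predetermined threshold number of times (a sketch of the usage
    analysis described for flowchart 1700)."""
    counts = Counter((wf["trigger"], wf["action"]) for wf in workflows)
    return [combo for combo, n in counts.items() if n > threshold]

# Hypothetical usage statistics: each entry is one developer-created workflow.
usage = (
    [{"trigger": "New row in Google Sheets", "action": "Send a text"}] * 5
    + [{"trigger": "File saved in Office 365", "action": "Upload to SharePoint"}] * 2
)

# Only combinations occurring more than three times are considered popular.
print(popular_combinations(usage, threshold=3))
# [('New row in Google Sheets', 'Send a text')]
```

Each combination returned would then be formed into a workflow template and stored in the template library, sparing users from assembling popular step combinations themselves.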
Flowchart 1700 begins with step 1702. In step 1702, statistics regarding workflows created by developers are analyzed to determine a workflow created by the developers in numbers or at a frequency greater than a predetermined threshold. In an embodiment, workflow steps in workflow library 118 may be available to many developers for inclusion in workflows. Usage analyzer 1210 is configured to receive usage statistics 1220 associated with workflow library 118. Usage statistics 1220 may include information on accesses of workflow steps in workflow library 118, including numbers of times each workflow step is incorporated into a workflow by a developer, numbers of times users combine particular workflow steps together in workflows, and/or further information regarding access of workflow steps in workflow library 118. Usage analyzer 1210 analyzes usage statistics 1220 to determine relatively popular combinations of workflow steps in workflows, which may be advantageously formed into workflow templates. As a result, users may be enabled to select from template library 1212 the workflow templates containing relatively popular combinations of workflow steps, rather than the users having to assemble those workflow step combinations themselves. For example, usage analyzer 1210 may compare the number of times each particular workflow step combination (e.g., a particular trigger step interconnected with one or more particular action steps) is formed into a workflow against a predetermined threshold value. The predetermined threshold value may be a minimum number of times a particular workflow step combination has to occur to be considered popular. Usage analyzer 1210 may generate a popular workflow indication 1222 to indicate one or more workflows that contain workflow step combinations occurring a greater number of times than the predetermined threshold. In step 1704, a workflow template corresponding to the determined workflow is generated. As shown in FIG. 
12, workflow step combiner 1206 receives popular workflow indication 1222 (when usage analyzer 1210 is present). Workflow step combiner 1206 is configured to generate a workflow template 1216 that includes the combination of workflow steps indicated in popular workflow indication 1222. Workflow step combiner 1206 may generate the workflow template as described herein. In step 1706, the workflow template corresponding to the determined workflow is stored in the template library. As shown in FIG. 12, description generator 1208 may generate a label for workflow template 1216, and store labeled workflow template 1218 in template library 1212, or workflow step combiner 1206 may store workflow template 1216 directly in template library 1212. Further ways for determining which workflow templates to generate and store in template library 1212 will be apparent to persons skilled in the relevant art(s) based on the teachings herein. Referring back to FIG. 13, in step 1306, the one or more workflow templates are stored in a library of templates. As shown in FIG. 12, description generator 1208 may store labeled workflow template 1218 in template library 1212. Alternatively, in an embodiment where description generator 1208 is not present, workflow step combiner 1206 may store workflow template 1216 directly in template library 1212. Template library 1212 is a library of workflow templates, containing any number of workflow templates. In step 1308, a template gallery is displayed in a graphical user interface including indications of the one or more workflow templates in the template library. In an embodiment, template gallery generator 304 (FIG. 3) may access template library 1212 for workflow templates to display in workflow designer GUI 116. As described above, FIG. 11 depicts an example interactive display screen 1102 of workflow designer GUI 116 via which one or more manually-generated or automatically-generated templates may be presented. As shown in FIG. 
11, a plurality of workflow template indications is shown, including workflow template indications 1104A-1104C. Each workflow template indication is represented by an icon and by a text description of the functionality of the corresponding workflow template. In step 1310, developers are enabled to interact with the graphical user interface to select workflow templates from the template library for inclusion in workflows. In an embodiment, a developer may select one of the workflow templates of FIG. 11 (indicated as workflow templates 1104A-1104C, etc.) for inclusion in their workflow, and may proceed with configuring the contents of the workflow template, and/or may add additional workflow steps to the workflow steps of the workflow template to generate a more complex workflow. IV. Generation of Templates Based Upon Selected Pre-Generated Trigger Steps As described in the prior section, workflow templates may be generated in various ways, including by iterating through all combinations of available workflow steps (e.g., trigger steps and action steps), enabling an administrator to curate a set of generated workflow templates, generating workflow templates based on usage statistics, etc. In another embodiment, when a developer (or administrator) selects a workflow step, a selection of workflow steps compatible with the workflow step may be automatically determined and displayed. The administrator may select one or more of the automatically displayed workflow steps to be combined with the initially selected workflow step to create a workflow template. The automatic display of compatible workflow steps helps the developer avoid trial-and-error selection of incompatible workflow steps until a compatible workflow step is selected, instead making sure that only compatible workflow steps are displayed to the developer for selection. The resulting workflow template may be stored in workflow library 118 (FIG. 
1) or elsewhere, to be made available to developers for inclusion in their workflows. For example, FIG. 18 shows a block diagram of a workflow designer 106 containing template generation logic configured to determine compatible workflow steps, according to an embodiment. As shown in FIG. 18, workflow designer 106 includes template generation logic 1802. Template generation logic 1802 is configured to generate workflow templates based on compatible workflow steps. As shown in FIG. 18, template generation logic 1802 includes a compatible workflow step determiner 1804. Compatible workflow step determiner 1804 is configured to assist in generating workflow templates by automatically determining workflow steps compatible with selected workflow steps. Template generation logic 1802 is described in further detail as follows with respect to FIG. 19. FIG. 19 shows a flowchart 1900 providing a process for automatically determining compatible workflow steps, according to an example embodiment. In an embodiment, template generation logic 1802 may operate according to flowchart 1900. Note that not all steps of flowchart 1900 need to be performed in all embodiments. Template generation logic 1802 and flowchart 1900 are described as follows. Flowchart 1900 begins with step 1902. In step 1902, a developer (or administrator) is enabled to select a first workflow step. As described further above with respect to step 204 of flowchart 200 (FIG. 2) and UI generator 110 (e.g., step selector 308 of FIG. 3), developers may interact with a user interface to select workflow steps for inclusion in a workflow. For instance, FIG. 20 shows a view of browser window 402, which, as described above, is an example GUI for developing workflows. In FIG. 20, a developer may have selected workflow step 502 to be a step in a workflow being developed. In step 1904, one or more workflow steps compatible with the first workflow step are automatically determined. As shown in FIG. 
18, compatible workflow step determiner 1804 receives a selected step indication 1806, which indicates the workflow step selected in step 1902. In an embodiment, compatible workflow step determiner 1804 is configured to automatically determine one or more workflow steps compatible with the workflow step indicated by selected step indication 1806. Compatible workflow step determiner 1804 may determine one or more workflow steps compatible with a workflow step indicated as selected by a user in various ways. For instance, in an embodiment, compatible workflow step determiner 1804 may operate according to flowchart 2100. FIG. 21 shows a flowchart 2100 providing a process for determining trigger steps and action steps that are compatible with each other, according to an example embodiment. Flowchart 2100 is described as follows. In step 2102, one or more trigger steps compatible with the first workflow step are automatically determined in response to the first workflow step being an action step. In an embodiment, compatible workflow step determiner 1804 determines what type of workflow step was indicated as selected (e.g., based on a parameter of the selected workflow step, based on lists of types of workflow steps, etc.). When the selected workflow step is determined to be an action step, compatible workflow step determiner 1804 may be configured to automatically determine one or more trigger steps compatible with the action step. Alternatively, compatible workflow step determiner 1804 may automatically determine one or more action steps compatible with the action step, or a combination of trigger steps and action steps compatible with the action step. In step 2104, one or more action steps compatible with the first workflow step are automatically determined in response to the first workflow step being a trigger step. 
When the selected workflow step is determined to be a trigger step, compatible workflow step determiner 1804 may be configured to automatically determine one or more action steps compatible with the trigger step. Alternatively, compatible workflow step determiner 1804 may automatically determine one or more trigger steps compatible with the trigger step, or a combination of trigger steps and action steps compatible with the trigger step. Compatible workflow step determiner 1804 may be configured to determine workflow steps compatible with a selected workflow step in various ways. For instance, in one embodiment, compatible workflow step determiner 1804 may maintain a list of all available workflow steps, and for each listed workflow step, may indicate all available workflow steps compatible with the listed workflow step. Each workflow step may be indicated as a "trigger" or "action" step, if it is desired to select only compatible trigger steps or only compatible action steps. In another embodiment, compatible workflow step determiner 1804 may access usage statistics 1220 (FIG. 12) to determine workflow steps combined with each other by users in their workflows, and may categorize workflow steps combined with each other in user workflows as being compatible with each other. In other embodiments, compatible workflow step determiner 1804 may determine compatible workflow steps in other ways. Referring back to FIG. 19, in step 1906, the determined one or more workflow steps are displayed in association with the first workflow step. In an embodiment, UI generator 110 (FIG. 1) may be configured to display the compatible workflow steps determined in step 1904 in association with the workflow step selected in step 1902. 
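The list-based embodiment described above can be sketched as follows. The step names, the compatibility table, and its field names are hypothetical; a real compatible workflow step determiner might instead derive the table from usage statistics.

```python
# Hypothetical compatibility table: for each workflow step, its kind
# ("trigger" or "action") and the steps it can be combined with.
STEPS = {
    "When a file is created": {
        "kind": "trigger",
        "compatible": ["Post a message to Slack", "Send an email"],
    },
    "Post a message to Slack": {"kind": "action", "compatible": ["When a file is created"]},
    "Send an email": {"kind": "action", "compatible": ["When a file is created"]},
}

def compatible_steps(selected, want_kind=None):
    """Return steps compatible with the selected step, optionally limited
    to only trigger steps or only action steps (per steps 2102/2104)."""
    names = STEPS[selected]["compatible"]
    if want_kind is not None:
        names = [n for n in names if STEPS[n]["kind"] == want_kind]
    return names

# A trigger step was selected, so determine compatible action steps:
print(compatible_steps("When a file is created", want_kind="action"))
# ['Post a message to Slack', 'Send an email']
```

The `want_kind` filter mirrors the type check in flowchart 2100: the determiner first decides whether the selected step is a trigger or an action step, then returns compatible steps of the complementary type.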
The determined compatible workflow steps may be displayed in association with the selected workflow step in any manner, including being displayed to the left, right, top, or bottom of the selected step, in a pull-down menu, or in any other manner in association with the selected workflow step. Alternatively, a workflow step gallery generated by workflow step gallery generator 302 (FIG. 3) may be displayed on the same page or a new page. For example, as shown in FIG. 20, the developer selected workflow step 502 for inclusion in a workflow, and in response, a compatible connector selector 2002 may be displayed adjacent to workflow step 502 (to the right in FIG. 20) that includes a plurality of workflow steps determined to be compatible with workflow step 502. Compatible connector selector 2002 may provide a scrollable list, particularly when the determined compatible workflow steps do not fit on one page, or may display the compatible workflow steps in another manner. Once the compatible workflow steps are displayed, the developer (or administrator) may be enabled to select one or more of them for incorporation into a workflow template. In particular, FIG. 22 shows a flowchart 2200 providing a process for selecting a compatible workflow step for inclusion in a workflow template, according to an example embodiment. In an embodiment, flowchart 2200 may be a continuation of flowchart 1900. Flowchart 2200 is described as follows. In step 2202, the developer is enabled to select a second workflow step of the displayed one or more workflow steps. In an embodiment, step selector 308 (or other mechanism for selecting workflow steps) may enable the developer to select a workflow step from the displayed compatible workflow steps in any manner described elsewhere herein or otherwise known. For instance, with respect to FIG. 20, the developer may scroll through compatible workflow steps displayed by compatible connector selector 2002. 
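Interconnecting a selected compatible step with the first step, and labeling the result, can be sketched as follows. The step names and label format are illustrative assumptions, not the actual output of step selector 308 or description generator 1208.

```python
def build_template(first_step, second_step):
    """Interconnect two selected steps into a workflow template and
    generate a descriptive label (a sketch; the label simply joins the
    step names with the second step lower-cased)."""
    label = f"{first_step}, {second_step[0].lower()}{second_step[1:]}"
    return {"steps": [first_step, second_step], "label": label}

# The developer selected a trigger step, then picked a compatible action
# step from the displayed list:
template = build_template("When a file is created", "Post a message to Slack")
print(template["label"])  # When a file is created, post a message to Slack
```

After the second step is included, the same compatibility determination could be run again for the just-included step, letting the developer chain further compatible steps into the template.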
In step 2204, the second workflow step is inserted into a workflow template interconnected with the first workflow step. In an embodiment, step selector 308 (or other mechanism for selecting workflow steps) may insert the selected workflow step into the workflow template with the first workflow step selected in step 1902 in any manner described elsewhere herein or otherwise known. For instance, with respect to FIG. 20, the developer may have selected (in step 2202 of flowchart 2200) workflow step 2004 displayed by compatible connector selector 2002 by clicking on workflow step 2004 with a mouse pointer or in any other manner. Step selector 308 may incorporate selected workflow step 2004 into a workflow template with workflow step 502. A label may be generated for the workflow template (e.g., "When a file is created, post a message to Slack"), and the workflow template may be saved, as described elsewhere herein. Note that upon inclusion of one of the compatible workflow steps in a workflow, in an embodiment, compatible workflow step determiner 1804 may determine and display another list of compatible workflow steps, this time determined for the just-included workflow step according to flowchart 1900. Another compatible workflow step may be incorporated into the workflow in this manner, as well as even further compatible workflow steps. V. Selective Anonymization of Automated Workflow Templates for Private and Public Sharing In an embodiment, template generation logic 1202 and 1802 of FIGS. 12 and 18 are configured to allow a user to publish automated workflow templates to sites where they can be accessed by developers. As noted above, a template comprises a pre-generated user-configurable automated workflow. 
By allowing a user to publish a developed/completed automated workflow as a template, template generation logics 1202 and 1802 advantageously enable users to leverage the work of others in their community to identify automated workflows that may be useful to them and that can be used as a starting point for developing their own automated workflows. FIG. 23 depicts an example interactive display screen 2302 of an automated workflow development application (e.g., workflow designer 106 of FIG. 1) that enables a user to publish a completed workflow application as a template. As shown in FIG. 23, screen 2302 includes an interactive element 2304 (e.g., a button) that a user may interact with to publish a developed/completed automated workflow as a template. As further shown in screen 2302, the user may also be presented with a first text entry box 2306 into which the user may type a name of the template ("Send a text when a row is added in Google Sheets") (rather than the name being generated automatically, as described above) and a second text entry box 2308 into which the user may type a more extensive description of the automated workflow. In this example, the user may select an interactive element 2310 (e.g., a button) to submit the text entered into text entry boxes 2306 and 2308 to be associated with the workflow template. In some embodiments, the user may selectively share the workflow template with the entire public (e.g., by publishing the template to a general population of users via the Internet) or may selectively share the template privately (e.g., by publishing the template to a population of users within a private organization, such as an enterprise, or sharing the template with a target user or users). Depending on the audience to whom the template is published, various parameters associated with the template may or may not need to be anonymized. For example, as described above, FIG. 6 shows a workflow step 502 of an automated workflow under development. 
Workflow step 502 is included in the workflow under further development in FIG. 6, the workflow including a series of steps, wherein each step has one or more user-configurable parameters associated therewith. As shown in FIG. 6, workflow step 502 includes a parameter of a folder name shown as /PowerApps/FolderName. If this automated workflow were to be shared as a template with other users within the same private organization, then it might make sense to include the same folder name of /PowerApps/FolderName within the template. However, if this automated workflow were to be shared as a template with users outside of the enterprise, then it might make more sense to remove the particular folder name (i.e., to anonymize the parameter) since that folder name will have no relevance to users outside the enterprise, may not be accessible by users outside the enterprise, and/or may include private information not desired to be divulged outside of the organization. Other examples of parameters in an automated workflow that might need to be anonymized when publishing outside of an organization might include, for example, an identifier of a particular database, an identifier of a particular connection, an identifier of a particular account, as well as credentials or other information that may be used to access such a database, connection or account. As such, a workflow designer may be configured to enable parameter values to be anonymized for any of these reasons, as well as other reasons that would be apparent to persons skilled in the relevant art(s). Accordingly, FIG. 24 is a block diagram of workflow designer 106 configured to selectively anonymize templates for public and private sharing in accordance with an embodiment. As shown in FIG. 24, workflow designer 106 includes template generation logic 2402. 
Template generation logic 2402 includes a workflow template publisher 2406 that receives an automated workflow to be published as a template and then publishes such template. As further shown in FIG. 24, template generation logic 2402 further includes a selective anonymizer 2404 that selectively anonymizes certain parameters included in or referenced by the automated workflow based on one or more factors prior to publication. The one or more factors may include, for example, and without limitation, the intended target audience for publication (e.g., whether publication will be to the general public or within an organization only, whether publication will be to an entire organization or to a particular team within an organization, whether publication will be to a single individual only, etc.), the user sharing the template, the organization within which the template is being shared, and the context in which the template was built. Depending upon these factors, some connections and/or properties of an automated workflow may be anonymized and some may not prior to publication. Template generation logic 2402 is described in further detail as follows with respect to FIG. 25. FIG. 25 shows a flowchart 2500 providing a process for anonymizing parameters in a workflow template, according to an example embodiment. In an embodiment, template generation logic 2402 may operate according to flowchart 2500. Note that not all steps of flowchart 2500 need to be performed in all embodiments. Template generation logic 2402 and flowchart 2500 are described as follows. Flowchart 2500 begins with step 2502. In step 2502, an automated workflow template is received for publication that includes parameters. In embodiments, a workflow template 2408 may be generated (e.g., as described above) and received by template generation logic 2402. Workflow template 2408 may include one or more parameters having parameter values, as described above. 
This may be because workflow template 2408 is formed from a combination of workflow steps that had parameter values assigned. At least some of the parameter values may be desired to be anonymized prior to publication of workflow template 2408 (where workflow template 2408 is made available to a different set of users). Workflow template 2408 may be received by template generation logic 2402 to be anonymized, and thereafter published as an anonymized workflow template. An example workflow that may be received for publication as a workflow template is illustrated with respect to FIG. 26. FIG. 26 shows a view of a browser window 402 that includes a workflow 2600 under development, according to an example embodiment. Workflow 2600 is an automated workflow that includes a series of steps, wherein each step has one or more user-configurable parameters associated therewith. One or more of the workflow steps have parameters to be anonymized. For instance, as shown in FIG. 26, workflow 2600 includes a trigger step 2602 as a first step and an action step 2604 as a second step. Trigger step 2602 includes as parameters a site URL field filled with a parameter value of "https://contoso.sharepoint.com/teams/marketing" and a list name field filled with a parameter value of "Assets." If workflow 2600 were shared as a workflow template with other users within the same private organization, then it might make sense to include the same site URL "https://contoso.sharepoint.com/teams/marketing" within the template as the parameter value. However, if workflow 2600 were to be shared as a workflow template with users outside of the enterprise, then it might make more sense to remove the particular site URL (i.e., to anonymize the parameter) since that site URL may have no relevance to users outside the enterprise. In step 2504, at least a portion of the parameters included in the automated workflow template are automatically anonymized to generate an anonymized automated workflow template. 
As shown in FIG. 24, selective anonymizer 2404 may receive workflow template 2408. In an embodiment, selective anonymizer 2404 is configured to anonymize one or more parameters in workflow template 2408, purging the parameter values or replacing the parameter values with generic values to anonymize them. In an embodiment, selective anonymizer 2404 may step through the parameters of workflow template 2408 (e.g., as stored in interface definition information 318 of FIG. 3), and selectively anonymize them, determining which parameters to anonymize, and anonymizing the parameter values of those determined parameters. As shown in FIG. 24, selective anonymizer 2404 outputs anonymized workflow template 2410, which is the anonymized form of workflow template 2408. Selective anonymizer 2404 may be configured in various ways to perform such automatic anonymizing of parameters. For instance, FIG. 27 is a block diagram of a selective anonymizer 2404 configured to selectively anonymize templates in accordance with an embodiment. As shown in FIG. 27, selective anonymizer 2404 includes an audience-based anonymizer 2702, a user identity-based anonymizer 2704, an organization-based anonymizer 2706, and a context-based anonymizer 2708. Any one or more of these components may be present in selective anonymizer 2404, in embodiments. These components of selective anonymizer 2404 are described as follows. Audience-based anonymizer 2702 is configured to parse through the parameters of workflow template 2408, and selectively anonymize their parameter values based on the audience for publication. The audience for publication may be determined by audience-based anonymizer 2702 in various ways, including by user input, by an indication of a target publication library or folder, and/or by other mechanism. 
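One possible policy for stepping through a template's parameters and anonymizing them is sketched below. The set of sensitive parameter names, the audience values, and the fill-in message format are assumptions for illustration; selective anonymizer 2404 could equally purge values outright or decide based on user identity, organization, or context.

```python
# Hypothetical: which parameter names are sensitive for a public audience.
SENSITIVE_FOR_PUBLIC = {"Site URL", "List Name", "Recipient"}

def anonymize(parameters, audience):
    """Step through the template's parameters and replace values to be
    anonymized with fill-in messages (a sketch of one possible
    audience-based policy)."""
    result = {}
    for name, value in parameters.items():
        if audience == "public" and name in SENSITIVE_FOR_PUBLIC:
            result[name] = f"Insert {name} here"  # fill-in message
        else:
            result[name] = value  # e.g., kept when shared within the organization
    return result

params = {
    "Site URL": "https://contoso.sharepoint.com/teams/marketing",
    "List Name": "Assets",
    "Recipient": "fredjones@hotmail.com",
    "Message": "Item Received",  # nothing confidential; kept as-is
}
print(anonymize(params, audience="public"))
```

Publishing within the same organization (`audience="organization"` in this sketch) would leave all parameter values intact, matching the distinction drawn above between private and public sharing.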
Based on this determination, audience-based anonymizer 2702 can determine one or more of whether publication will be to the general public or within an organization only, whether publication will be to an entire organization or to a particular team within an organization, whether publication will be to a single individual only, etc. Accordingly, audience-based anonymizer 2702 is configured to anonymize parameters that would have no meaning or would otherwise need to be private outside of the present organization (if publication is to a different organization), that would have no meaning or would otherwise need to be private outside of the present team (if publication is to a different team within a same organization), that would have no meaning or would otherwise need to be private if being used by a different person, etc. User identity-based anonymizer 2704 is configured to parse through the parameters of workflow template 2408, and selectively anonymize their parameter values based on an identity of a user to whom workflow template 2408 is being published. The identity of the user for publication may be determined by user identity-based anonymizer 2704 in various ways, including by user input, by an indication that a target publication library or folder is owned by the user, and/or by other mechanism. Based on this determination, user identity-based anonymizer 2704 can anonymize parameters that would have no meaning to the identified user, should not be divulged to the user, or would otherwise need to be adjusted by the user to the user's own profile (e.g., messaging identifier such as text messaging number or email address). Organization-based anonymizer 2706 is configured to parse through the parameters of workflow template 2408, and selectively anonymize their parameter values based on an identity of an organization to which workflow template 2408 is being published. 
The identity of the organization for publication may be determined by organization-based anonymizer 2706 in various ways, including by user input, by an indication that a target publication library or folder is owned by the organization, and/or by other mechanism. Based on this determination, organization-based anonymizer 2706 can anonymize parameters that would have no meaning to the identified organization, should not be divulged to the organization, or would otherwise need to be adjusted by the organization to the organization's own profile (e.g., folder names, domain names, group messaging identifiers, etc.). Context-based anonymizer 2708 is configured to parse through the parameters of workflow template 2408, and selectively anonymize their parameter values based on a context in which workflow template 2408 was created. This context may be determined by context-based anonymizer 2708 in various ways, including by user input, by a time, a place, and/or a reason for creation of workflow template 2408, and/or by other mechanism. Based on this determination, context-based anonymizer 2708 can anonymize parameters that would have no meaning outside of the identified context. In an embodiment, all parameters are automatically analyzed by selective anonymizer 2404 to determine whether or not the parameters are to be anonymized. In another embodiment, selective anonymizer 2404 may enable a developer to select one or more of the parameters for anonymization, in addition to selective anonymizer 2404 determining whether or not the parameters are to be anonymized. For instance, FIG. 28 shows a step 2802 for enabling a developer to select parameters for anonymization, in accordance with an embodiment. In step 2802, a developer is enabled to select whether to anonymize a particular parameter in the received automated workflow template. In an embodiment, UI generator 110 (FIG. 
1) is configured to display a user interface control (e.g., a pulldown menu, a check box, a button, etc.) to enable a developer to select whether to anonymize a parameter of workflow template 2408. If the parameter is selected by the user for anonymization, selective anonymizer 2404 is configured to anonymize the parameter (e.g., clear the parameter value, replace the parameter value with a generic parameter value, enter a message to fill in the parameter value, etc.). For example, if the parameter is a messaging account identifier (e.g., an email address), selective anonymizer 2404 may be configured to erase the current messaging account identifier, or replace it with a generic messaging account identifier. With respect to the example of FIG. 26, selective anonymizer 2404 may be configured to anonymize the "Site URL" parameter because "https://contoso.sharepoint.com/teams/marketing" does not have meaning outside the current organization that developed workflow 2600. As such, selective anonymizer 2404 may delete or may replace "https://contoso.sharepoint.com/teams/marketing" with a parameter value of a generic URL or a fill-in message (e.g., "Insert URL here"). Selective anonymizer 2404 may be configured to anonymize the "List Name" parameter due to "Assets" being a confidential list of assets for the organization that developed workflow 2600. As such, selective anonymizer 2404 may delete or replace "Assets" with a parameter value of a generic list or a fill-in message (e.g., "Insert List Name here"). Selective anonymizer 2404 may be configured to anonymize the "Recipient" parameter due to "fredjones@hotmail.com" being a personal email address for a person in the organization that developed workflow 2600. As such, selective anonymizer 2404 may delete or replace "fredjones@hotmail.com" with a parameter value of a generic message identifier or a fill-in message (e.g., "Target Recipient"). 
Selective anonymizer 2404 may determine that the Message parameter has a parameter value (“Item Received”) that contains no confidential information and would have meaning outside of the organization, and thus does not need to be anonymized. Note that in some embodiments, rather than entirely deleting or replacing a parameter value, selective anonymizer 2404 may be configured to modify a parameter value in part. For example, FIG. 29 shows a step 2902 for partially anonymizing parameters, in accordance with an embodiment. In step 2902, a parameter included in the received automated workflow template is partially anonymized. In such an embodiment, selective anonymizer 2404 is configured to analyze a particular parameter, and if selective anonymizer 2404 determines the parameter is to be anonymized, selective anonymizer 2404 replaces a portion of the parameter value. For instance, selective anonymizer 2404 may modify a folder name to a different folder name or a generic folder name, may modify a messaging identifier to a different messaging identifier or generic messaging identifier, etc. For instance, with respect to FIG. 26, selective anonymizer 2404 may modify the URL “https://contoso.sharepoint.com/teams/marketing” to point to a site URL of a group within an organization with which the workflow template is to be shared, or to a generic URL site, in either case by modifying a portion of the parameter value. For instance, the URL may be modified to be directed from a “teams” group to a “managers” group as follows: “https://contoso.sharepoint.com/managers/marketing”, where “teams” was replaced with “managers” in the file path. Referring back to FIG. 25, in step 2506, the anonymized automated workflow template is published. As shown in FIG. 24, workflow template publisher 2406 receives anonymized workflow template 2410, which is the anonymized form of workflow template 2408. Workflow template publisher 2406 is configured to publish anonymized workflow template 2410. 
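The anonymization behavior described above (full replacement with a fill-in message, generic substitution for personal messaging identifiers, and the partial rewrite of step 2902) can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual implementation: the `ORG_DOMAIN` constant, the email regex, and the path-segment rewrite rule are all assumptions; the parameter names mirror the FIG. 26 example.

```python
import re

ORG_DOMAIN = "contoso.sharepoint.com"  # assumed organization-specific domain

def anonymize_parameters(params):
    """Replace parameter values that carry organization-specific or
    personal information; leave non-confidential values unchanged."""
    out = dict(params)
    for name, value in params.items():
        if ORG_DOMAIN in value:
            out[name] = "Insert URL here"            # fill-in message
        elif re.fullmatch(r"[^@\s]+@[^@\s]+", value):
            out[name] = "Target Recipient"           # generic identifier
    return out

def partially_anonymize_url(url, old_group, new_group):
    """Partial anonymization (step 2902): rewrite only one path segment."""
    return url.replace(f"/{old_group}/", f"/{new_group}/")

template = {
    "Site URL": "https://contoso.sharepoint.com/teams/marketing",
    "Recipient": "fredjones@hotmail.com",
    "Message": "Item Received",   # no confidential content; left unchanged
}
anonymized = anonymize_parameters(template)
shared_url = partially_anonymize_url(template["Site URL"], "teams", "managers")
```

Here the “Message” value survives untouched because it matches neither rule, while the site URL and recipient are replaced, and the partial rewrite retargets only the group segment of the path.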
Workflow template publisher 2406 may publish anonymized workflow template 2410 to a folder or site designated by a user or to a default folder or site. For example, as shown in FIG. 24, workflow template publisher 2406 may publish anonymized workflow template 2410 by storing anonymized workflow template 2410 in template library 1212. Workflow template publisher 2406 may publish workflow template 1216 in various forms, such as in the form of a package including at least two components (e.g., files): workflow definition information 316 and interface definition information 318 (FIG. 3). Template library 1212 may be publicly available, or available to an organization, a team within an organization, or one or more specific individuals, and may have been anonymized correspondingly. VI. Example Computer System Implementation Computing device 102, workflow designer 106, UI generator 110, workflow logic generator 112, local application 122, network-based application 124A, network-based application 124B, server 134, workflow step gallery generator 302, template gallery generator 304, saved workflow selector 306, step selector 308, step configuration UI generator 310, workflow definition generator 312, interface definition generator 314, computing device 902, workflow application 904, workflow execution engine 906, template generation logic 1202, workflow step combiner 1206, description generator 1208, usage analyzer 1210, template generation logic 1802, compatible workflow step determiner 1804, selective anonymizer 2404, workflow template publisher 2406, audience-based anonymizer 2702, user identity-based anonymizer 2704, organization-based anonymizer 2706, context-based anonymizer 2708, flowchart 200, flowchart 1000, flowchart 1300, flowchart 1400, step 1502, step 1602, flowchart 1700, flowchart 1900, flowchart 2100, flowchart 2200, flowchart 2500, step 2802, and step 2902 may be implemented in hardware, or hardware combined with software and/or firmware. 
For example, workflow designer 106, UI generator 110, workflow logic generator 112, local application 122, network-based application 124A, network-based application 124B, server 134, workflow step gallery generator 302, template gallery generator 304, saved workflow selector 306, step selector 308, step configuration UI generator 310, workflow definition generator 312, interface definition generator 314, computing device 902, workflow application 904, workflow execution engine 906, template generation logic 1202, workflow step combiner 1206, description generator 1208, usage analyzer 1210, template generation logic 1802, compatible workflow step determiner 1804, selective anonymizer 2404, workflow template publisher 2406, audience-based anonymizer 2702, user identity-based anonymizer 2704, organization-based anonymizer 2706, context-based anonymizer 2708, flowchart 200, flowchart 1000, flowchart 1300, flowchart 1400, step 1502, step 1602, flowchart 1700, flowchart 1900, flowchart 2100, flowchart 2200, flowchart 2500, step 2802, and/or step 2902 may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer readable storage medium, or may be implemented as hardware logic/electrical circuitry. 
For instance, in an embodiment, one or more, in any combination, of workflow designer 106, UI generator 110, workflow logic generator 112, local application 122, network-based application 124A, network-based application 124B, server 134, workflow step gallery generator 302, template gallery generator 304, saved workflow selector 306, step selector 308, step configuration UI generator 310, workflow definition generator 312, interface definition generator 314, computing device 902, workflow application 904, workflow execution engine 906, template generation logic 1202, workflow step combiner 1206, description generator 1208, usage analyzer 1210, template generation logic 1802, compatible workflow step determiner 1804, selective anonymizer 2404, workflow template publisher 2406, audience-based anonymizer 2702, user identity-based anonymizer 2704, organization-based anonymizer 2706, context-based anonymizer 2708, flowchart 200, flowchart 1000, flowchart 1300, flowchart 1400, step 1502, step 1602, flowchart 1700, flowchart 1900, flowchart 2100, flowchart 2200, flowchart 2500, step 2802, and/or step 2902 may be implemented together in a SoC. The SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits, and may optionally execute received program code and/or include embedded firmware to perform functions. FIG. 30 shows a block diagram of an exemplary mobile device 3002 that may implement embodiments described herein. For example, mobile device 3002 may be used to implement computing device 102 or server 134. As shown in FIG. 30, mobile device 3002 includes a variety of optional hardware and software components. Any component in mobile device 3002 can communicate with any other component, although not all connections are shown for ease of illustration. 
Mobile device 3002 can be any of a variety of computing devices (e.g., cell phone, smartphone, handheld computer, Personal Digital Assistant (PDA), etc.) and can allow wireless two-way communications with one or more mobile communications networks 3004, such as a cellular or satellite network, or with a local area or wide area network. The illustrated mobile device 3002 can include a controller or processor 3010 (e.g., signal processor, microprocessor, ASIC, or other control and processing logic circuitry) for performing such tasks as signal coding, data processing, input/output processing, power control, and/or other functions. An operating system 3012 can control the allocation and usage of the components of mobile device 3002 and provide support for one or more application programs 3014 (also referred to as “applications” or “apps”). Application programs 3014 may include common mobile computing applications (e.g., e-mail applications, calendars, contact managers, web browsers, messaging applications) and any other computing applications (e.g., word processing applications, mapping applications, media player applications). The illustrated mobile device 3002 can include memory 3020. Memory 3020 can include non-removable memory 3022 and/or removable memory 3024. Non-removable memory 3022 can include RAM, ROM, flash memory, a hard disk, or other well-known memory devices or technologies. Removable memory 3024 can include flash memory or a Subscriber Identity Module (SIM) card, which is well known in GSM communication systems, or other well-known memory devices or technologies, such as “smart cards.” Memory 3020 can be used for storing data and/or code for running operating system 3012 and applications 3014. Example data can include web pages, text, images, sound files, video data, or other data to be sent to and/or received from one or more network servers or other devices via one or more wired or wireless networks. 
Example code can include program code for workflow designer 106, UI generator 110, workflow logic generator 112, local application 122, network-based application 124A, network-based application 124B, server 134, workflow step gallery generator 302, template gallery generator 304, saved workflow selector 306, step selector 308, step configuration UI generator 310, workflow definition generator 312, interface definition generator 314, computing device 902, workflow application 904, workflow execution engine 906, template generation logic 1202, workflow step combiner 1206, description generator 1208, usage analyzer 1210, template generation logic 1802, compatible workflow step determiner 1804, selective anonymizer 2404, workflow template publisher 2406, audience-based anonymizer 2702, user identity-based anonymizer 2704, organization-based anonymizer 2706, context-based anonymizer 2708, flowchart 200, flowchart 1000, flowchart 1300, flowchart 1400, step 1502, step 1602, flowchart 1700, flowchart 1900, flowchart 2100, flowchart 2200, flowchart 2500, step 2802, and/or step 2902 (including any suitable step of flowcharts 200, 1000, 1300, 1400, 1700, 1900, 2100, 2200, 2500), and/or further embodiments described herein. Memory 3020 can be used to store a subscriber identifier, such as an International Mobile Subscriber Identity (IMSI), and an equipment identifier, such as an International Mobile Equipment Identifier (IMEI). Such identifiers can be transmitted to a network server to identify users and equipment. Mobile device 3002 can support one or more input devices 3030, such as a touch screen 3032, a microphone 3034, a camera 3036, a physical keyboard 3038 and/or a trackball 3040 and one or more output devices 3050, such as a speaker 3052 and a display 3054. Other possible output devices (not shown) can include piezoelectric or other haptic output devices. Some devices can serve more than one input/output function. 
For example, touch screen 3032 and display 3054 can be combined in a single input/output device. The input devices 3030 can include a Natural User Interface (NUI). Wireless modem(s) 3060 can be coupled to antenna(s) (not shown) and can support two-way communications between the processor 3010 and external devices, as is well understood in the art. The modem(s) 3060 are shown generically and can include a cellular modem 3066 for communicating with the mobile communication network 3004 and/or other radio-based modems (e.g., Bluetooth 3064 and/or Wi-Fi 3062). At least one of the wireless modem(s) 3060 is typically configured for communication with one or more cellular networks, such as a GSM network for data and voice communications within a single cellular network, between cellular networks, or between the mobile device and a public switched telephone network (PSTN). Mobile device 3002 can further include at least one input/output port 3080, a power supply 3082, a satellite navigation system receiver 3084, such as a Global Positioning System (GPS) receiver, an accelerometer 3086, and/or a physical connector 3090, which can be a USB port, IEEE 1394 (FireWire) port, and/or RS-232 port. The illustrated components of mobile device 3002 are not required or all-inclusive, as any components can be deleted and other components can be added as would be recognized by one skilled in the art. In an embodiment, mobile device 3002 is configured to perform any of the above-described functions of workflow designer 106. Computer program logic for performing these functions may be stored in memory 3020 and executed by processor 3010. FIG. 31 depicts an example processor-based computer system 3100 that may be used to implement various embodiments described herein. 
For example, any of computing device 102, server 134, and mobile device 3002 may be implemented in one or more computing devices similar to computing device 3100 in stationary or mobile computer embodiments, including one or more features of computing device 3000 and/or alternative features. The description of system 3100 provided herein is provided for purposes of illustration, and is not intended to be limiting. Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s). As shown in FIG. 31, system 3100 includes a processing unit 3102, a system memory 3104, and a bus 3106 that couples various system components including system memory 3104 to processing unit 3102. Processing unit 3102 may comprise one or more microprocessors or microprocessor cores. Bus 3106 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. System memory 3104 includes read only memory (ROM) 3108 and random access memory (RAM) 3110. A basic input/output system 3112 (BIOS) is stored in ROM 3108. System 3100 also has one or more of the following drives: a hard disk drive 3114 for reading from and writing to a hard disk, a magnetic disk drive 3116 for reading from or writing to a removable magnetic disk 3118, and an optical disk drive 3120 for reading from or writing to a removable optical disk 3122 such as a CD ROM, DVD ROM, BLU-RAY™ disk or other optical media. Hard disk drive 3114, magnetic disk drive 3116, and optical disk drive 3120 are connected to bus 3106 by a hard disk drive interface 3124, a magnetic disk drive interface 3126, and an optical drive interface 3128, respectively. 
The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of computer-readable memory devices and storage structures can be used to store data, such as flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like. A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include operating system 3130, one or more application programs 3132, other programs 3134, and program data 3136. Application programs 3132 or other programs 3134 may include, for example, computer program logic (e.g., computer program code or instructions) for implementing workflow designer 106, UI generator 110, workflow logic generator 112, local application 122, network-based application 124A, network-based application 124B, server 134, workflow step gallery generator 302, template gallery generator 304, saved workflow selector 306, step selector 308, step configuration UI generator 310, workflow definition generator 312, interface definition generator 314, computing device 902, workflow application 904, workflow execution engine 906, template generation logic 1202, workflow step combiner 1206, description generator 1208, usage analyzer 1210, template generation logic 1802, compatible workflow step determiner 1804, selective anonymizer 2404, workflow template publisher 2406, audience-based anonymizer 2702, user identity-based anonymizer 2704, organization-based anonymizer 2706, context-based anonymizer 2708, flowchart 200, flowchart 1000, flowchart 1300, flowchart 1400, step 1502, step 1602, flowchart 1700, flowchart 1900, flowchart 2100, flowchart 2200, flowchart 2500, step 2802, and/or step 2902 (including any suitable step of flowcharts 200, 1000, 1300, 
1400, 1700, 1900, 2100, 2200, 2500), and/or further embodiments described herein. A user may enter commands and information into system 3100 through input devices such as a keyboard 3138 and a pointing device 3140 (e.g., a mouse). Other input devices (not shown) may include a microphone, joystick, game controller, scanner, or the like. In one embodiment, a touch screen is provided in conjunction with a display 3144 to allow a user to provide user input via the application of a touch (as by a finger or stylus for example) to one or more points on the touch screen. These and other input devices are often connected to processing unit 3102 through a serial port interface 3142 that is coupled to bus 3106, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). Such interfaces may be wired or wireless interfaces. Display 3144 is connected to bus 3106 via an interface, such as a video adapter 3146. In addition to display 3144, system 3100 may include other peripheral output devices (not shown) such as speakers and printers. System 3100 is connected to a network 3148 (e.g., a local area network or wide area network such as the Internet) through a network interface 3150, a modem 3152, or other suitable means for establishing communications over the network. Modem 3152, which may be internal or external, is connected to bus 3106 via serial port interface 3142. As used herein, the terms “computer program medium,” “computer-readable medium,” and “computer-readable storage medium” are used to refer to physical hardware media such as the hard disk associated with hard disk drive 3114, removable magnetic disk 3118, removable optical disk 3122, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media (including memory 1220 of FIG. 12). 
Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media, as well as wired media. Embodiments are also directed to such communication media that are separate and non-overlapping with embodiments directed to computer-readable storage media. As noted above, computer programs and modules (including application programs 3132 and other programs 3134) may be stored on the hard disk, magnetic disk, optical disk, ROM, RAM, or other hardware storage medium. Such computer programs may also be received via network interface 3150, serial port interface 3142, or any other interface type. Such computer programs, when executed or loaded by an application, enable computing device 3100 to implement features of embodiments discussed herein. Accordingly, such computer programs represent controllers of the computing device 3100. Embodiments are also directed to computer program products comprising computer code or instructions stored on any computer-readable medium. Such computer program products include hard disk drives, optical disk drives, memory device packages, portable memory sticks, memory cards, and other types of physical storage hardware. IV. 
Example Embodiments In a first embodiment, a method in a computing device comprises: determining a plurality of workflows steps in a library of workflow steps; automatically generating one or more workflow templates, each automatically generated workflow template including a combination of at least two of the workflow steps in the library; and storing the one or more workflow templates in a library of templates. In an embodiment, the automatically generating comprises: selecting a trigger step of the workflow steps in the workflow step library; selecting at least one action step of the workflow steps in the workflow step library; and automatically combining the selected trigger step and the selected at least one action step to generate a workflow template. In an embodiment, the automatically generating further comprises: automatically generating text describing operations performed by the generated workflow template. In an embodiment, the automatically generating comprises: iterating through all combinations of trigger steps in the workflow step library with action steps in the workflow step library to generate a plurality of workflow templates. In an embodiment, the method further comprises: enabling an administrator to curate the template library to eliminate one or more workflow templates from the template library. In an embodiment, the automatically generating comprises: analyzing statistics regarding workflows created by developers to determine a workflow created by the developers at a frequency greater than a predetermined threshold, and generating a workflow template corresponding to the determined workflow; and wherein said storing comprises: storing in the template library the workflow template corresponding to the determined workflow. 
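The usage-analysis embodiment above (generating a template for any workflow that developers create at a frequency greater than a predetermined threshold) can be sketched briefly. The sample workflows and the threshold value are illustrative assumptions, not data from the patent.

```python
from collections import Counter

created_workflows = [
    ("When an email arrives", "Save attachment"),
    ("When an email arrives", "Save attachment"),
    ("When an email arrives", "Save attachment"),
    ("When an item is created", "Send an email"),
]

def frequent_workflows(workflows, threshold):
    """Workflows created at a frequency greater than the threshold."""
    counts = Counter(workflows)
    return [wf for wf, n in counts.items() if n > threshold]

popular = frequent_workflows(created_workflows, threshold=2)
```

Only the workflow created three times exceeds the threshold of two, so only it would be stored in the template library.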
In an embodiment, the method further comprises: displaying in a graphical user interface a template gallery including indications of the one or more workflow templates in the template library; and enabling developers to interact with the graphical user interface to select workflow templates from the template library for including in workflows. In another embodiment, a system comprises: one or more processors; and a memory that stores computer program logic for execution by the one or more processors, the computer program logic including: template generation logic configured to automatically generate one or more workflow templates, each automatically generated workflow template including a combination of at least two of workflow steps of a workflow step library, and to store the one or more workflow templates in a template library. In an embodiment, the template generation logic comprises: a workflow step combiner configured to select a trigger step of the workflow steps in the workflow step library, select at least one action step of the workflow steps in the workflow step library, and automatically combine the selected trigger step and the selected at least one action step to generate a workflow template. In an embodiment, the template generation logic further comprises: a description generator configured to automatically generate text describing operations performed by the generated workflow template. In an embodiment, the workflow step combiner is configured to iterate through all combinations of trigger steps in the workflow step library with action steps in the workflow step library to generate a plurality of workflow templates. In an embodiment, a user interface enables an administrator to curate the template library to eliminate one or more workflow templates from the template library. 
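The workflow step combiner's exhaustive strategy above (pairing every trigger step in the library with every action step, with a description generator producing text for each result) can be sketched as follows. The step names and the dictionary shape of a template are assumptions made for illustration.

```python
from itertools import product

trigger_steps = ["When an item is created", "When an email arrives"]
action_steps = ["Send an email", "Copy file"]

def generate_templates(triggers, actions):
    """Combine each trigger with each action into a candidate template."""
    templates = []
    for trig, act in product(triggers, actions):
        templates.append({
            "trigger": trig,
            "action": act,
            # description generator: automatically generated text
            "description": f"{trig}, then {act.lower()}",
        })
    return templates

template_library = generate_templates(trigger_steps, action_steps)
```

Two triggers and two actions yield four candidate templates; in practice an administrator could then curate this set to eliminate unhelpful combinations, as the embodiment describes.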
In an embodiment, the template generation logic comprises: a usage analyzer configured to analyze statistics regarding workflows created by developers to determine a workflow created by the developers at a frequency greater than a predetermined threshold; a workflow step combiner configured to generate a workflow template corresponding to the determined workflow, and to store in the template library the workflow template corresponding to the determined workflow. In an embodiment, the computer program logic further comprises: a template gallery generator configured to display in a graphical user interface a template gallery including indications of the one or more workflow templates in the template library, and to enable developers to interact with the graphical user interface to select workflow templates from the template library for including in workflows. In another embodiment, a method in a computing device comprises: enabling a developer to select a first workflow step; automatically determining one or more workflow steps compatible with the first workflow step; and displaying the determined one or more workflow steps in association with the first workflow step. In an embodiment, the method further comprises: enabling the developer to select a second workflow step of the displayed one or more workflow steps; and inserting the second workflow step into a workflow template interconnected with the first workflow step. In an embodiment, the automatically determining comprises: automatically determining one or more trigger steps compatible with the first workflow step in response to the first workflow step being an action step; and automatically determining one or more action steps compatible with the first workflow step in response to the first workflow step being a trigger step. 
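The compatibility rule just described, where trigger steps are suggested when the first selected step is an action step and action steps are suggested when it is a trigger step, can be sketched as below. The step catalog is an illustrative assumption.

```python
STEP_KINDS = {
    "When a file is added": "trigger",
    "When an email arrives": "trigger",
    "Send an email": "action",
    "Copy file": "action",
}

def compatible_steps(first_step):
    """Suggest steps of the opposite kind to the first selected step."""
    wanted = "trigger" if STEP_KINDS[first_step] == "action" else "action"
    return [step for step, kind in STEP_KINDS.items() if kind == wanted]
```

Selecting an action step such as “Send an email” would surface the trigger steps as compatible suggestions, and vice versa.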
In another embodiment, a system comprises: one or more processors; and a memory that stores computer program logic for execution by the one or more processors, the computer program logic including: a step selector configured to enable a developer to select a first workflow step; a compatible workflow step determiner configured to automatically determine one or more workflow steps compatible with the first workflow step; and a user interface generator configured to display the determined one or more workflow steps in association with the first workflow step. In an embodiment, the step selector is configured to enable the developer to select a second workflow step of the displayed one or more workflow steps, and to insert the second workflow step into a workflow template interconnected with the first workflow step. In an embodiment, the compatible workflow step determiner is configured to automatically determine one or more trigger steps compatible with the first workflow step in response to the first workflow step being an action step, and to automatically determine one or more action steps compatible with the first workflow step in response to the first workflow step being a trigger step. In an embodiment, a method in a computing device comprises: receiving an automated workflow template for publication that includes parameters; automatically anonymizing at least a portion of the parameters included in the automated workflow template to generate an anonymized automated workflow template; and publishing the anonymized automated workflow template. In an embodiment, the automatically anonymizing is performed based on an intended target audience for the publication of the received automated workflow template. In an embodiment, the automatically anonymizing is performed based on an identity of the user having selected the received automated workflow template for the publication. 
In an embodiment, the automatically anonymizing is performed based on an organization within which the received automated workflow template is being shared by the publication. In an embodiment, the automatically anonymizing is performed based on a context in which the received automated workflow template was built. In an embodiment, the automatically anonymizing comprises: displaying a user interface control to enable a developer to select whether to anonymize a messaging account identifier in the received automated workflow template. In an embodiment, the automatically anonymizing comprises: partially anonymizing a parameter included in the received automated workflow template. In another embodiment, a system comprises: one or more processors; and a memory that stores computer program logic for execution by the one or more processors, the computer program logic including: a selective anonymizer configured to automatically anonymize at least a portion of a set of parameters included in an automated workflow template to generate an anonymized automated workflow template; and a workflow template publisher configured to publish the anonymized automated workflow template. In an embodiment, the selective anonymizer is configured to automatically anonymize the at least a portion of a set of parameters based on an intended target audience for the publication of the received automated workflow template. In an embodiment, the selective anonymizer is configured to automatically anonymize the at least a portion of a set of parameters based on an identity of the user having selected the received automated workflow template for the publication. In an embodiment, the selective anonymizer is configured to automatically anonymize the at least a portion of a set of parameters based on an organization within which the received automated workflow template is being shared by the publication. 
In an embodiment, the selective anonymizer is configured to automatically anonymize the at least a portion of a set of parameters based on a context in which the received automated workflow template was built. In an embodiment, a UI (user interface) generator is configured to display a user interface control to enable a developer to select whether to anonymize a messaging account identifier in the received automated workflow template. In an embodiment, the selective anonymizer is configured to partially anonymize a parameter included in the received automated workflow template. In an embodiment, a system comprises: one or more processors; and a memory that stores program code configured to be executed by the one or more processors to perform operations, the operations including: receiving an automated workflow template for publication that includes parameters; automatically anonymizing at least a portion of the parameters included in the automated workflow template to generate an anonymized automated workflow template; and publishing the anonymized automated workflow template. In an embodiment, the automatically anonymizing is performed based on an intended target audience for the publication of the received automated workflow template. In an embodiment, the automatically anonymizing is performed based on an identity of the user having selected the received automated workflow template for the publication. In an embodiment, the automatically anonymizing is performed based on an organization within which the received automated workflow template is being shared by the publication. In an embodiment, the automatically anonymizing is performed based on a context in which the received automated workflow template was built. In an embodiment, the automatically anonymizing comprises: displaying a user interface control to enable a developer to select whether to anonymize a messaging account identifier in the received automated workflow template. V. 
Conclusion While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and details can be made therein without departing from the spirit and scope of the invention as defined in the appended claims. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. 15417845 microsoft technology licensing, llc USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:00AM Apr 27th, 2022 09:00AM Technology Software & Computer Services Information Technology
nasdaq:msft Microsoft Apr 26th, 2022 12:00AM Oct 9th, 2020 12:00AM https://www.uspto.gov?id=US11317241-20220426 Undesirable encounter avoidance A method disclosed herein allows users to avoid undesirable encounters by determining whether an undesirable contact of the user has opted for sharing location information, in response to determining that the undesirable contact has opted for sharing location information, collecting a location signal from the undesirable contact, forecasting anticipated locations of the undesirable contact over a period based on the location signal of the undesirable contact, forecasting anticipated locations of the user over the period, determining a potential of encounter between the user and the undesirable contact based on analysis of anticipated locations of the user and the anticipated locations of the undesirable contact over the period, generating an undesirable contact avoidance scheme based on the potential of encounter, and optionally notifying the user of the undesirable contact avoidance scheme. 11317241 1. A method of allowing a user to avoid unwanted encounters, the method comprising: determining whether an undesirable contact of the user has opted for sharing location information; in response to determining that the undesirable contact has opted for sharing the location information, collecting the location information of the undesirable contact; forecasting anticipated locations of the user over a period; forecasting anticipated locations of the undesirable contact over the period based on the location information of the undesirable contact; determining a potential undesirable encounter at an anticipated location; and displaying the anticipated location to the user without disclosing an identity of the undesirable contact. 2. The method of claim 1, the location information of the undesirable contact determined at least in part based on a location signal from the undesirable contact. 3. 
The method of claim 1, the location information of the undesirable contact determined at least in part based on social network postings of the undesirable contact. 4. The method of claim 1, further comprising determining the potential undesirable encounter based at least in part on analysis of the anticipated locations of the user and the anticipated locations of the undesirable contact over the period. 5. The method of claim 4, the analysis comprising a determination of a potential overlap of at least one anticipated location of the user and at least one anticipated location of the undesirable contact over the period. 6. The method of claim 1, further comprising sending a notification to a mobile device of the user to alter a route, based at least in part on the potential undesirable encounter. 7. The method of claim 6, the notification comprising a notification to a navigation application on the mobile device to alter a navigation route. 8. A physical article of manufacture including one or more devices encoding computer-executable instructions for executing on a computer system a computer process, the computer process comprising: receiving from a user a phone number identifying an undesirable contact; in response to determining that the undesirable contact has opted for sharing location information, collecting the location information of the undesirable contact; forecasting anticipated locations of the user over a period; forecasting anticipated locations of the undesirable contact over the period based on the location information of the undesirable contact; determining a potential undesirable encounter at an anticipated location; and displaying the anticipated location to the user without disclosing an identity of the undesirable contact. 9. The physical article of manufacture of claim 8, the location information of the undesirable contact determined at least in part based on a location signal from the undesirable contact. 10. 
The physical article of manufacture of claim 8, the location information of the undesirable contact determined at least in part based on social network postings of the undesirable contact. 11. The physical article of manufacture of claim 8, the computer process further comprising determining the potential undesirable encounter based at least in part on analysis of the anticipated locations of the user and the anticipated locations of the undesirable contact over the period. 12. The physical article of manufacture of claim 11, the analysis comprising a determination of a potential overlap of at least one anticipated location of the user and at least one anticipated location of the undesirable contact over the period. 13. The physical article of manufacture of claim 8, the computer process further comprising sending a notification to a mobile device of the user suggesting an alternative destination for the user, based at least in part on the potential undesirable encounter. 14. A system for allowing a user to avoid undesirable encounters, comprising: memory; one or more processors; one or more computer-executable instructions stored in the memory and executable by the one or more processors to: collect location information of the user; collect location information of an undesirable contact in response to determining that the undesirable contact has opted for sharing the location information; determine locations of the user and locations of the undesirable contact over a predetermined period; determine a potential undesirable encounter at an anticipated location based on anticipated locations of the user and anticipated locations of the undesirable contact over the predetermined period; and display the anticipated location to the user. 15. The system of claim 14, the location information of the undesirable contact determined at least in part based on a location signal from the undesirable contact. 16. 
The system of claim 14, the location information of the undesirable contact determined at least in part based on social network postings of the undesirable contact. 17. The system of claim 14, the potential undesirable encounter determined at least in part based on analysis of the anticipated locations of the user and the anticipated locations of the undesirable contact over the predetermined period. 18. The system of claim 17, the analysis comprising a determination of a potential overlap of at least one anticipated location of the user and at least one anticipated location of the undesirable contact over the predetermined period. 19. The system of claim 14, the one or more computer-executable instructions further executable by the one or more processors to send a notification to a mobile device of the user suggesting an alternative schedule for the user, based at least in part on the potential undesirable encounter. 20. The system of claim 19, the alternative schedule comprising an alternative time to visit the anticipated location. 20 CROSS-REFERENCE TO RELATED APPLICATIONS This application is a continuation of U.S. patent application Ser. No. 16/421,100, filed May 23, 2019, titled “UNDESIRABLE ENCOUNTER AVOIDANCE” which is a continuation of U.S. patent application Ser. No. 16/142,963, filed Sep. 26, 2018, titled “UNDESIRABLE ENCOUNTER AVOIDANCE,” now U.S. Pat. No. 10,349,217, issued Jul. 9, 2019, which is a continuation of U.S. Non-Provisional application Ser. No. 15/623,155 titled “UNDESIRABLE ENCOUNTER AVOIDANCE” and filed on Jun. 14, 2017, now U.S. Pat. No. 10,117,054, issued Oct. 30, 2018, all of which are incorporated by reference herein in their entireties. BACKGROUND In today's increasingly connected world, a person's current location is widely shared by various mobile apps and devices. As a result, a person's current location may be easily available to other users, including other users that are in the social network of the person. 
While such capability to share current location may be generally looked upon as beneficial, it also presents the opportunity for other users to run into planned or unplanned encounters with a person, even when that person is not interested in such encounters or when that person is trying to actively avoid certain other users. SUMMARY Implementations described herein disclose a method of avoiding undesirable encounters with other people by determining whether an undesirable contact of the user has opted for sharing location information, in response to determining that the undesirable contact has opted for sharing location information, collecting a location signal from the undesirable contact, forecasting anticipated locations of the undesirable contact over a period based on the location signal of the undesirable contact, forecasting anticipated locations of the user over the period, determining a potential of encounter between the user and the undesirable contact based on analysis of anticipated locations of the user and the anticipated locations of the undesirable contact over the period, generating an undesirable contact avoidance scheme based on the potential of encounter, and optionally notifying the user of the undesirable contact avoidance scheme. This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Other implementations are also described and recited herein. BRIEF DESCRIPTION OF THE DRAWINGS A further understanding of the nature and advantages of the present technology may be realized by reference to the figures, which are described in the remaining portion of the specification. In the figures, like reference numerals are used throughout several figures to refer to similar components. 
FIG. 1 illustrates an example implementation of a system that allows users to avoid undesirable encounters. FIG. 2 illustrates example operations that allow users to avoid undesirable encounters. FIG. 3 illustrates example operations to initiate a user's participation in an undesirable encounter avoidance system. FIG. 4 illustrates alternative example operations to generate various undesirable encounter avoidance schemes. FIG. 5 illustrates an example computing system that may be useful in implementing the described technology for the undesirable encounter avoidance system. FIG. 6 illustrates an example mobile device that may be useful in implementing the described technology for the undesirable encounter avoidance system. DETAILED DESCRIPTION The advances in telecommunication, computing, and wireless technologies have resulted in capabilities for users to interact with other people using a large number of applications. Specifically, the advances in mobile device technologies such as wireless networks and global positioning system (GPS) networks result in users being able to determine the location of other people such as their friends and families. For example, smartphone apps allow users to find out if a friend or a family member is at a given location of interest such as a restaurant or within a certain range of the user. For example, a user Alice using a smartphone can determine if any of her friends Bart, Chris, Doug, and Emily are within a mile from Alice's current location. Alternatively, if Alice is going to a café called Coffee Palace, she can find out if any of Bart, Chris, Doug, and Emily are at the Coffee Palace. Furthermore, Alice may also be able to find out if any of Bart, Chris, Doug, and Emily have any plans to go to the Coffee Palace in the next few hours or so. 
Such information about the current or future locations may be gathered based on the GPS system communicating with the mobile devices of these friends or based on analysis of various personal graphs of these friends, such as personal graphs generated based on their social network postings, their calendars, etc. Such capability to determine the potential location of the friends may be very useful for Alice if she is interested in seeing any of Bart, Chris, Doug, and Emily. However, there may be situations where Alice is not interested in meeting one of them. For example, Alice may have recently gone through a bad breakup with Bart, in which case Alice may be interested in avoiding any potential encounter with Bart. One way for Alice to accomplish this may be to constantly monitor any available information about Bart. However, Alice may not have access to all such information, especially when Alice and Bart may not be connected by any network subsequent to the breakup. Alternatively, Alice may simply walk away when she spots Bart. However, in such a case Bart may still notice Alice and may try to approach her, making for an awkward encounter. The system disclosed herein provides various implementations that allow Alice to avoid such undesirable encounters with Bart, or any other people. 
Specifically, the system for avoiding undesirable encounters with other people disclosed herein allows any user, such as Alice, to avoid such undesirable encounters by determining whether an undesirable contact of the user has opted for sharing location information, in response to determining that the undesirable contact has opted for sharing location information, collecting a location signal from the undesirable contact, forecasting anticipated locations of the undesirable contact over a period based on the location signal of the undesirable contact, forecasting anticipated locations of the user over the period, determining a potential of encounter between the user and the undesirable contact based on analysis of anticipated locations of the user and the anticipated locations of the undesirable contact over the period, generating an undesirable contact avoidance scheme based on the potential of encounter, and notifying the user of the undesirable contact avoidance scheme. FIG. 1 illustrates an example implementation of an undesirable encounter avoidance (UEA) system 100 that allows a user to avoid running into various selected people that the user is trying to avoid, without compromising the privacy of the user or such other people. For example, by using the UEA system 100, Alice can avoid running into Bart, or any other persons. Specifically, in one implementation, the UEA system 100 allows Alice to avoid running into Bart if Bart has consented to participate in use of the UEA system 100. In yet another implementation, the UEA system 100 allows Alice to avoid running into Bart without knowing the location of Bart. The UEA system 100 includes a UEA server 102 that is configured to communicate with various information sources, users, and other components via a network 104. For example, such network 104 may be the Internet. One such source of information may be a social network server 106 that manages one or more social networks. 
Note that while only one social network server 106 is illustrated in FIG. 1, alternative implementations may include a large number of social networks with which the UEA server 102 communicates to gather information about users. For example, such social networks may include social networks that allow users to connect with their friends and family, a social network that allows users to network with their professional contacts, a social network for posting locations, photographs, etc. In one implementation, the UEA server 102 collects information about various people that the user may be interested in avoiding. The UEA server 102 also interacts with a mapping server 108 that provides mapping information, such as city maps, locations of restaurants, and locations of various users. FIG. 1 illustrates such a map segment 150 including locations of various users and locations. A GPS satellite 110 may provide GPS data to various applications and the users. A traffic analysis API 112 may interact with the mapping server 108, the satellite 110, and various other data sources to determine traffic information in various different locations on the map segment 150 presented by the mapping server. Specifically, the map segment 150 may disclose the location of Alice, who is traveling in the vehicle 132 to the Coffee Palace 136. The UEA system 100 may determine that Bart is at the Coffee Palace 136 or is going to be at the Coffee Palace 136 in the near future, based on data collected from Bart's mobile phone 134. In such a case, the UEA system 100 may find an alternative café, such as the Donut Palace 136a, and recommend that Alice go to the Donut Palace 136a instead of the Coffee Palace 136. If Alice approves such an alternative destination, the UEA system 100 may suggest a route 138 to Alice to go to the Donut Palace 136a. 
However, if the UEA system 100 determines that another user, say Doug 140, is near the route 138 and if Alice has also requested to avoid Doug, who is using a tablet 140, the UEA system 100 suggests yet another route 138a to Alice to go to the Donut Palace 136a. The UEA system 100 provides these and other capabilities to Alice and other users using one or more of the modules on the UEA server 102 as discussed below. The UEA server 102 includes a signal collection module 114 that collects various information about users. For example, such information includes user location information collected from signal gathering components 126 on user devices, such as Bart's mobile phone 134 and Doug's tablet 140. The signal gathering components 126 may verify if the user has consented to participate in the UEA system 100 and if the user has consented to share their location information with the UEA server 102. Upon determining such participation and consent, the signal gathering components 126 may upload location information to the signal collection module 114. In one implementation, the frequency at which the signal gathering components 126 upload the location information to the signal collection module 114 may depend upon user input, the available battery power on the user device, the location of the user with respect to other people—specifically the people that the user is trying to avoid, the direction and/or the speed of the user, etc. For example, for Alice traveling in the vehicle 132, when she gets close to Bart at the Coffee Palace 136, the signal gathering component 126 may increase the frequency at which it uploads the location information. 
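The adaptive upload-frequency behavior described above can be sketched as follows. The interval constants, the distance ramp, and the battery rule are all illustrative assumptions, not the patent's actual policy; the function names are hypothetical.

```python
# Hypothetical sketch of the upload-interval policy: report location
# more often when the user is close to an avoided contact, and back
# off when battery is low. Constants are invented for illustration.

BASE_INTERVAL_S = 600   # default: upload every 10 minutes
MIN_INTERVAL_S = 30     # fastest allowed upload rate

def upload_interval(distance_km: float, battery_pct: float) -> float:
    """Return seconds to wait before the next location upload."""
    # Scale the interval down as the nearest avoided contact gets closer.
    if distance_km < 0.5:
        interval = MIN_INTERVAL_S
    elif distance_km < 5.0:
        # Linear ramp between the fast and slow rates.
        frac = (distance_km - 0.5) / 4.5
        interval = MIN_INTERVAL_S + frac * (BASE_INTERVAL_S - MIN_INTERVAL_S)
    else:
        interval = BASE_INTERVAL_S
    # Conserve power: halve the upload rate when battery is low.
    if battery_pct < 20.0:
        interval *= 2
    return interval
```

With this policy, a user far from any avoided contact uploads at the slow base rate, while a user within half a kilometer uploads at the fastest rate unless battery conservation kicks in.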
In an alternative implementation, when Alice's vehicle 132 gets close to Bart, the UEA system 100 may send a signal to the signal gathering component 126 on Bart's mobile phone 134 and as a result, the signal gathering component 126 on Bart's mobile phone 134 may also increase the frequency of uploading the location information to the signal collection module 114. The UEA server 102 may also include a location extrapolation module 116 that uses the location information collected at the signal collection module 114 to determine current and potential locations of various participants. In one implementation, the location extrapolation module 116 may use the speed at which a user is moving, the speeds at which people who may cause an undesirable encounter with the user are moving, calendars of various users, etc., to extrapolate location information. For example, if Alice has elected to avoid encounters with Bart, the location extrapolation module 116 may analyze the calendar information from Bart's calendar to see where Bart is supposed to be during various time periods. If there are multiple people that a user is trying to avoid, location information about each of these people may be extrapolated by the location extrapolation module 116. The time window for which such location information is extrapolated may be determined based on user input, speed of the user, etc. In one implementation, the location extrapolation module 116 may extrapolate location information for a given user and each of the people that the given user is trying to avoid for a period of two hours and update such a two-hour window of locations at a predetermined frequency of every ten minutes. For example, if Alice has elected to avoid Bart and Doug, the location extrapolation module 116 may extrapolate two-hour location information for each of Alice, Bart, and Doug and update such two-hour location information every ten minutes. 
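A minimal sketch of how the two-hour extrapolation window and the overlap analysis it feeds might look. The ten-minute slot granularity matches the update frequency described above, but the data shapes and function names are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch: each forecast maps ten-minute time slots to an
# anticipated place over a two-hour window; a potential encounter is
# any slot where the user and an avoided contact are expected at the
# same place. All names and data shapes are illustrative.

SLOT_MINUTES = 10
WINDOW_SLOTS = 120 // SLOT_MINUTES  # two-hour window = 12 slots

def extrapolate(current_place, calendar):
    """Fill a two-hour forecast: calendar entries win, else stay put.

    calendar: {slot_index: place} for slots with known appointments.
    """
    return {slot: calendar.get(slot, current_place)
            for slot in range(WINDOW_SLOTS)}

def potential_encounters(user_forecast, contact_forecasts):
    """Return [(slot, place)] where the user may meet an avoided contact.

    The contact's identity is deliberately omitted from the result,
    mirroring the privacy behavior described elsewhere in this system.
    """
    hits = []
    for slot, place in user_forecast.items():
        for forecast in contact_forecasts.values():
            if forecast.get(slot) == place:
                hits.append((slot, place))
                break  # one match per slot is enough to flag it
    return hits
```

For example, if Alice's calendar puts her at a café for two slots and an avoided contact's forecast puts them there for the whole window, only those two slots are flagged.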
The UEA server 102 also includes an encounter avoidance module 118 that analyzes the location information of various people for various time periods to determine if any of the people that a user is trying to avoid overlaps with the user at any location. In the above example where Alice is trying to avoid Bart and Doug, the encounter avoidance module 118 analyzes the two-hour location information for each of Alice, Bart, and Doug to determine any potential encounter between Alice and Bart or Alice and Doug. For example, if at 12:00 Alice is extrapolated to be at the Coffee Palace 136 from 12:30 to 1:30, Bart is expected to be at the Coffee Palace 136 between 12:00 and 1:30, and Doug is expected to be on the route 138 at 12:15, the encounter avoidance module 118 may suggest that Alice go to the Donut Palace 136a using route 138a, so as to avoid potential encounters with Bart and Doug. A user interaction module 120 of the UEA server 102 may work with a UEA app 128 installed on the mobile devices of users to communicate with the users. For example, such UEA app 128 may be installed at the user's request on their mobile devices, such as Alice's mobile device 132a, Bart's mobile phone 134, Doug's tablet 140, etc. The user interaction module 120 may communicate with users to initiate their participation in the UEA system 100. For example, after hearing about the availability of the UEA app 128, Alice may install it on her mobile device 132a. After the installation, using a UEA user interface, Alice may request that she wants to avoid encounters with Bart and Doug. As part of this request, Alice may provide some identifying information about Bart and Doug, such as their mobile device numbers, their social network identifications, etc. Upon receiving the identifying information about Bart and Doug, the user interaction module 120 communicates with Bart and Doug to request them to join the UEA system or to simply receive their consent. 
For example, the user interaction module 120 may communicate with Bart and Doug via a text message, via a message on their social app, an email, etc. As an example, the user interaction module 120 may send a text message to invite them to download the UEA app 128 on their mobile device. Alternatively, the user interaction module 120 may send them a text message to inform them that a user of the UEA system 100 has requested avoiding encounters with them, without giving them the identity of such user. If either of Bart or Doug gives such consent, the user interaction module 120 may send a message to Alice to inform her that at least one person for whom she has requested encounter avoidance has consented to her request. Once Alice's account is set up to have encounter avoidance with various users, the user interaction module 120 may also interact with the user, their mobile device, or one or more applications running on their mobile device to effectuate such encounter avoidance. For example, if Alice is using a navigation application on her mobile device 132a, upon detecting a potential encounter with Bart, the user interaction module 120 may interact with such navigation application to suggest an alternative destination, an alternative route, etc. A privacy management module 122 of the UEA server 102 manages the privacy of various participants of the UEA system 100. Specifically, the privacy management module 122 ensures that the location of an individual user is not exposed to other users. Thus, it ensures that the UEA system 100 is not used by any user to locate other users, thus avoiding the potential of a user stalking or following another user. In one implementation, to achieve such privacy, the privacy management module 122 requires that when a user initiates their participation in the UEA system 100, they include at least a significant number of individual users that they want to avoid within a given geographic region. 
For example, when Alice signs up to use the UEA system 100 to avoid encounters with Bart and Doug, she is required to identify a significant number of users in the geographic area where each of Bart and Doug are located so that when Alice receives a signal about a potential undesired encounter, she cannot infer the identity of the person related to the potential undesired encounter. In an alternative implementation, Alice does not receive any signal about the potential undesired encounter and she is automatically redirected to an alternative location or route. Furthermore, Alice may be able to configure the UEA system for herself such that she can decide whether she wants to receive a signal about an undesirable encounter or whether she would prefer to be automatically redirected without knowledge of the potential encounters. In an example implementation, Alice is requested to identify at least five users. In an alternative implementation, the privacy management module 122 adds a predetermined number of randomly selected users to the pool of avoided users so as to ensure that the location of any individual user is not exposed. Thus, if Alice requests that she wants to avoid Bart and Doug, the privacy management module 122 randomly selects a few additional users as well, and Alice would also be notified to avoid encounters with these additional users. For example, the additional users may be auto-generated fake users used to generate fake encounters. The privacy management module 122 also notifies Alice that she is going to receive notifications when she is likely to encounter users other than Bart and Doug as well. As a result, when Alice receives a notification about a potential encounter she cannot infer for sure with whom such potential encounter may be. In one implementation, the privacy management module 122 includes such random users such that for any geographical area, there are at least five users that Alice may be avoiding. 
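The dummy-contact padding rule described above might be sketched as follows. The five-contact floor comes from the example implementation in the text, but the function name, data shapes, and sampling strategy are assumptions made for illustration.

```python
import random

# Illustrative sketch of the privacy rule: the avoid list for a
# geographic region is padded with randomly chosen dummy contacts
# until it reaches a minimum size, so a notification cannot be
# traced back to any one real contact.

MIN_CONTACTS_PER_REGION = 5

def pad_avoid_list(real_contacts, candidate_pool, rng=None):
    """Return the avoid list padded to the minimum size with dummies."""
    rng = rng or random.Random()
    padded = list(real_contacts)
    needed = MIN_CONTACTS_PER_REGION - len(padded)
    if needed > 0:
        # Draw dummies only from users not already on the list.
        pool = [c for c in candidate_pool if c not in padded]
        padded.extend(rng.sample(pool, needed))
    return padded
```

A list that already meets the floor is returned unchanged, so the padding only activates when a short avoid list would otherwise make the real contact inferable.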
A database 124 of the UEA server 102 stores various information about the users, such as user profiles and user consent data. In one implementation, the database 124 may also store previous usage patterns of the users, such as the users' acceptance of suggestions for encounter avoidance, etc., to further refine the generation of encounter avoidance notifications. For example, if a user is more likely to reject a suggestion when it adds significant commute time, the encounter avoidance module 118 may use such a preference pattern to modify future suggestion generation. FIG. 2 illustrates example operations 200 that allow users to avoid undesirable encounters. Various operations 200 may be stored on a computer readable memory of a computing device, such as a cloud-based server, and implemented by a processor. Specifically, the operations 200 provide a user with an undesirable encounter avoidance scheme that would allow the user to avoid an undesirable encounter with one or more selected users. An operation 202 initiates a user in a UEA system (such as the UEA system 100 of FIG. 1). For example, the user may be initiated in the UEA system by installing an app on the user's mobile device, by registering the user online, etc. Furthermore, the operation 202 may also receive other information about the user, such as information from the user's calendar, the user's social graph, the user's professional graph, etc. An operation 204 receives a list of other people identified by the user as undesirable contacts with whom the user would like to avoid undesirable encounters. The user may identify such undesirable contacts by the phone numbers, the email addresses, social network identifiers, etc., of such undesirable contacts. Subsequently, an operation 206 determines whether an undesirable contact of the user has opted for sharing location information. 
For example, if the user, Alice, has identified Bart as an undesirable contact, the operation 206 determines if Bart has consented to share location information with the UEA system (such as the UEA system 100 of FIG. 1). Note that the operation 206 may determine whether Bart has consented to share location information with the UEA system at a number of different times. For example, once Alice identifies Bart as an undesirable contact, the UEA system may check to see if Bart has opted into the UEA system at some prior time. Alternatively, the UEA system may periodically check to see if Bart has opted into the UEA system at some later time, and in response to Bart's opting in, Bart is added as an active undesirable contact for Alice to effectuate avoidance of undesirable encounters between Alice and Bart. If the undesirable contact has consented to such sharing of information, an operation 208 collects location information from the undesirable contact. While the operation 208 is illustrated as a single operation, the collecting of the location information may be repeated at predetermined intervals. Such collection of the location information from the undesirable contact may be optimized based on various parameters such as user device battery life, closeness of the user with respect to the undesirable contact, etc. An operation 210 collects other information about the undesirable contact, such as calendar information, social graph, etc. Again, the operation 210 collects such other information about the undesirable contact only if the undesirable contact has consented to collection of such information. An operation 212 forecasts anticipated locations of the user over a period. For example, the operation 212 may forecast such anticipated locations over a period of two hours based on the calendar of the user, the location and direction of the user, social network discussion of the user, etc. 
An operation 214 forecasts anticipated locations of various undesirable contacts over a similar time period. In making such a forecast, the operation 214 may use various information about such undesirable contact, such as their current location, their historic locations, their social and professional graph, their calendar, etc. For example, the operation 214 may take into consideration that when a user takes a particular route, it is generally to visit a certain relative of the user. An operation 216 analyzes the anticipated locations of the undesirable contacts (determined at operation 214) in view of the anticipated locations of the user (as determined at operation 212) to determine potential encounters between the user and one or more of the undesirable contacts. If there is a potential of any undesirable encounter between the user and at least one of the undesirable contacts, an operation 218 generates an undesirable contact avoidance scheme. For example, such an undesirable contact avoidance scheme may be an alternative destination, an alternative route, a change in the schedule of the user, etc. An operation 220 notifies the user of the undesirable contact avoidance scheme. In one implementation, such notification may be direct, in the form of a text message or a message within a UEA app on the user's mobile device. In an alternative implementation, such notification may be via another application, such as a navigation application, a calendar application, etc. FIG. 3 illustrates example operations 300 to initiate a user's participation in an undesirable encounter avoidance system. Various operations 300 may be stored on a computer readable memory of a computing device, such as a cloud-based server, and implemented by a processor. An operation 302 receives a request from a user to participate in the UEA system. For example, the user may download an app on their mobile device to initiate such participation. 
At an operation 304, the UEA app may receive various parameters of the user in response to the downloading of such app. Alternatively, the UEA app may request information such as the location of the user and other personal information of the user via a UEA user interface. An operation 306 receives a list of undesirable contacts from the user. For example, the user may identify a list of phone numbers from the contact list on the mobile device, a list of members from her social network, etc., as undesirable contacts. Furthermore, the operation 306 receives various conditions related to each of the undesirable contacts. For example, Alice may select to avoid encounters with Bart when Alice and Chris are together. Alternatively, Alice may select to avoid Bart only in the evenings and on weekends. Yet alternatively, Alice may select to avoid Doug in locations that sell alcoholic beverages. An operation 310 determines if the user has provided a significant number of undesirable contacts. Specifically, the operation 310 determines if the user has provided a significant number of contacts per given geographic region. This is to ensure the privacy of the undesirable contacts. For example, if Alice gives only Bart as the contact to avoid in the Herzliya metro area, any time she receives a notification about a potential undesirable encounter, Alice would be able to infer that Bart is at the location in question. This would compromise Bart's privacy. For example, the threshold number used by operation 310 may be five contacts per metro area. The operation 310 may evaluate the number of contacts at a number of different times, such as upon Alice's initiation in the system or on a periodic basis in the future. For example, when Alice joins the UEA system and identifies a certain undesirable contact, such contact may already have opted into the UEA system and would count in meeting the threshold. 
Alternatively, if such a contact opts into the system in the future, at that point the determination of whether dummy contacts are required is reevaluated. If the number of undesirable contacts is less than the threshold, an operation 312 notifies the user that the UEA system will be using dummy contacts as undesirable contacts. An operation 314 selects such dummy contacts. For example, such dummy contacts may be randomly selected from other users with characteristics similar to one or more of the undesirable contacts identified by the user. As an example, if Alice has selected Bart as one of the undesirable contacts and Bart is likely to frequent cafés, the operation 314 selects other users, either from the contact list of Alice or otherwise, where these other users are also likely to visit cafés. An operation 320 sends notifications to the undesirable contacts, both those identified by the user and those randomly selected, to request their consent to be part of the UEA system. FIG. 4 illustrates alternative example operations 400 to generate various undesirable encounter avoidance schemes. Various operations 400 may be stored on a computer-readable memory of a computing device, such as a cloud-based server, and implemented by a processor. An operation 402 receives forecasted locations of the user and the list of undesirable contacts for a given time period. For example, forecasted locations over a period of two hours may be provided. An operation 404 receives location parameters for various locations identified as potential locations for undesirable encounters between the user and one or more undesirable contacts. Such parameters about the locations may be the size of the location, the crowd size at the location, the lighting conditions at the location, the noise levels at the location, etc. An operation 406 evaluates the locations where the user may encounter an undesirable contact in view of these parameters to determine the potential encounters and their probabilities.
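Operation 406's evaluation can be modeled as discounting a base co-location probability by location parameters such as crowd size and noise level. A minimal sketch; the discount factors and cutoffs are illustrative assumptions, not values given in the patent:

```python
def encounter_probability(base_prob, crowd_size, noise_level_db):
    """Discount the chance of an actual encounter at busy, noisy locations,
    where the user is less likely to notice or be noticed."""
    discount = 1.0
    if crowd_size > 100:      # very crowded location
        discount *= 0.5
    if noise_level_db > 70:   # high noise level
        discount *= 0.8
    return base_prob * discount

# A crowded, noisy supermarket lowers a 60% co-location chance to 24%.
print(round(encounter_probability(0.6, crowd_size=150, noise_level_db=80), 2))  # 0.24
```

The resulting probability would then feed the threshold comparison that decides whether an avoidance scheme is worth generating at all.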
For example, if it is identified that the user may run into an undesirable contact at a very crowded supermarket with a high noise level, the operation 406 may assign a lower probability that the user will actually run into the undesirable contact. An operation 408 evaluates the encounter probability against a threshold to determine whether a UEA scheme is appropriate. Such a threshold may be given by the user. In one implementation, the user may give different threshold levels for different undesirable contacts. For example, Alice may specify that she does not want to run into Bart ever, and even if there is only a one percent chance of an encounter with Bart, she would prefer having the option of a UEA scheme. On the other hand, Alice may be willing to take a chance and not change her itinerary to avoid Doug as long as the probability of such an encounter is less than 25 percent. If it is determined that a UEA scheme is desirable, an operation 410 generates various UEA schemes. For example, such schemes may be based on an alternative destination for the user, an alternative route for the user, a change in the scheduled time of a visit to the destination, etc. An operation 412 gives the user an option to provide input in the selection of the UEA scheme. For example, such input may be a selection of the time of visit, a selection of the route, etc. An operation 414 revises the UEA scheme based on the input. An operation 420 selects the appropriate UEA scheme. The selected UEA scheme may be presented to the user directly or via another application, such as a calendar or a navigation application. FIG. 5 illustrates an example system 500 that may be useful in implementing the UEA system disclosed herein. The example hardware and operating environment of FIG.
5 for implementing the described technology includes a computing device, such as a general-purpose computing device in the form of a computer 20, a mobile telephone, a personal data assistant (PDA), a tablet, a smart watch, a gaming remote, or another type of computing device. In the implementation of FIG. 5, for example, the computer 20 includes a processing unit 21, a system memory 22, and a system bus 23 that operatively couples various system components, including the system memory, to the processing unit 21. There may be only one or there may be more than one processing unit 21, such that the processor of the computer 20 includes a single central processing unit (CPU) or a plurality of processing units, commonly referred to as a parallel processing environment. The computer 20 may be a conventional computer, a distributed computer, or any other type of computer; the implementations are not so limited. In the example implementation of the computing system 500, the computer 20 also includes a UEA module 550 providing one or more functions of the UEA operations disclosed herein. The system bus 23 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a switched fabric, point-to-point connections, and a local bus using any of a variety of bus architectures. The system memory may also be referred to as simply the memory, and includes read-only memory (ROM) 24 and random access memory (RAM) 25. A basic input/output system (BIOS) 26, containing the basic routines that help to transfer information between elements within the computer 20, such as during start-up, is stored in ROM 24. The computer 20 further includes a hard disk drive 27 for reading from and writing to a hard disk (not shown), a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD-ROM, DVD, or other optical media.
The computer 20 may be used to implement a signal sampling module configured to generate sampled signals based on the reflected modulated signal 72 as illustrated in FIG. 1. In one implementation, a frequency unwrapping module including instructions to unwrap frequencies based on the sampled reflected modulated signals may be stored in the memory of the computer 20, such as the read-only memory (ROM) 24 and random access memory (RAM) 25, etc. Furthermore, instructions stored on the memory of the computer 20 may be used by a system for delivering a personalized user experience. Similarly, instructions stored on the memory of the computer 20 may also be used to implement one or more operations of a personalized user experience delivery system disclosed herein. The hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical disk drive interface 34, respectively. The drives and their associated tangible computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the computer 20. It should be appreciated by those skilled in the art that any type of tangible computer-readable media may be used in the example operating environment. A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24, or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A user may generate reminders on the personal computer 20 through input devices such as a keyboard 40 and pointing device 42. Other input devices (not shown) may include a microphone (e.g., for voice input), a camera (e.g., for a natural user interface (NUI)), a joystick, a game pad, a satellite dish, a scanner, or the like.
These and other input devices are often connected to the processing unit 21 through a serial port interface 46 that is coupled to the system bus, but they may be connected by other interfaces, such as a parallel port, a game port, or a universal serial bus (USB). A monitor 47 or other type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48. In addition to the monitor, computers typically include other peripheral output devices (not shown), such as speakers and printers. The computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as remote computer 49. These logical connections are achieved by a communication device coupled to or forming a part of the computer 20; the implementations are not limited to a particular type of communications device. The remote computer 49 may be another computer, a server, a router, a network PC, a client, a peer device, or another common network node, and typically includes many or all of the elements described above relative to the computer 20. The logical connections depicted in FIG. 5 include a local-area network (LAN) 51 and a wide-area network (WAN) 52. Such networking environments are commonplace in office networks, enterprise-wide computer networks, intranets, and the Internet, which are all types of networks. When used in a LAN networking environment, the computer 20 is connected to the local area network 51 through a network interface or adapter 53, which is one type of communications device. When used in a WAN networking environment, the computer 20 typically includes a modem 54, a network adapter, or any other type of communications device for establishing communications over the wide area network 52. The modem 54, which may be internal or external, is connected to the system bus 23 via the serial port interface 46.
In a networked environment, program engines depicted relative to the personal computer 20, or portions thereof, may be stored in the remote memory storage device. It is appreciated that the network connections shown are examples, and other means of communications devices for establishing a communications link between the computers may be used. In an example implementation, software or firmware instructions for requesting, processing, and rendering mapping data may be stored in system memory 22 and/or storage devices 29 or 31 and processed by the processing unit 21. Mapping data and/or layer prioritization scheme data may be stored in system memory 22 and/or storage devices 29 or 31 as persistent data stores. A UEA module 550 communicatively connected with the processing unit 21 and the memory 22 may enable one or more of the capabilities of the personalized UEA system disclosed herein. In contrast to tangible computer-readable storage media, intangible computer-readable communication signals may embody computer-readable instructions, data structures, program modules, or other data resident in a modulated data signal, such as a carrier wave or other signal transport mechanism. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, intangible communication signals include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. FIG. 6 illustrates another example system (labeled as a mobile device 600) that may be useful in implementing the described technology. The mobile device 600 includes a processor 602, a memory 604, a display 606 (e.g., a touchscreen display), and other interfaces 608 (e.g., a keyboard). The memory 604 generally includes both volatile memory (e.g., RAM) and non-volatile memory (e.g., flash memory).
An operating system 610, such as the Microsoft Windows® Phone operating system, resides in the memory 604 and is executed by the processor 602, although it should be understood that other operating systems may be employed. One or more application programs 612 are loaded in the memory 604 and executed on the operating system 610 by the processor 602. Examples of applications 612 include, without limitation, email programs, scheduling programs, personal information managers, Internet browsing programs, multimedia player applications, etc. A notification manager 614 is also loaded in the memory 604 and is executed by the processor 602 to present notifications to the user. For example, when a promotion is triggered and presented to the shopper, the notification manager 614 can cause the mobile device 600 to beep or vibrate (via the vibration device 618) and display the promotion on the display 606. The mobile device 600 includes a power supply 616, which is powered by one or more batteries or other power sources and which provides power to other components of the mobile device 600. The power supply 616 may also be connected to an external power source that overrides or recharges the built-in batteries or other power sources. The mobile device 600 includes one or more communication transceivers 630 to provide network connectivity (e.g., a mobile phone network, Wi-Fi®, Bluetooth®, etc.). The mobile device 600 also includes various other components, such as a positioning system 620 (e.g., a global positioning satellite transceiver), one or more accelerometers 622, one or more cameras 624, an audio interface 626 (e.g., a microphone, an audio amplifier and speaker, and/or an audio jack), and additional storage 628. Other configurations may also be employed.
In an example implementation, a mobile operating system, various applications, and other modules and services may be embodied by instructions stored in the memory 604 and/or storage devices 628 and processed by the processing unit 602. User preferences, service options, and other data may be stored in the memory 604 and/or storage devices 628 as persistent datastores. A UEA module 650 communicatively connected with the processor 602 and the memory 604 may enable one or more of the capabilities of the UEA system disclosed herein. The UEA system disclosed herein provides a solution to a technological problem by allowing users to avoid running into undesirable contacts using location information of the users and the undesirable contacts and by generating UEA schemes based on projected locations of the users and the undesirable contacts. A method disclosed herein allows users to avoid such undesirable encounters by determining whether an undesirable contact of the user has opted for sharing location information; in response to determining that the undesirable contact has opted for sharing location information, collecting a location signal from the undesirable contact; forecasting anticipated locations of the undesirable contact over a period based on the location signal of the undesirable contact; forecasting anticipated locations of the user over the period; determining a potential of encounter between the user and the undesirable contact based on analysis of the anticipated locations of the user and the anticipated locations of the undesirable contact over the period; generating an undesirable contact avoidance scheme based on the potential of encounter; and optionally notifying the user of the undesirable contact avoidance scheme.
In one implementation of the method, notifying the user of the undesirable contact avoidance scheme further includes notifying the user of the undesirable contact avoidance scheme without disclosing an identity of the undesirable contact. In another implementation, collecting the location signal of the user further includes collecting the location signal based on the GPS of a mobile device of the user, and collecting the location signal of the undesirable contact further includes collecting the location signal based on the GPS of a mobile device of the undesirable contact. Alternatively, forecasting anticipated locations of the user over the period further includes forecasting anticipated locations of the user over the period based on a location signal collected from the user's mobile device. An implementation of the method further includes determining that the user has provided at least a predetermined number of contacts within a geographic region of the undesirable contact before collecting the location signal from the undesirable contact. Another implementation of the method further includes determining that the user has not provided at least a predetermined number of contacts within a geographic region of the undesirable contact and, in response to the determination, generating a plurality of dummy contacts. Alternatively, determining the potential of encounter between the user and the undesirable contact further includes evaluating one or more location parameters of an anticipated encounter location to determine a probability of encounter at the anticipated encounter location. Yet alternatively, the undesirable encounter avoidance scheme is based on at least one of an alternative destination for the user, an alternative route for the user, and an alternative schedule for the user.
In one implementation, notifying the user of the undesirable encounter avoidance scheme further includes notifying a navigation application on a mobile device of the user to alter a navigation route. A physical article of manufacture includes one or more tangible computer-readable storage media encoding computer-executable instructions for executing on a computer system a computer process, the computer process including: in response to determining that an undesirable contact has opted for sharing location information, collecting a location signal from the undesirable contact; forecasting anticipated locations of the user over a period; forecasting anticipated locations of the undesirable contact over the period based on the location signal of the undesirable contact; determining a potential of encounter between the user and the undesirable contact based on analysis of the anticipated locations of the user and the anticipated locations of the undesirable contact over the period; and generating an undesirable encounter avoidance scheme based on the potential of encounter. In one implementation, the computer process further includes notifying a navigation app of the user's mobile device of the undesirable encounter avoidance scheme. In an alternative implementation, generating an undesirable encounter avoidance scheme based on the potential of encounter further includes generating an undesirable encounter avoidance scheme based on an alternative destination for the user. Yet alternatively, generating an undesirable encounter avoidance scheme based on the potential of encounter further includes generating an undesirable encounter avoidance scheme based on an alternative navigation route for the user.
In another implementation, generating an undesirable encounter avoidance scheme based on the potential of encounter further includes generating an undesirable encounter avoidance scheme based on an alternative schedule for the user. Yet alternatively, the computer process further includes determining that the user has not provided at least a predetermined number of contacts within a geographic region of the undesirable contact and, in response to the determination, generating a plurality of dummy contacts. In an implementation of the physical article of manufacture, notifying the user of the undesirable contact avoidance scheme further includes notifying the user of the undesirable contact avoidance scheme without disclosing an identity of the undesirable contact. Alternatively, the computer process further includes determining that the user has provided at least a predetermined number of contacts within a geographic region of the undesirable contact before collecting the location signal from the undesirable contact. A system for delivering a personalized user experience includes a memory; one or more processor units; a signal collection module stored in the memory and executable by the one or more processor units, the signal collection module configured to collect location signals from a user and a plurality of undesirable contacts; a location extrapolation module configured to determine locations of the user and the plurality of undesirable contacts over a predetermined time period; and an encounter avoidance module configured to determine a potential of encounter between the user and the undesirable contact based on analysis of the anticipated locations of the user and the anticipated locations of the undesirable contact over the period and to generate an undesirable encounter avoidance scheme based on the potential of encounter.
An implementation of the system further includes a user interaction module configured to notify a navigation app on the user's mobile device of the undesirable encounter avoidance scheme. Another implementation of the system includes a privacy management module configured to determine that the user has not provided at least a predetermined number of contacts within a geographic region of the undesirable contact and, in response to the determination, to generate a plurality of dummy contacts. The above specification, examples, and data provide a complete description of the structure and use of exemplary embodiments of the invention. Since many implementations of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended. Furthermore, structural features of the different embodiments may be combined in yet another implementation without departing from the recited claims.

Application Number: 17067136
Assignee: Microsoft Technology Licensing, LLC
Country: USA
Kind Code: B2 (Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001)
Open
Date Added: Apr 27th, 2022 09:00AM
Date Updated: Apr 27th, 2022 09:00AM
Company Name: Microsoft
Sector: Technology
Industry: Software & Computer Services, Information Technology