Amazon

- NASDAQ:AMZN
Last Updated 2024-04-16

Patent Grants Data

Patents granted to organizations.
Ticker Symbol: nasdaq:amzn
Entity Name: Amazon
Publication Date: Apr 26th, 2022 12:00AM
Filing Date: Jul 14th, 2020 12:00AM
URL: https://www.uspto.gov?id=US11316582-20220426
Invention Title: Prospective uplink grant for satellite communications
Patent Number: 11316582

Abstract

Satellites provide communication between devices such as user terminals (UTs) and ground stations that are in turn connected to other networks, such as the Internet. Latency for signals to and from the satellite can introduce delays due at least in part to propagation time. The latency adversely interacts with data transfers that result in responses from the UT. Downstream data to the UT is processed to determine if a response is expected. Header data is associated with the downstream data that is sent to the satellite. A resource scheduler onboard the satellite uses the header data to provide a prospective grant to the UT to send the expected response. The UT receives the downstream data, response data such as an acknowledgement is generated, and the response data is sent to the satellite using the prospective grant. The system substantially reduces the latency associated with responsive traffic and improves overall throughput.

Claims

1. A system comprising: a first satellite; a first user terminal (UT) that is in communication with the first satellite; and a first system to: receive first data having a destination address associated with the first UT; determine, based at least in part on the first data, second data indicative of an expectation that a response to the first data will be sent by the first UT; and send the first data and the second data to the first satellite; and the first satellite to: receive the first data and the second data at a first time; determine, based on the second data, third data that is indicative of uplink resources allocated for the first UT to send data to the first satellite at a second time; send the first data and the third data to the first UT; receive, using the allocated uplink resources, fourth data from the first UT; and send the fourth data to the first system.

2. The system of claim 1, wherein the third data is indicative of one or more of: timeslot start data indicative of a third time at which the first UT is permitted to start sending data to the first satellite; timeslot duration data indicative of a duration of a timeslot during which the first UT is permitted to send data to the first satellite; timeslot end data indicative of a fourth time at which the first UT must cease sending data to the first satellite; subcarrier data indicative of a subcarrier to be used to send data to the first satellite; or one or more communication parameters comprising one or more of: frequency, or modulation coding.

3. The system of claim 1, wherein the fourth data comprises one or more of: fifth data that is responsive to at least a portion of the first data, sixth data indicative of data enqueued for transmission by the first UT, a scheduling request for allocation of uplink resources to the first UT to send data, or data previously enqueued at the first UT for transmission.

4. The system of claim 1, wherein the second data is determined based on one or more of: a traffic class, a flow label, a source address, a destination address, routing data, fragment data, authentication header data, encapsulating security data, destination option data, or host identity protocol.
5. The system of claim 1, the first UT further to: receive the first data; receive the third data; determine, based on the third data, fifth data indicative of an interval from a third time to a fourth time during which the fourth data may be sent; send the first data and the fifth data to a first device executing a first application; receive the fourth data; and send, based on the third data, the fourth data to the first satellite.

6. The system of claim 1, the first satellite further to: send fifth data to the first UT at a third time; receive sixth data from the first UT at a fourth time; determine a first time interval based on a difference between the third time and the fourth time, wherein the first time interval is indicative of an expected response delay; and wherein the third data is determined based on the first time interval such that the uplink resources are available at a time after the first time interval.

7. The system of claim 1, wherein the uplink resources comprise an uplink timeslot having a first time interval; the first system further to: determine a second time interval, wherein the second time interval is shorter than the first time interval and is within the first time interval; and wherein the third data is indicative of a third time interval that is less than the second time interval and is within the second time interval.

8. The system of claim 1, the first satellite further to: receive fifth data and sixth data at a third time; determine, based on the sixth data, seventh data that is indicative of second uplink resources allocated for the first UT to send data to the first satellite at a fourth time; send the fifth data and the seventh data to the first UT; determine the second uplink resources associated with the seventh data were unused by the first UT; receive, after the fourth time, eighth data having a destination address associated with the first UT; and send the eighth data to the first UT without allocation of uplink resources.

9. A method comprising: receiving first data addressed to a first user terminal (UT); determining second data, based on the first data, that is indicative of whether a response to the first data is expected; determining third data, based on the second data, indicative of uplink resources allocated to the first UT to facilitate communication with a satellite at a first time; sending the first data and the third data to the first UT at a second time; and receiving at the satellite, using the uplink resources indicated by the third data, fourth data from the first UT at a third time.

10. The method of claim 9, wherein the third data is indicative of one or more of: timeslot start data indicative of a fourth time at which the first UT is permitted to start sending data to the satellite; timeslot duration data indicative of a duration of a timeslot during which the first UT is permitted to send data to the satellite; timeslot end data indicative of a fifth time at which the first UT must cease sending data to the satellite; or subcarrier data indicative of a subcarrier to send data to the satellite.

11. The method of claim 9, wherein the fourth data comprises one or more of: data responsive to at least a portion of the first data, fifth data indicative of data enqueued for transmission by the first UT, a scheduling request for allocation of uplink resources to the first UT to send data, or data previously enqueued at the first UT for transmission.
12. The method of claim 9, wherein the first data comprises a packet; and the second data is determined based on one or more of: at least a portion of a header of the packet, a payload of the packet, a timestamp associated with transmission of the first data, or previous data sent to or from the first UT.

13. The method of claim 9, further comprising: sending fifth data from the satellite to the first UT at a fourth time, wherein the fourth time is before the first time; receiving at the satellite, sixth data from the first UT at a fifth time, wherein the fifth time is before the first time; determining seventh data indicative of an interval of time between the fourth time and the fifth time; and wherein the third data is indicative of a timeslot that begins on or after a sixth time, further wherein the sixth time is based on a sum of the second time and the seventh data.

14. The method of claim 9, wherein the uplink resources comprise a total uplink capacity measured in bits; and further comprising: determining a threshold value indicative of less than the total uplink capacity; and wherein the uplink resources allocated to one or more UTs at a specified time are less than the threshold value.

15. The method of claim 9, further comprising: determining a first plurality of data indicative of uplink resources allocated to the first UT at a plurality of different times; determining, at a fourth time, a first count of the uplink resources indicated by the first plurality of data that have been used by the first UT to send data to the satellite; determining the first count is less than a threshold value; and suspending allocation of uplink resources after the fourth time.

16. The method of claim 9, wherein the first data comprises a plurality of packets, and further comprising: determining a first count of packets that have second data indicative of a response expected; determining a first total based on the first count and an estimated data size of a response to each of the first count of packets; and wherein the uplink resources indicated by the third data are determined based on the first total.

17. A system comprising: at least one memory storing computer-executable instructions; and at least one processor to execute the instructions to: receive first data associated with a first user terminal (UT); determine second data that is indicative of whether a response to at least a portion of the first data is expected; determine, based on the second data, third data that is indicative of uplink resources allocated to the first UT to facilitate communication with a satellite at a first time; send the first data and the third data to the first UT at a second time; and receive, using the uplink resources indicated by the third data, fourth data from the first UT at a third time.

18. The system of claim 17, wherein the fourth data comprises one or more of: fifth data that is responsive to at least a portion of the first data, sixth data indicative of data enqueued for transmission by the first UT, a scheduling request for allocation of uplink resources to the first UT to send data, or data previously enqueued at the first UT for transmission.
19. The system of claim 17, the at least one processor to further execute instructions to: send fifth data to the first UT at a fourth time; receive sixth data from the first UT at a fifth time; determine a first interval based on the fourth time and the fifth time; and wherein the third data is determined based on the first interval such that the uplink resources are available at a time greater than the first interval.

20. The system of claim 17, the at least one processor to further execute instructions to: determine a first threshold based on a capacity for a satellite to receive data from one or more user terminals; and wherein the uplink resources allocated by the third data is less than the first threshold.

Number of Claims: 20

BACKGROUND

Satellites in non-geosynchronous orbits (NGO) move relative to a body they are orbiting, such as the Earth. A device that is using these satellites for communication services will experience latency that includes propagation time for a signal to travel between the satellite and the device. Operational considerations including the latency, combined with patterns of sending data and any resulting responses, may adversely impact network throughput.

BRIEF DESCRIPTION OF FIGURES

The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items or features. The figures are not necessarily drawn to scale, and in some figures, the proportions or other aspects may be exaggerated to facilitate comprehension of particular aspects.

FIG. 1 illustrates a system that provides prospective grants for a user terminal to send data to a satellite, according to some implementations.
FIG. 2 is a block diagram of some systems associated with the satellite, according to some implementations.
FIG. 3 illustrates a process of providing a prospective grant to a user terminal to facilitate communication, according to some implementations.
FIG. 4 illustrates a half-duplex example of timing for a downlink and an uplink that supports use of a prospective grant to facilitate communication, according to some implementations.
FIG. 5 illustrates data associated with operation of the system, according to some implementations.

While implementations are described herein by way of example, those skilled in the art will recognize that the implementations are not limited to the examples or figures described. It should be understood that the figures and detailed description thereto are not intended to limit implementations to the particular form disclosed but, on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include", "including", and "includes" mean "including, but not limited to".

DETAILED DESCRIPTION

Non-geosynchronous orbit (NGO) satellites move relative to a body such as the Earth, Moon, Mars, and so forth.
These NGO satellites may be part of a network to provide communication services between devices, such as user terminals (UTs) located on or near the body. For example, a first UT at a first geographic location (geolocation) on the Earth may send upstream data to a first satellite that is in range of the first UT. The first satellite may send the upstream data to another device, such as a ground station, another satellite, and so forth. Likewise, downstream data destined for the first UT may be sent by the ground station to the first satellite, which then sends the downstream data to the first UT.

For ease of illustration, and not necessarily as a limitation, a satellite may be deemed "in range" of a UT when communication may take place. For example, a satellite may be in range when it is within line of sight of the UT, a distance between the satellite and the UT is less than a threshold distance, the UT is within a footprint of an antenna onboard the satellite, and so forth.

A period, or duration, of an orbit varies depending upon the altitude of the satellite with respect to the body, eccentricity of the orbit, and so forth. For example, a satellite at an altitude of 600 kilometers (km) will take approximately 96 minutes to complete a circular orbit around the Earth. Depending on a variety of factors including the curvature of the Earth, a satellite may be in range of a particular UT for only a few minutes. In comparison, a geosynchronous satellite has an orbital period that coincides with the rotation of the Earth, so a geosynchronous satellite appears to remain motionless with respect to the Earth.

Signals between the UT and the satellite are limited to travelling at the speed of light. The farther away a satellite is from the UT, the longer it takes for a signal to travel to the satellite and then travel back to Earth. For example, a hop (sending a signal to the satellite and back down to the ground, or vice versa) to a geosynchronous satellite introduces a delay of at least 240 milliseconds (ms). Additional delays due to signal processing, buffering, and so forth may also be present. Such delays, or latency, are undesirable for many types of communication. For example, a real-time voice call in which there is a 1/4 second delay can be aggravating. Likewise, time-sensitive communication activities such as remote control of devices, industrial reporting and control, gaming, and so forth may be adversely affected by these latencies. In comparison, a hop involving an NGO satellite at an altitude of 600 km only introduces about 4 ms of latency. As a result, compared to using geosynchronous satellites, the use of NGO satellites significantly reduces latencies due to signal travel times.

Using a constellation of many NGO satellites offers significant benefits. Latency is dramatically reduced, improving usefulness for communication. Shorter distances between the UT and the satellite allow for increased UT density by allowing greater frequency re-use and sharing. Power and antenna gain requirements for both the UT and the satellites are also reduced due to the shorter distances, compared to communication using geosynchronous satellites. This allows for relatively smaller and less expensive satellites to be used.
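As a rough sanity check on the figures above, the following sketch (illustrative only, not part of the patent) computes the one-hop propagation delay and the circular orbital period for the two altitudes mentioned, assuming straight-line overhead paths and ignoring processing and buffering delays:

```python
import math

C = 299_792_458          # speed of light, m/s
MU = 3.986004418e14      # Earth's standard gravitational parameter, m^3/s^2
EARTH_RADIUS_M = 6_371_000

def hop_delay_ms(altitude_m: float) -> float:
    """Propagation delay for one hop: up to the satellite and back down."""
    return 2 * altitude_m / C * 1000

def orbital_period_min(altitude_m: float) -> float:
    """Period of a circular orbit, T = 2 * pi * sqrt(a^3 / mu)."""
    a = EARTH_RADIUS_M + altitude_m
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60

print(f"LEO 600 km: hop {hop_delay_ms(600e3):.1f} ms, "
      f"period {orbital_period_min(600e3):.1f} min")   # ~4.0 ms, ~96.5 min
print(f"GEO 35,800 km: hop {hop_delay_ms(35_800e3):.1f} ms")  # ~238.8 ms
```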
However, while the latencies introduced by use of NGO satellites are less than those of geosynchronous satellites, they still may adversely impact communications. For example, downstream data may be sent from a server on the Internet to a user device connected to a user terminal. The downstream data is sent to a ground station, transmitted to a satellite, sent from the satellite to the user terminal, and then sent to the user device. The latency of a single "hop" from ground station to user terminal will have at a minimum the propagation delay of 2 ms up and 2 ms down, for a total of 4 ms. If the downstream data elicits a response, the response travels the path in reverse, introducing at least another 4 ms of delay. However, other latencies may be introduced due to resource sharing, retransmission, and so forth. As a result, the user terminal may have to wait or coordinate for an opportunity to send the upstream data comprising the response back to the ground station.

Additionally, in some situations the server may await the response before sending additional data. For example, the transmission control protocol (TCP) sends a packet and expects to receive an acknowledgement (ACK) indicating that the packet was successfully received. Without the ACK, the server may re-send the packet. As a result, the additional latencies associated with satellite operation may interact adversely with these expected patterns of sending data and receiving a response.

Described in this disclosure are techniques for substantially reducing the latency and improving the throughput of a communication system that utilizes satellites by providing prospective grants of uplink resources to user terminals, responsive to analysis of downstream data. Downstream data that is addressed to a user terminal is processed to determine if a response is expected. This processing may include packet header inspection, deep packet inspection, traffic pattern analysis, and so forth.

In one implementation, the downstream traffic is processed to determine if a particular protocol is in use. For example, a packet may be inspected to determine if the packet includes a TCP header indicative of a data transfer. If so, response header data may be generated and associated with the packet that indicates that a response packet such as an ACK is expected. In another implementation, traffic pattern analysis is performed to determine whether a response is likely to occur. For example, traffic pattern analysis may indicate that data originating from a particular network address to one or more UTs 108 has a high probability of being associated with data being sent from one or more UTs 108 to that network address.
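To make the header-inspection step concrete, here is a minimal sketch assuming a simplified packet model; the type and field names are illustrative, not the patent's. A TCP segment carrying payload is flagged as likely to elicit an ACK, while plain UDP is not:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    protocol: str            # "TCP", "UDP", ...
    tcp_payload_len: int = 0

def response_expected(pkt: Packet) -> bool:
    # A TCP segment carrying payload should elicit an ACK from the
    # receiver; plain UDP has no acknowledgement mechanism.
    return pkt.protocol == "TCP" and pkt.tcp_payload_len > 0

# Response header data associated with the downstream packet:
pkt = Packet(protocol="TCP", tcp_payload_len=1200)
response_header_data = {"response_expected": response_expected(pkt)}
```

Deep packet inspection and traffic pattern analysis would extend this decision with payload and history checks.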
The satellite receives the response header data and the associated downstream data. A resource scheduler onboard the satellite uses the response header data to determine if the downstream data is expected to produce a response. If so, the resource scheduler generates prospective grant data that prospectively allocates uplink resources for the user terminal to send the expected response. The prospective grant data provides a reservation on an uplink to the satellite that may then be used to send upstream data produced in response to the downstream data. For example, the prospective grant data may specify a particular timeslot to be used to send the upstream data.

In some implementations, the user terminal may send other data instead of or in addition to the upstream data. For example, if the upstream data is not ready for transmission at the time allocated by the prospective grant data, such as when the response is delayed or not provided, other information may be sent instead, such as a buffer status report (BSR) indicative of data queued for transmission from the user terminal to the satellite, a scheduling request (SR) requesting allocation of uplink resources to send data to the satellite, or previously enqueued data.

By providing the prospective grant data in advance, the user terminal is able to quickly send the upstream data without negotiating for or otherwise waiting for uplink resources. For example, the prospective grant data provides a specified time during which the upstream data may be sent, rather than waiting for a commonly accessed timeslot in which many user terminals are in contention to use the uplink to send data to the satellite.

The prospective grant data significantly reduces the latency associated with allocating uplink resources to allow the user terminal to send data to the satellite. This substantially improves overall performance of the communication system. For example, re-transmissions of data due to delayed ACKs being returned to a sending device are reduced or eliminated because the ACK is more quickly returned. The prospective grant data also reduces or eliminates the need for the user terminal to request or otherwise negotiate uplink resources. This increases overall efficiency of the system by reducing the amount of management data sent and received, freeing up more resources for transfer of user data.

By using the systems and techniques described in this disclosure, one or more satellites are able to provide very low latency handovers of communications between satellites and user terminals. This allows the user terminal to provide low latency communication service to an end user.
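On the UT side, the fallback behavior described above might look like the following sketch; the function and its arguments are assumptions for illustration, not identifiers from the patent:

```python
def select_uplink_payload(response_data, tx_queue):
    """Choose what to send in a prospectively granted timeslot."""
    if response_data is not None:
        return ("response", response_data)         # e.g., the expected ACK
    if tx_queue:
        return ("enqueued_data", tx_queue.pop(0))  # use the slot anyway
    # Nothing ready: use the grant to report queue state or to request
    # uplink resources for later.
    return ("buffer_status_report", {"queued_bytes": 0})
```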
Illustrative System

The ability to communicate between two or more locations that are physically separated provides substantial benefits. Communications over areas ranging from counties, states, continents, oceans, and the entire planet are used to enable a variety of activities including health and safety, logistics, remote sensing, interpersonal communication, and so forth.

Communications facilitated by electronics use electromagnetic signals, such as radio waves or light, to send information over a distance. These electromagnetic signals have a maximum speed in a vacuum of 299,792,458 meters per second, known as the "speed of light" and abbreviated "c". Electromagnetic signals may travel, or propagate, best when there is an unobstructed path between the antenna of the transmitter and the antenna of the receiver. This path may be referred to as a "line of sight". While electromagnetic signals may bend or bounce, the ideal situation for communication is often a line of sight that is unobstructed. Electromagnetic signals will also experience some spreading or dispersion. Just as ripples in a pond will spread out, a radio signal or a spot of light from a laser will spread out at progressively larger distances.

As height above ground increases, the area on the ground that is visible from that elevated point increases. For example, the higher you go in a building or on a mountain, the farther you can see. The same is true for the electromagnetic signals used to provide communication services. A relay station having a radio receiver and transmitter with their antennas placed high above the ground is able to "see" more ground and provide communication service to a larger area. There are limits to how tall a structure can be built and where. For example, it is not cost effective to build a 2000 meter tall tower in a remote area to provide communication service to a small number of users. However, if that relay station is placed on a satellite high in space, that satellite is able to "see" a large area, potentially providing communication services to many users across a large geographic area. In this situation, the cost of building and operating the satellite is distributed across many different users and becomes cost effective.

A satellite may be maintained in space for months or years by placing it into orbit around the Earth. The movement of the satellite in orbit is directly related to the height above ground. For example, the greater the altitude, the longer the period or time it takes for a satellite to complete a single orbit. A satellite in a geosynchronous orbit at an altitude of 35,800 km may appear to be fixed with respect to the ground because the period of the geosynchronous orbit matches the rotation of the Earth. In comparison, a satellite in a non-geosynchronous orbit (NGO) will appear to move with respect to the Earth. For example, a satellite in a circular orbit at 600 km will circle the Earth about every 96 minutes. To an observer on the ground, the satellite in the 600 km orbit will speed by, moving from horizon to horizon in a matter of minutes.

Building, launching, and operating a satellite is costly. Traditionally, geosynchronous satellites have been used for broadcast and communication services because they appear stationary to users on or near the Earth and they can cover very large areas. This simplifies the equipment needed by a station on or near the ground to track the satellite. However, there are limits as to how many geosynchronous satellites may be provided. For example, the number of "slots" or orbital positions that can be occupied by geosynchronous satellites is limited due to technical requirements, regulations, treaties, and so forth. It is also costly in terms of fuel to place a satellite in such a high orbit, increasing the cost of launching the satellite.

The high altitude of the geosynchronous satellite can introduce another problem when it comes to sharing electromagnetic spectrum. The geosynchronous satellite can "see" so much of the Earth that special antennas may be needed to focus radio signals to particular areas, such as a particular portion of a continent or ocean, to avoid interfering with radio services on the ground in other areas that are using the same radio frequencies. Using a geosynchronous satellite to provide communication services also introduces a significant latency or delay because of the time it takes for a signal to travel up to the satellite in geosynchronous orbit and back down to a device on or near the ground. The latency due to signal propagation time of a single hop can be at least 240 milliseconds (ms).

To alleviate these and other issues, satellites in NGOs may be used. The altitude of an NGO is high enough to provide coverage to a large portion of the ground, while remaining low enough to minimize latency due to signal propagation time. For example, the satellite at 600 km only introduces 4 ms of latency for a single hop. The lower altitude also reduces the distance the electromagnetic signal has to travel. Compared to the geosynchronous orbit, the reduced distance of the NGO reduces the dispersion of electromagnetic signals.
This allows the satellite in an NGO, as well as the device communicating with the satellite, to use a less powerful transmitter, use smaller antennas, and so forth.

The system 100 shown here comprises a plurality (or "constellation") of satellites 102(1), 102(2), . . . , 102(S), each satellite 102 being in orbit 104. Also shown are a ground station 106, user terminals (UTs) 108, and a user device 110. The constellation may comprise hundreds or thousands of satellites 102, in various orbits 104. For example, one or more of these satellites 102 may be in non-geosynchronous orbits (NGOs) in which they are in constant motion with respect to the Earth, such as low earth orbit (LEO). In this illustration, orbit 104 is depicted with an arc pointed to the right. A first satellite (SAT1) 102(1) is leading (ahead of) a second satellite (SAT2) 102(2) in the orbit 104. The satellite 102 is discussed in more detail with regard to FIG. 2.

One or more ground stations 106 are in communication with one or more satellites 102. The ground stations 106 may pass data between the satellites 102, a management system 150, networks such as the Internet, and so forth. The ground stations 106 may be emplaced on land, on vehicles, at sea, and so forth. Each ground station 106 may comprise a communication system 140. Each ground station 106 may use the communication system 140 to establish communication with one or more satellites 102, other ground stations 106, and so forth. The ground station 106 may also be connected to one or more communication networks. For example, the ground station 106 may connect to a terrestrial fiber optic communication network. The ground station 106 may act as a network gateway, passing user data or other data between the one or more communication networks and the satellites 102. Such data may be processed by the ground station 106 and communicated via the communication system 140. The communication system 140 of a ground station may include components similar to those of the communication system of a satellite 102 and may perform similar communication functionalities. For example, the communication system 140 may include one or more modems, digital signal processors, power amplifiers, antennas (including at least one antenna that implements multiple antenna elements, such as a phased array antenna), processors, memories, storage devices, communications peripherals, interface buses, and so forth.

The ground stations 106 are in communication with a management system 150. The management system 150 is also in communication, via the ground stations 106, with the satellites 102 and the UTs 108. The management system 150 coordinates operation of the satellites 102, ground stations 106, UTs 108, and other resources of the system 100. The management system 150 may comprise one or more of an orbital mechanics system 152 or a scheduling system 156.

The orbital mechanics system 152 determines orbital data 154 that is indicative of a state of a particular satellite 102 at a specified time. In one implementation, the orbital mechanics system 152 may use orbital elements that represent characteristics of the orbit 104 of the satellites 102 in the constellation to determine the orbital data 154 that predicts location, velocity, and so forth of particular satellites 102 at particular times or time intervals.
For example, the orbital mechanics system 152 may use data obtained from actual observations from tracking stations, data from the satellites 102, scheduled maneuvers, and so forth to determine the orbital elements. The orbital mechanics system 152 may also consider other data, such as space weather, collision mitigation, orbital elements of known debris, and so forth.

The scheduling system 156 schedules resources to provide communication to the UTs 108. For example, the scheduling system 156 may determine handover data that indicates when communication is to be transferred from the first satellite 102(1) to the second satellite 102(2). Continuing the example, the scheduling system 156 may also specify communication parameters such as frequency, timeslot, subcarrier, modulation coding, and so forth. During operation, the scheduling system 156 may use information such as the orbital data 154, system status data 158, user terminal data 160, and so forth.

The system status data 158 may comprise information such as which UTs 108 are currently transferring data, satellite availability, current satellites 102 in use by respective UTs 108, capacity available at particular ground stations 106, and so forth. For example, the satellite availability may comprise information indicative of satellites 102 that are available to provide communication service or those satellites 102 that are unavailable for communication service. Continuing the example, a satellite 102 may be unavailable due to malfunction, previous tasking, maneuvering, and so forth. The system status data 158 may be indicative of past status, predictions of future status, and so forth. For example, the system status data 158 may include information such as projected data traffic for a specified interval of time based on previous transfers of user data. In another example, the system status data 158 may be indicative of future status, such as a satellite 102 being unavailable to provide communication service due to scheduled maneuvering, scheduled maintenance, scheduled decommissioning, and so forth.

The user terminal data 160 may comprise information such as a location of a particular UT 108. The user terminal data 160 may also include other information such as a priority assigned to user data associated with that UT 108, information about the communication capabilities of that particular UT 108, and so forth. For example, a particular UT 108 in use by a business may be assigned a higher priority relative to a UT 108 operated in a residential setting. Over time, different versions of UTs 108 may be deployed, having different communication capabilities such as being able to operate at particular frequencies, supporting different signal encoding schemes, having different antenna configurations, and so forth.
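These scheduling inputs suggest a simple selection rule for handovers. The sketch below is one possible reading, with invented field names; the patent does not prescribe this logic:

```python
def pick_satellite(ut, candidate_sats, system_status):
    """Pick the next satellite for a UT from orbital and status data."""
    usable = [
        s for s in candidate_sats
        if system_status["available"].get(s["id"], False)   # not tasked away
        and s["elevation_deg"] >= ut["min_elevation_deg"]   # in range of UT
    ]
    # Prefer the satellite that stays in range the longest, reducing
    # how often communication must be handed over again.
    return max(usable, key=lambda s: s["time_in_range_s"], default=None)
```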
The UT 108 includes a communication system 180 to establish communication with one or more satellites 102. The communication system 180 of the UT 108 may include components similar to those of the communication system 212 of a satellite 102 and may perform similar communication functionalities. For example, the communication system 180 may include one or more modems, digital signal processors, power amplifiers, antennas (including at least one antenna that implements multiple antenna elements, such as a phased array antenna), processors, memories, storage devices, communications peripherals, interface buses, and so forth. The UT 108 passes user data between the constellation of satellites 102 and the user device 110. The user data includes data originated by the user device 110 (upstream data 112) or addressed to the user device 110 (downstream data 142). The UT 108 may be fixed or in motion. For example, the UT 108 may be used at a residence, or on a vehicle such as a car, boat, aerostat, drone, airplane, and so forth.

The UT 108 includes a tracking system 182. The tracking system 182 uses almanac data 184 to determine tracking data 186. The almanac data 184 provides information indicative of orbital elements of the orbit 104 of one or more satellites 102. For example, the almanac data 184 may comprise orbital elements such as "two-line element" data for the satellites 102 in the constellation that are broadcast or otherwise sent to the UTs 108 using the communication system 180. The tracking system 182 may use the current location of the UT 108 and the almanac data 184 to determine the tracking data 186 for the satellite 102. For example, based on the current location of the UT 108 and the predicted position and movement of the satellites 102, the tracking system 182 is able to calculate the tracking data 186. The tracking data 186 may include information indicative of azimuth, elevation, distance to the second satellite, time of flight correction, or other information at a specified time. The determination of the tracking data 186 may be ongoing. For example, the first UT 108 may determine tracking data 186 every 100 ms, every second, every five seconds, or at other intervals.
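For illustration, here is a minimal look-angle computation of the kind the tracking system 182 might perform. It assumes the satellite position has already been propagated from the almanac data 184 into local east-north-up (ENU) coordinates; a real implementation would run an SGP4-style propagator on the two-line elements:

```python
import math

def look_angles(enu_to_sat_m):
    """Azimuth, elevation, and range from ENU offsets UT -> satellite."""
    e, n, u = enu_to_sat_m
    rng = math.sqrt(e*e + n*n + u*u)
    azimuth = math.degrees(math.atan2(e, n)) % 360   # degrees from north
    elevation = math.degrees(math.asin(u / rng))
    return azimuth, elevation, rng

# Assumed geometry: satellite 600 km up, ~400 km to the north-east.
az, el, rng = look_angles((282_843, 282_843, 600_000))
print(f"az {az:.0f} deg, el {el:.0f} deg, range {rng/1000:.0f} km")
# az 45 deg, el 56 deg, range 721 km
```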
With regard to FIG. 1, an uplink is a communication link which allows data to be sent to a satellite 102 from a ground station 106, UT 108, or device other than another satellite 102. Uplinks are designated as UL1, UL2, UL3, and so forth. For example, UL1 is a first uplink from the ground station 106 to the second satellite 102(2). In comparison, a downlink is a communication link which allows data to be sent from the satellite 102 to a ground station 106, UT 108, or device other than another satellite 102. For example, DL1 is a first downlink from the second satellite 102(2) to the ground station 106. The satellites 102 may also be in communication with one another. For example, an intersatellite link 190 provides for communication between satellites 102 in the constellation.

A traffic analysis system 146 processes data that is associated with one or more of the UTs 108. In one implementation, the traffic analysis system 146 may receive downstream data 142 addressed to a particular UT 108 from a device such as a server 144 that is accessed via an external network, such as the Internet. In another implementation, the downstream data 142 may be received from one or more devices within the system 100. For example, the management system 150 may generate downstream data 142 comprising a control message that is addressed to the particular UT 108.

The traffic analysis system 146 may determine response header data 168. The response header data 168 may comprise information indicative of whether a response to the downstream data 142 is expected. The traffic analysis system 146 may use one or more techniques such as packet header inspection, deep packet inspection, traffic pattern analysis, processing with a machine learning system, and so forth.

Packet header inspection may comprise assessing the values of the packet headers to determine if a response is expected. This determination may be specific to a type of protocol associated with the downstream data 142. For example, a packet header may indicate that the packet is a transmission control protocol (TCP) packet carrying a payload and a sequence number. The receiving device is expected to receive the TCP packet and respond with an acknowledgement (ACK) packet and a sequence number. In comparison, a user datagram protocol (UDP) packet may not be expected to produce a response, as UDP does not utilize an acknowledgement process.

Deep packet inspection may be performed to analyze the downstream data 142 and determine if the downstream data 142 is expected to produce a response. Deep packet inspection may comprise assessing at least a portion of the data in the payload of packets comprising the downstream data 142. This assessment may involve comparing data present in the downstream data 142 with previously determined values. For example, the payload may begin with a characteristic set of bits that are associated with a subsequent response.

The traffic analysis system 146 may maintain data for some period of time to determine if downstream data 142 is likely to be associated with a response. For example, the traffic analysis system 146 may store at least a portion of downstream data 142 and upstream data 112 for several seconds. The traffic analysis system 146 may use information such as source address, destination address, port number, sequence numbers, timestamps, minimum response time, and so forth to determine if downstream data 142 is associated with upstream data 112. For example, at time=1203 milliseconds (relative to some reference epoch), downstream data 142 comprising a first packet with a source address of server 144(1) and a destination address of UT 108(1) is processed by the traffic analysis system 146. This first packet is sent to the UT 108(1). At time=1207 a second packet is received by the traffic analysis system 146 with a source address of UT 108(1) and a destination address of server 144(1). At time=1212 a third packet is also received by the traffic analysis system 146 with a source address of UT 108(1) and a destination address of server 144(1). Due to the propagation delays associated with signals to and from the satellite 102, the minimum response time possible for two hops may be determined to be 8 ms. In this example, the traffic analysis system 146 determines that the second packet, arriving only 4 ms after the first packet was sent, is not responsive to the first packet because there was not enough time for the first packet to be received by the UT 108(1) and a response to be returned. However, the third packet was received after the minimum response time had elapsed and so may be deemed responsive to the first packet.

In some implementations the traffic analysis system 146 may determine a time interval indicative of an expected response delay for a response to be sent. For example, the traffic analysis system 146 may compare a downlink timestamp indicative of a first time the downstream data 142 was transmitted to the UT 108 with an uplink timestamp indicative of a second time the upstream data 112 was transmitted from the UT 108 to the satellite 102. The expected response delay may be calculated as the difference between the first time and the second time. The expected response delay may be a minimum, maximum, average, moving average, and so forth of the differences from multiple packets. The expected response delay may be included in the response header data 168.
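A compact sketch of that matching rule, reusing the timestamps from the example; the function name and list-based interface are assumptions:

```python
MIN_RESPONSE_MS = 8  # two hops at ~4 ms each, from the example above

def possible_responses(downstream_sent_ms, upstream_arrivals_ms):
    """Keep only upstream packets that arrived late enough to be responses."""
    return [t for t in upstream_arrivals_ms
            if t - downstream_sent_ms >= MIN_RESPONSE_MS]

# Downstream packet sent at t=1203 ms; upstream packets at 1207 and 1212.
assert possible_responses(1203, [1207, 1212]) == [1212]
# The 1207 ms packet arrived only 4 ms later: too soon to be a response.
```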
The traffic analysis system 146 may use previous data sent to or from the UT 108 to determine the response header data 168. For example, downstream data 142 previously sent to the UT 108 may be compared to upstream data 112 to determine if a response is expected.

In one implementation, the traffic analysis system 146 may operate on individual packets, frames, or other units of data transfer. For example, all packets may be analyzed. In another example, a portion of packets may be analyzed. In another implementation, the traffic analysis system 146 may operate on groups of packets, frames, or other units of data transfer. For example, the traffic analysis system 146 may aggregate groups of packets having a common destination address for a UT 108. A portion of the packets in the group of packets may be analyzed, and if a threshold count of packets is determined to be associated with responsive upstream data 112, a prospective grant may be provided.

In one implementation the response header data 168 may be appended to the downstream data 142. In another implementation the response header data 168 may be incorporated into the downstream data 142. For example, information indicative of a response being likely may be conveyed as data within a fixed header, extension header, and so forth of a packet to the UT 108.

The satellite 102 may include a resource scheduler 172. The resource scheduler 172 may use the response header data 168 to determine prospective grant data 166 that allocates uplink resources to a UT 108. The uplink from the UT 108 to a satellite 102 may be subject to various limiting factors. For example, a single satellite 102 may provide communication services to many UTs 108. Various techniques may be used to coordinate uplink communication from the UTs 108 to the satellite 102. For example, each UT 108 may be assigned a timeslot out of a regular recurring cycle, during which the UT 108 may transmit data. In another example, the resource scheduler 172 may allocate a UT 108 some uplink resources, such as a timeslot, responsive to receiving a scheduling request. In yet another example, each UT 108 may transmit when ready, with the result that many UTs 108 may be simultaneously transmitting to the satellite 102 and overlapping with one another. If no acknowledgement from the satellite 102 is received by the UT 108, the UT 108 may retransmit at some later time.

These various techniques introduce various delays. For example, waiting for a previously specified timeslot involves a delay for that timeslot to come around. The process of sending a scheduling request, waiting for a response, and receiving a grant for uplink resources also takes time and consumes communication resources. The process of transmitting when ready in a system with a large number of UTs 108 serviced by a single satellite 102 may result in poor throughput due to contention for the uplink resources, retransmission, and so forth.

The resource scheduler 172 assesses the response header data 168 of the downstream data 142 that is queued for transmission to the UT 108. If the response header data 168 indicates that a response is expected, the resource scheduler 172 will allocate uplink resources for the expected upstream data 112. The prospective grant data 166 comprises the information indicative of this allocation and is sent to the UT 108. For example, the prospective grant data 166 may be indicative of one or more of a timeslot start, timeslot duration, or communication parameters such as subbeam data, frequency data, subcarrier data, modulation coding, and so forth.
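A minimal sketch of that allocation step, reusing the dictionary-style response header from the earlier sketch; the grant fields mirror the ones named above, but the code itself is an assumption, not the patent's implementation:

```python
def make_prospective_grant(response_header_data, ut_id,
                           next_free_slot_ms, slot_duration_ms=2):
    """Reserve an uplink timeslot if a response is expected."""
    if not response_header_data.get("response_expected"):
        return None  # no grant; the UT requests resources normally
    return {
        "ut_id": ut_id,
        "timeslot_start_ms": next_free_slot_ms,
        "timeslot_duration_ms": slot_duration_ms,
        # Communication parameters such as subcarrier, frequency, and
        # modulation coding would also be specified here.
    }
```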
In some implementations, the resource scheduler 172 may take into consideration the expected response delay while allocating the uplink resources. For example, the uplink resources may indicate a timeslot that begins on or after a time that is determined based on a sum of a time value indicative of when the downstream data 142 is to be sent and the expected response delay. By considering the interval of time of the expected response delay, the resource scheduler 172 avoids allocating uplink resources for a time before the responsive upstream data 112 is ready for transmission by the UT 108. For example, if the expected response delay is 50 ms, the resource scheduler 172 allocates uplink resources that become available only after the time the transmission is complete plus the expected response delay. Continuing the example, if the transmission is expected to complete at time 3315 ms, and the expected response delay is 50 ms, the uplink resources would be allocated for a timeslot that begins on or after 3365 ms.
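The timing rule reduces to a single addition; the worked example checks out as follows (a sketch, with names assumed):

```python
def earliest_grant_start_ms(tx_complete_ms, expected_response_delay_ms):
    """A granted timeslot must not begin before the downstream
    transmission completes plus the expected response delay."""
    return tx_complete_ms + expected_response_delay_ms

assert earliest_grant_start_ms(3315, 50) == 3365
```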
In one implementation the prospective grant data 166 may be sent with the downstream data 142 to the UT 108. In another implementation, the prospective grant data 166 may be sent separately to the UT 108. For example, the prospective grant data 166 may be transmitted to the UT 108 using a management channel while a data channel is used to send the downstream data 142.

In some implementations, at least a portion of the traffic analysis system 146 or equivalent functionality may be implemented by the resource scheduler 172 of the satellite 102. For example, protocol-based traffic analysis may be performed onboard the satellite 102 to determine that a packet in the downstream data 142 is a TCP data transfer, and prospective grant data 166 may be determined for the responsive ACK in the upstream data 112.

The UT 108 may provide additional information to another device that is using the UT 108 for communication services. For example, a user device 110 may be executing an application module 188 that is sending a series of packets to the server 144 and receiving a series of packets from the server 144. In some implementations the UT 108 and the user device 110 may be combined into a single device. In some implementations the application module 188 may execute on the UT 108.

The UT 108 may determine response window data that is indicative of a time by which the responsive upstream data 112 must be received by the UT 108 for prompt transmission to the satellite 102 to take place. For example, the UT 108 may determine a response window that indicates a start time and an end time during which upstream data 112 received from the application module 188 may be sent to the satellite 102 without substantial delay. In comparison, if the application module 188 is unable to provide the upstream data 112 that is responsive to the downstream data 142 within the response window, an additional delay may be incurred while the UT 108 requests a grant for uplink resources, waits for those uplink resources to be available, and so forth.

In the event that the upstream data 112 is not available, or additional capacity is available, the UT 108 may opportunistically use the prospective grant to send other data. For example, the UT 108 may send one or more of the upstream data 112 (if available), a scheduling request, a buffer status report, or other data enqueued for transmission.

The resource scheduler 172 may constrain use of prospective grants to avoid saturation of the uplink. For example, the resource scheduler 172 may limit the total amount of resources associated with prospective grants during a particular interval to some fraction of the total. Continuing the example, if the uplink resources allow for a total uplink capacity of 50 megabits of data transfer per timeslot, a maximum of 10 megabits may be allocated to prospective grants that are distributed using the prospective grant data 166.

In one implementation the uplink resources comprise a total uplink capacity measured in bits. A threshold value indicative of less than the total uplink capacity may be determined. For example, the threshold value may indicate a fractional value of the total uplink capacity. The uplink resources allocated to one or more UTs 108 at a specified time are less than the threshold value. For example, a sum of all bits of uplink resources allocated in prospective grants may be less than or equal to the threshold value. In another example, if the uplink resources allow for a total uplink capacity of 100 milliseconds of uplink time, 15 milliseconds of the 100 may be allocated for use by prospective grants. In one implementation, a threshold may be determined that is indicative of a percentage or other portion of the time available for uplink that is allocated for prospective grants. For example, the threshold may indicate that 15% of an uplink timeslot is to be allocated to prospective grants. The prospective grants may be issued until the threshold is reached. This is discussed with regard to FIG. 4.

The system 100 may take into account the utilization of previously assigned prospective grants to determine future prospective grants. For example, in addition to the upstream data 112, the traffic analysis system 146 may receive data indicative of whether the upstream data 112 utilized the prospective grant data 166 and corresponding uplink resources. A utilization value may be determined that indicates the percentage of prospective grants that were actually used for responsive upstream data 112. The traffic analysis system 146 may include the utilization value in the response header data 168. In some implementations the resource scheduler 172 may prioritize the allocation of uplink resources based at least in part on the utilization value. For example, packets having a high utilization value may be assigned prospective grants first. If uplink resources remain available, the resource scheduler 172 may then assign prospective grants in descending order of utilization value.
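One possible reading of these two constraints in code, with assumed units and field names; the 20% fraction mirrors the 10-of-50-megabit example:

```python
def allocate_prospective_grants(candidates, total_capacity_bits,
                                fraction=0.2):
    """Cap prospective grants at a fraction of uplink capacity and
    hand them out in descending order of past utilization."""
    budget = total_capacity_bits * fraction  # e.g., 10 Mb of a 50 Mb slot
    grants = []
    for c in sorted(candidates, key=lambda c: c["utilization"],
                    reverse=True):
        if c["size_bits"] <= budget:  # skip grants that no longer fit
            budget -= c["size_bits"]
            grants.append(c)
    return grants
```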
The satellite 102, the ground station 106, the user terminal 108, the user device 110, the management system 150, or other systems described herein may include clocks. These clocks may be synchronized to a common source. In some implementations the clock may be a global positioning system (GPS) disciplined clock or an atomic clock that provides a high accuracy and high precision time source. Output from the clock may be used to coordinate operation of the system 100.

The satellite 102, the ground station 106, the user terminal 108, the user device 110, the management system 150, or other systems described herein may include one or more computer devices or computer systems comprising one or more hardware processors, computer-readable storage media, and so forth. For example, the hardware processors may include application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), and so forth.

Embodiments may be provided as a software program or computer program including a non-transitory computer-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform the processes or methods described herein. The computer-readable storage medium may be one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, and so forth. For example, the computer-readable storage medium may include, but is not limited to, hard drives, optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. Further embodiments may also be provided as a computer program product including a transitory machine-readable signal (in compressed or uncompressed form). Examples of transitory machine-readable signals, whether modulated using a carrier or unmodulated, include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, including signals transferred by one or more networks. For example, the transitory machine-readable signal may comprise transmission of software by the Internet.

FIG. 2 is a block diagram of some systems associated with the satellite 102, according to some implementations. The satellite 102 may comprise a structural system 202, a control system 204, a power system 206, a maneuvering system 208, one or more sensors 210, and a communication system 212. A pulse per second (PPS) system 214 may be used to provide a timing reference to the systems onboard the satellite 102. One or more busses 216 may be used to transfer data between the systems onboard the satellite 102. In some implementations, redundant busses 216 may be provided. The busses 216 may include, but are not limited to, data busses such as Controller Area Network Flexible Data Rate (CAN FD), Ethernet, Serial Peripheral Interface (SPI), and so forth. In some implementations the busses 216 may carry other signals. For example, a radio frequency bus may comprise coaxial cable, waveguides, and so forth to transfer radio signals from one part of the satellite 102 to another. In other implementations, some systems may be omitted or other systems added. One or more of these systems may be communicatively coupled with one another in various combinations.

The structural system 202 comprises one or more structural elements to support operation of the satellite 102. For example, the structural system 202 may include trusses, struts, panels, and so forth. The components of other systems may be affixed to, or housed by, the structural system 202. For example, the structural system 202 may provide mechanical mounting and support for solar panels in the power system 206. The structural system 202 may also provide for thermal control to maintain components of the satellite 102 within operational temperature ranges. For example, the structural system 202 may include louvers, heat sinks, radiators, and so forth.

The control system 204 provides various services, such as operating the onboard systems, resource management, providing telemetry, processing commands, and so forth. For example, the control system 204 may direct operation of the communication system 212.
The control system 204 may include one or more flight control processors 220. The flight control processors 220 may comprise one or more processors, FPGAs, and so forth. A tracking, telemetry, and control (TTC) system 222 may include one or more processors, radios, and so forth. For example, the TTC system 222 may comprise a dedicated radio transmitter and receiver to receive commands from a ground station 106, send telemetry to the ground station 106, and so forth. A power management and distribution (PMAD) system 224 may direct operation of the power system 206, control distribution of power to the systems of the satellite 102, control battery 234 charging, and so forth.

The power system 206 provides electrical power for operation of the components onboard the satellite 102. The power system 206 may include components to generate electrical energy. For example, the power system 206 may comprise one or more photovoltaic arrays 230 comprising a plurality of photovoltaic cells, thermoelectric devices, fuel cells, and so forth. One or more PV array actuators 232 may be used to change the orientation of the photovoltaic array(s) 230 relative to the satellite 102. For example, the PV array actuator 232 may comprise a motor. The power system 206 may include components to store electrical energy. For example, the power system 206 may comprise one or more batteries 234, fuel cells, and so forth.

The maneuvering system 208 maintains the satellite 102 in one or more of a specified orientation or orbit 104. For example, the maneuvering system 208 may stabilize the satellite 102 with respect to one or more axes. In another example, the maneuvering system 208 may move the satellite 102 to a specified orbit 104. The maneuvering system 208 may include one or more of reaction wheel(s) 240, thrusters 242, magnetic torque rods 244, solar sails, drag devices, and so forth. The thrusters 242 may include, but are not limited to, cold gas thrusters, hypergolic thrusters, solid-fuel thrusters, ion thrusters, arcjet thrusters, electrothermal thrusters, and so forth. During operation, the thrusters 242 may expend propellent. For example, an electrothermal thruster may use water as propellent, using electrical power obtained from the power system 206 to expel the water and produce thrust. During operation, the maneuvering system 208 may use data obtained from one or more of the sensors 210.

The satellite 102 includes one or more sensors 210. The sensors 210 may include one or more engineering cameras 250. For example, an engineering camera 250 may be mounted on the satellite 102 to provide images of at least a portion of the photovoltaic array 230. Accelerometers 252 provide information about acceleration of the satellite 102 along one or more axes. Gyroscopes 254 provide information about rotation of the satellite 102 with respect to one or more axes. The sensors 210 may include a global navigation satellite system (GNSS) 256 receiver, such as a Global Positioning System (GPS) receiver, to provide information about the position of the satellite 102 relative to Earth. In some implementations the GNSS 256 may also provide information indicative of velocity, orientation, and so forth. One or more star trackers 258 may be used to determine an orientation of the satellite 102. A coarse sun sensor 260 may be used to detect the sun, provide information on the relative position of the sun with respect to the satellite 102, and so forth. The satellite 102 may include other sensors 210 as well.
For example, the satellite 102 may include a horizon detector, radar, lidar, and so forth. The communication system 212 provides communication with one or more other devices, such as other satellites 102, ground stations 106, user terminals 108, and so forth. The communication system 212 may include the resource scheduler 172, one or more modems 276, digital signal processors, power amplifiers, antennas (including at least one antenna that implements multiple antenna elements, such as a phased array antenna) 282, processors, memories, storage devices, communications peripherals, interface buses, and so forth. Such components support communications with other satellites 102, ground stations 106, user terminals 108, and so forth using radio frequencies within a desired frequency spectrum. The communications may involve multiplexing, encoding, and compressing data to be transmitted, modulating the data to a desired radio frequency, and amplifying it for transmission. The communications may also involve demodulating received signals and performing any necessary de-multiplexing, decoding, decompressing, error correction, and formatting of the signals. Data decoded by the communication system 212 may be output to other systems, such as to the control system 204, for further processing. Output from a system, such as the control system 204, may be provided to the communication system 212 for transmission. The communication system 212 may include hardware to support the intersatellite link 190. For example, an intersatellite link FPGA 270 may be used to modulate data that is sent and received by an ISL transceiver 272 to send data between satellites 102. The ISL transceiver 272 may operate using radio frequencies, optical frequencies, and so forth. A communication FPGA 274 may be used to facilitate communication between the satellite 102 and the ground stations 106, UTs 108, and so forth. For example, the communication FPGA 274 may direct operation of a modem 276 to modulate signals sent using a downlink transmitter 278 and demodulate signals received using an uplink receiver 280. The satellite 102 may include one or more antennas 282. For example, one or more parabolic antennas may be used to provide communication between the satellite 102 and one or more ground stations 106. In another example, a phased array antenna may be used to provide communication between the satellite 102 and the UTs 108. FIG. 3 illustrates a process of providing prospective grant data 166 to a user terminal 108 to facilitate communication, according to some implementations. In this illustration, time 302 generally increases from top to bottom of the page. For example, an operation depicted at the top of the page may occur before an operation depicted at the bottom of the page. In this illustration, the management system 150 and traffic analysis system 146, the satellite 102, the UT 108, and a user device 110 are shown. The ground station(s) 106 and other elements are omitted from this illustration for clarity, and not as a limitation. While the system 100 is operating, the traffic analysis system 146 is processing at least a portion of the downstream data 142 to determine the response header data 168. At 304 the downstream data 142 that has a destination address associated with the UT 108 is received by the traffic analysis system 146. At 306 the traffic analysis system 146 determines the response header data 168 that is indicative of the downstream data 142 being associated with an expected response.
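As a concrete illustration of the determination at 306, the following Python sketch flags downstream packets that are likely to elicit a response. The field names, the dataclass, and the TCP-based heuristic are assumptions made for illustration only; the disclosure does not prescribe this particular method.

```python
from dataclasses import dataclass

@dataclass
class ResponseHeaderData:
    """Hypothetical per-packet annotation; field names are illustrative."""
    response_expected: bool         # whether a responsive packet is anticipated
    expected_response_size: int     # estimated response size, in bits
    expected_response_delay: float  # estimated delay before the response, seconds

def analyze_downstream_packet(packet: dict) -> ResponseHeaderData:
    # Assumption: a TCP segment carrying payload usually elicits an ACK, so
    # flag it as "response expected". A real system could also use traffic
    # class, flow label, or learned per-flow statistics.
    if packet.get("protocol") == "tcp" and packet.get("payload_bits", 0) > 0:
        return ResponseHeaderData(True, expected_response_size=2_000,
                                  expected_response_delay=0.050)
    return ResponseHeaderData(False, 0, 0.0)

print(analyze_downstream_packet({"protocol": "tcp", "payload_bits": 12_000}))
```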
At 308 the response header data 168 and the downstream data 142 are sent to the satellite 102. For example, the ground station 106 may use a first uplink to send the response header data 168 and the downstream data 142 to the satellite 102. At 310 the satellite 102 determines the prospective grant data 166. For example, the resource scheduler 172 executing onboard the satellite 102 processes the response header data 168 and determines that one or more packets in the downstream data 142 are designated as expected to have responsive upstream data 112 sent. The resource scheduler 172 allocates uplink resources that are determined to be available and generates prospective grant data 166 indicative of those uplink resources. For example, if the downstream data 142 comprises 271 packets for the UT 108 that are all designated as likely to have responses, the prospective grant data 166 may provide sufficient allocation of uplink resources for corresponding responsive upstream data 112 to be sent. Continuing the example, if the responsive packets are estimated at being 2 kilobits in length, the prospective grant data 166 may provide uplink resources for at least 271*2 or 542 kilobits, plus overhead. The resource scheduler 172 may also consider the age of the downstream data 142 when determining the prospective grant data 166 and sending the downstream data 142 to the UT 108. For example, the downstream data 142 may be sorted in descending order by greatest time since sent to least. The oldest packets of downstream data 142 may be sent to the UT 108 first, and prospective grant data 166 may also provide for a similar priority of response, to allow the responsive upstream data 112 associated with those oldest packets to be prioritized during use of the prospective grant. The resource scheduler 172 may also take into consideration the expected response delay. For example, the response header data 168 may indicate an expected response delay of 50 ms. Because the responsive upstream data 112 would not be available immediately, instead taking 50 ms, the prospective grant data 166 may be associated with a time that is at least 50 ms after the expected receipt of the downstream data 142 by the UT 108. In some implementations the prospective grant data 166 may be determined by a device other than the satellite 102. For example, the scheduling system 156 may determine the prospective grant data 166. At 312 the satellite 102 sends the prospective grant data 166 to the UT 108. In some implementations the prospective grant data 166 and the downstream data 142 may be sent as part of the same transmission, or via different channels. For example, the prospective grant data 166 may be transmitted using a management channel while the downstream data 142 is sent using a data channel. At 314 the UT 108 receives the prospective grant data 166 and the downstream data 142. At 316 the UT 108 may determine response window data 318 and send it to the user device 110. The response window data 318 comprises information indicative of a time interval or window within which the responsive upstream data 112 must be received by the UT 108 to allow for prompt transmission to the satellite 102 to take place using the prospective grant. For example, the UT 108 may determine a response window that indicates a start time and an end time. The response window indicated by the response window data 318 may be determined based on one or more of the prospective grant data 166, data indicative of operation of the UT 108, and so forth. 
For example, the prospective grant data 166 may indicate a particular timeslot that corresponds to a particular interval of time relative to a clock for the system 100. Continuing the example, data indicative of the operation of the UT 108 may comprise an internal delay value that accounts for the time the UT 108 takes to process upstream data 112 and prepare that data for transmission. In one implementation a start time of the response window may be determined by subtracting the internal delay value from the timeslot start time. The end time of the response window data 318 may be determined by subtracting from the timeslot end time a sum of the internal delay value and an expected time value to account for transmission of the upstream data 112. In other implementations, other determinations may be used to determine the response window data 318. The user device 110, or an application module 188 executing thereon, may receive the response window data 318. At 320 the upstream data 112 is determined. For example, the application module 188 receives the downstream data 142 and generates responsive upstream data 112. In some implementations, the determination of the upstream data 112 at 320 may be based at least in part on the response window data 318. For example, the application module 188 may use the response window data 318 to prioritize the determination of the upstream data 112, so the upstream data 112 is available in time to use the uplink resources allocated in the prospective grant data 166. At 322 the upstream data 112 is sent from the user device 110 to the UT 108. At 324 the UT 108 receives the upstream data 112 and sends the upstream data 112 using the uplink resources indicated in the prospective grant data 166. For example, the prospective grant data 166 may allocate a particular portion of a prospective grant access interval 326, during which UTs 108 issued prospective grants may transmit their data to the satellite 102. The satellite 102 receives the upstream data 112 and transmits the upstream data 112 to the ground station 106. The ground station 106 may provide at least a portion of the upstream data 112 to the traffic analysis system 146. For example, header data associated with the upstream data 112 may be provided to the traffic analysis system 146 to assist the traffic analysis system 146 in determining the utilization value. Because the UT 108 had received the prospective grant data 166 and had been allocated uplink resources, no additional delay is incurred by the system 100. For example, the UT 108 did not need to wait for a regularly scheduled timeslot, contend for access, send a request to the satellite 102 for uplink resources for the response, and so forth. As a result, the overall latency is substantially reduced. Additional interchanges between the satellite 102 and the UT 108 to request allocation of uplink resources are eliminated or substantially reduced, improving efficiency and throughput of the system. Retransmissions due to delays in sending responsive ACKs are also substantially reduced, improving overall throughput by avoiding transmission of duplicative data. FIG. 4 illustrates at 400 a half-duplex example of timing for a downlink and an uplink that supports use of a prospective grant to facilitate communication, according to some implementations. In this illustration, time 402 generally increases from top to bottom of the page. In this half-duplex illustration, a downlink timeslot 404 and an uplink timeslot 406 are depicted.
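Before walking through the timeslots of FIG. 4, the grant-sizing arithmetic from the earlier example (271 packets, each expected to draw a roughly 2 kilobit response) can be sketched in a few lines. The dataclass shape and the flat overhead fraction are assumptions for illustration, not the resource scheduler 172's specified behavior.

```python
from dataclasses import dataclass

@dataclass
class Annotation:                 # illustrative stand-in for response header data
    response_expected: bool
    expected_response_size: int   # bits

def size_prospective_grant(annotations, overhead_fraction=0.10):
    """Estimate uplink bits to allocate; the overhead fraction is assumed."""
    bits = sum(a.expected_response_size for a in annotations
               if a.response_expected)
    return int(bits * (1 + overhead_fraction))

# Worked example from the text: 271 packets, each expected to draw a
# 2 kilobit response, need at least 271 * 2_000 = 542_000 bits
# (542 kilobits), plus overhead.
print(size_prospective_grant([Annotation(True, 2_000)] * 271))  # 596200
```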
The downlink timeslot 404 may be subdivided into a first interval 410 and a second interval 412. During the first interval 410, data that is expected to produce an expected response is sent from the satellite 102 to the UT 108. For example, the downstream data 142 that the response header data 168 indicates is likely to be associated with a response may be enqueued for transmission before data that is not. By sending the downstream data 142 expected to produce a response first, more time is available for the UT 108 or the user device 110 to determine the upstream data 112. During the second interval 412 data that is unknown or unlikely to produce an expected response is sent from the satellite 102 to the UT 108. The uplink timeslot 406 comprises a time interval that may be subdivided into multiple intervals. The prospective grant access interval 326 may be provided first. During the prospective grant access interval 326, the UT 108 that has been provided with prospective grant data 166 may use the uplink resources that have been allocated to communicate with the satellite 102 and send data such as the upstream data 112. In some implementations the prospective grant access interval 326 may be provided at the end of the uplink timeslot 406. In another implementation the uplink timeslot 406 may include a plurality of prospective grant access intervals 326, such as one at the beginning, one at the end, and so forth. The use of uplink resources for prospective grants may be limited to avoid saturation of the uplink or other adverse effects. In some implementations, a fraction of total uplink resources may be reserved for allocation and use by prospective grants. The uplink resources may be specified with respect to one or more of data transferred, time, and so forth. For example, the total uplink resources may be designated in terms of available data transfer, such as 50 megabits during an uplink session. Some portion of this data may be reserved for allocation to prospective grants and subsequent use. Continuing the example, 20% of the data permitted to be transferred during an uplink session, or 10 megabits, may be allocated for use by prospective grants. In another example, the uplink resources may be designated in terms of a portion of available time for data transfer, such as a time interval of the uplink timeslot 406. In this example, a fraction of the total uplink timeslot 406 is allocated as the prospective grant access interval 326. For example, if the uplink resources allow for a total uplink capacity of 100 milliseconds of uplink time as indicated by the uplink timeslot 406, 15 milliseconds of the 100 may be allocated for use by prospective grants. A threshold may specify the portion of total uplink resources that are available for use by the prospective grants. For example, the threshold may indicate a quantity of data, interval of time, and so forth. In the event the threshold has been reached, the determination of prospective grant data 166 may be temporarily suspended. For example, the resource scheduler 172, having allocated all of the available uplink time to prospective grants, may suspend resource scheduling until the next uplink timeslot 406. In some implementations the resource scheduler 172 may oversubscribe the uplink resources. For example, if the uplink resources associated with an uplink are limited to 10 megabits, the resource scheduler 172 may allocate prospective grants that total 11 megabits. 
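The reservation, threshold, and oversubscription behavior described here can be condensed into a short sketch. Everything in it is illustrative: the class name, the 20% reservation, and the 10% oversubscription factor are assumptions drawn from the examples in the text, not a specified implementation.

```python
class ProspectiveGrantBudget:
    """Track the share of uplink resources reserved for prospective grants."""

    def __init__(self, session_capacity_bits: int,
                 reserved_fraction: float = 0.20,
                 oversubscription: float = 0.10):
        # e.g., 20% of a 50 megabit session = 10 megabits, inflated by an
        # assumed 10% oversubscription factor to 11 megabits.
        reserve = session_capacity_bits * reserved_fraction
        self.budget = int(reserve * (1 + oversubscription))
        self.allocated = 0

    def try_allocate(self, bits: int) -> bool:
        """Allocate if the (oversubscribed) budget allows; otherwise signal
        that prospective grants are suspended until the next timeslot."""
        if self.allocated + bits > self.budget:
            return False
        self.allocated += bits
        return True

    def reset(self) -> None:
        """Called at the start of each new uplink timeslot."""
        self.allocated = 0

budget = ProspectiveGrantBudget(50_000_000)   # 50 megabit uplink session
assert budget.try_allocate(11_000_000)        # within the oversubscribed 11 Mb
assert not budget.try_allocate(1)             # suspended until reset()
```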
The oversubscription may occur in situations where some of the allocated uplink resources associated with the prospective grants historically have been unused. For example, if 5% of uplink resources associated with prospective grants are historically not used, the resource scheduler 172 may oversubscribe to maintain maximum utilization of the uplink resources. The uplink timeslot 406 may include a contention access interval 422. During the contention access interval 422, the UTs 108 serviced by the satellite 102 may contend with one another for access to the satellite 102. For example, a UT 108 may transmit a scheduling request for upstream data 112 during the contention access interval 422. In some implementations, a portion of the contention access interval 422 may be designated as a priority contention access interval 424. In some implementations, the prospective grant data 166 may provide access to the priority contention access interval 424. During the priority contention access interval 424, a relatively small subset of the UTs 108 serviced by the satellite 102 are in contention with one another. For example, during the contention access interval 422 any one of thousands of UTs 108 may attempt to communicate with the satellite 102. Continuing the example, during the priority contention access interval 424, the prospective grant data 166 may have provided permission for a few hundred UTs 108 to contend for communication with the satellite 102. A response window 430 is also depicted. As described above, the response window 430 may specify a time interval within which the responsive upstream data 112 must be received by the UT 108 to allow for transmission to the satellite 102 to take place using the prospective grant. In this illustration, the response window 430 begins before the uplink timeslot 406 and ends before the contention access interval 422. This offset in time may be used to accommodate latencies due to operation of the UT 108, such as the internal delay value. If the UT 108 receives the upstream data 112 during the response window 430, the UT 108 would be expected to be able to send the upstream data 112 during the prospective grant access interval 326. In the event that the upstream data 112 is unavailable within the response window 430, the UT 108 will still send the upstream data 112. For example, the UT 108 may attempt to send the upstream data 112 during the contention access interval 422. If the upstream data 112 responsive to the downstream data 142 is unavailable within the response window 430, other data queued for transmission may be sent. For example, a buffer status report, scheduling request, previously undelivered upstream data 112 that is oldest, and so forth may be sent instead. FIG. 5 illustrates data associated with operation of the system 100, according to some implementations. The data may include a parameter 502 and an associated value 504. The downstream data 142 may comprise one or more headers. For example, packets compliant with Internet Protocol (IP) may include a packet fixed header 506, a packet extension header 508, a packet payload 510, and so forth. For example, the packet fixed header 506 may comprise version data, traffic class data, flow label data, source address, destination address, and so forth. In another example, the packet extension header 508 may comprise routing data, fragmentation data, authentication data, encapsulating security data, destination option data, host identity protocol data, and so forth. 
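Returning briefly to the response window: the start/end computation described earlier (start time = timeslot start minus the internal delay; end time = timeslot end minus the sum of the internal delay and the expected transmission time) can be written out as below. The numeric values are illustrative assumptions.

```python
def response_window(timeslot_start: float, timeslot_end: float,
                    internal_delay: float, tx_time: float) -> tuple:
    """Compute the response window per the scheme described above.

    All values are in seconds; `internal_delay` (UT processing time) and
    `tx_time` (expected transmission time) are illustrative inputs.
    """
    window_start = timeslot_start - internal_delay
    window_end = timeslot_end - (internal_delay + tx_time)
    return window_start, window_end

# A timeslot spanning t = 1.000 s to 1.010 s, with 2 ms of internal delay
# and 3 ms of transmission time, yields a window of (0.998, 1.005):
print(response_window(1.000, 1.010, 0.002, 0.003))
```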
One or more of the packet fixed header 506 or the packet extension header 508 may be implemented consistent with the Internet Protocol Version 6 (IPv6) specification promulgated by the Internet Engineering Task Force (IETF). (See Deering, S., “Internet Protocol, Version 6 (IPv6) Specification”, RFC 8200, DOI 10.17487/RFC8200, July 2017, www.rfc-editor.org/info/rfc8200.) For example, the version data indicates the version of internet protocol in use. The traffic class data indicates the class of data for use in differentiated services and explicit congestion notification. The flow label may comprise data provided by a source of the downstream data 142 to label sequences of packets that are to be treated as a single flow. The source address is indicative of the source of the downstream data 142. The destination address is indicative of the destination to which the downstream data 142 is to be delivered. The routing data is indicative of one or more intermediate network nodes that were used to transfer the packet. The fragment data is data that is used by a source of the downstream data 142 to indicate that data having a bit size larger than would otherwise fit within a single message transfer unit to the destination has been fragmented across several packets. The authentication header data and encapsulating security data are indicative of authentication and encapsulation associated with the downstream data 142. (See Kent, S., “IP Authentication Header”, RFC 4302, DOI 10.17487/RFC4302, December 2005, www.rfc-editor.org/info/rfc4302.) The destination option data may comprise optional information to be examined by a destination node. The host identity protocol data may comprise data such as digital certificates to implement a host identity name space. (See Moskowitz, R., Ed., “Host Identity Protocol Version 2 (HIPv2)”, RFC 7401, DOI 10.17487/RFC7401, April 2015, www.rfc-editor.org/info/rfc7401.) The upstream data 112 may also comprise one or more headers as described with regard to the downstream data 142. During operation, the traffic analysis system 146 may analyze at least a portion of the downstream data 142. The traffic analysis system 146 may also analyze at least a portion of the upstream data 112. For example, the traffic analysis system 146 may compare source address, destination address, sequence numbers, and so forth to determine that a particular packet in upstream data 112 is associated with a particular packet in downstream data 142. The response header data 168 may comprise information indicative of whether a response to the downstream data 142 is expected. The parameters 502 may include response expected, expected response size, expected response delay, and so forth. For example, the response expected parameter may have a single bit value that is 1 if a response is expected and 0 if no response is expected. The expected response size may be indicative of an estimated data size that is expected in the responsive upstream data 112. The expected response size may be used to determine the prospective grant data 166. For example, the expected response size and a count of expected responses may be used to determine the amount of data estimated to be sent from the UT 108 to the satellite 102. Based on the amount of data estimated, the prospective grant data 166 may be appropriately sized. The expected response delay is an estimate of the time between when the UT 108 processes or forwards the downstream data 142 and when the responsive upstream data 112 is received.
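The packet-pairing comparison mentioned above (matching an upstream packet to the downstream packet it answers by address and sequence number) might look like the following. The dictionary keys and the TCP-style acknowledgement rule are assumptions for illustration.

```python
def is_response_to(upstream: dict, downstream: dict) -> bool:
    """Return True if `upstream` appears to answer `downstream`.

    Assumes TCP-like packets with src/dst endpoints, a sequence number,
    a payload length, and an acknowledgement number.
    """
    return (upstream["src"] == downstream["dst"]
            and upstream["dst"] == downstream["src"]
            and upstream["ack"] == downstream["seq"] + downstream["payload_len"])

down = {"src": "10.0.0.1:443", "dst": "ut-108:5000", "seq": 1000, "payload_len": 250}
up = {"src": "ut-108:5000", "dst": "10.0.0.1:443", "ack": 1250}
assert is_response_to(up, down)
```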
For example, the traffic analysis system 146 may compare a downlink timestamp indicative of a first time the downstream data 142 was transmitted to the UT 108 with an uplink timestamp indicative of a second time the upstream data 112 was transmitted from the UT 108 to the satellite 102. The expected response delay may be calculated as the difference between the first time and the second time. The expected response delay may be a minimum, maximum, average, moving average, and so forth of the differences from multiple packets. The expected response delay may be included in the response header data 168. In some implementations, the expected response delay may be determined by the UT 108 and then provided to the traffic analysis system 146. For example, the expected response delay may be determined by the UT 108 and then included in the upstream data 112. One or more of the resource scheduler 172 or the traffic analysis system 146 may extract the expected response delay for subsequent use with regard to similar packets. For example, the resource scheduler 172 may use the expected response delay to determine when uplink resources should be allocated. This minimizes the occurrence of a prospective grant being provided when the responsive upstream data 112 is not yet ready for transmission. The prospective grant data 166 comprises information that is indicative of uplink resources that are allocated to at least the UT 108. The prospective grant data 166 may include one or more parameters 502 such as a UT identifier, timeslot start data, timeslot duration data, timeslot end data, priority contention data, communication parameters associated with the allocated uplink resources, and so forth. For example, the UT identifier may comprise a media access control value that indicates a particular UT 108. The timeslot start may indicate a time that the prospective grant access interval 326 begins, relative to a synchronization signal, clock time, or other time source. The timeslot duration may indicate how long the timeslot is available. In another implementation, the prospective grant data 166 may include timeslot end data indicative of a time that the allocated timeslot concludes. In implementations where the priority contention access interval 424 is in use, priority contention data may indicate permission to use the priority contention access interval 424. One or more communication parameters may specify modulation coding, uplink frequency data indicative of an uplink frequency to be used, downlink frequency data indicative of a downlink frequency to be used, subcarrier data indicative of a subcarrier to be used, and so forth. The processes and methods discussed in this disclosure may be implemented in hardware, software, or a combination thereof. In the context of software, the described operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more hardware processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. Those having ordinary skill in the art will readily recognize that certain steps or operations illustrated in the figures above may be eliminated, combined, or performed in an alternate order. Any steps or operations may be performed serially or in parallel.
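Circling back to the expected response delay measurement described at the start of this passage: one plausible realization of the moving-average option is sketched below. The class name and window size are arbitrary assumptions.

```python
from collections import deque

class ResponseDelayEstimator:
    """Moving average of observed response delays (one of the minimum,
    maximum, average, or moving-average options mentioned above)."""

    def __init__(self, window: int = 32):
        self.samples = deque(maxlen=window)   # keep the most recent deltas

    def observe(self, downlink_ts: float, uplink_ts: float) -> None:
        # The delay is the difference between when the downstream data was
        # transmitted and when the responsive upstream data was sent.
        self.samples.append(uplink_ts - downlink_ts)

    def expected_delay(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

est = ResponseDelayEstimator()
est.observe(downlink_ts=0.000, uplink_ts=0.052)
est.observe(downlink_ts=1.000, uplink_ts=1.048)
print(est.expected_delay())   # ~0.050 seconds
```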
Furthermore, the order in which the operations are described is not intended to be construed as a limitation. Embodiments may be provided as a software program or computer program product including a non-transitory computer-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform processes or methods described herein. The computer-readable storage medium may be one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, and so forth. For example, the computer-readable storage medium may include, but is not limited to, hard drives, optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. Further embodiments may also be provided as a computer program product including a transitory machine-readable signal (in compressed or uncompressed form). Examples of transitory machine-readable signals, whether modulated using a carrier or unmodulated, include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, including signals transferred by one or more networks. For example, the transitory machine-readable signal may comprise transmission of software by the Internet. Separate instances of these programs can be executed on or distributed across any number of separate computer systems. Thus, although certain steps have been described as being performed by certain devices, software programs, processes, or entities, this need not be the case, and a variety of alternative implementations will be understood by those having ordinary skill in the art. Additionally, those having ordinary skill in the art will readily recognize that the techniques described above can be utilized in a variety of devices, physical spaces, and situations. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the claims. 16928317 amazon technologies, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Apr 16th, 2019 12:00AM https://www.uspto.gov?id=US11316733-20220426 Client configurable hardware logic and corresponding signature Disclosed are techniques regarding aspects of implementing client configurable logic within a computer system. The computer system can be a cloud infrastructure. The techniques can include associating signature information with the client configurable logic for various purposes. 11316733 1. A system, comprising: programmable logic hardware configured to: be configured using first configuration data; and generate a first signature as a result of being configured using the first configuration data; a verification circuit configured to: obtain a second signature that is associated with second configuration data from a component of the system that is separate from the programmable logic hardware; obtain the first signature generated by the programmable logic hardware; compare the first signature to the second signature; determine, by comparing the first signature to the second signature, whether the programmable logic hardware is correctly configured according to the second configuration data; and perform an action based on the determination of whether the programmable logic hardware is correctly configured according to the second configuration data. 2. The system of claim 1, wherein: the first signature is embedded in an image generated by compiling the first configuration data; and wherein the programmable logic hardware is configured based on loading the image into the programmable logic hardware. 3. The system of claim 1, wherein the programmable logic hardware is configured to provide the first signature by storing the first signature in a register accessible by the verification circuit, or by outputting the first signature to the verification circuit via a data stream. 4. The system of claim 3, wherein: the first signature has a time-varying component; and the programmable logic hardware is configured to provide the time-varying component of the first signature based on a current time. 5. The system of claim 1, wherein the system further comprises: a physical interface to receive the first configuration data; and shell logic coupled to the programmable logic hardware and to the physical interface; wherein the shell logic is configured to: prevent the programmable logic hardware from accessing the physical interface except through the shell logic. 6. The system of claim 5, wherein the shell logic resides within a host domain; wherein at least a part of the programmable logic hardware that is programmable by the second configuration data resides in a client domain; and wherein the second configuration data is client configuration data. 7. The system of claim 1, wherein the second signature is included in metadata that corresponds to the second configuration data. 8. The system of claim 7, wherein the metadata is included in a manifest, the manifest being linked to the second configuration data in a logic repository database. 9. The system of claim 1, wherein: the verification circuit or the programmable logic hardware is configured to perform a data transformation of the first signature to obtain a transformed first signature; and the verification circuit is configured to compare the first signature with the second signature by comparing the transformed first signature with the second signature. 10.
The system of claim 1, wherein: each of the first configuration data and the second configuration data comprises a first portion and a second portion; the first portion of the first configuration data is associated with a first portion of the first signature; the second portion of the first configuration data is associated with a second portion of the first signature; the first portion of the second configuration data is associated with a first portion of the second signature; the second portion of the second configuration data is associated with a second portion of the second signature; and the verification circuit is configured to determine whether the programmable logic hardware is correctly configured according to the second configuration data by comparing the first portion of the first signature with the first portion of the second signature, and based on comparing the second portion of the first signature with the second portion of the second signature. 11. The system of claim 10, wherein the verification circuit is configured to, based on determining that the first portion of the first signature matches the first portion of the second signature and that the second portion of the first signature mismatches with the second portion of the second signature, output an indication that the second portion of the first configuration data contains an error. 12. The system of claim 1, wherein the first signature is generated at least in part by performing at least one of: a checksum, a secure hash, or a cyclic redundancy check on the first configuration data. 13. The system of claim 1, wherein the action is determined based on the first signature. 14. The system of claim 1, wherein the verification circuit is configured to, responsive to determining that the programmable logic hardware is not correctly configured according to the second configuration data, perform the action, the action comprising at least one of: notifying a virtual machine coupled to and included in a same domain as the programmable logic hardware, enabling a shell logic that encapsulates the programmable logic hardware to isolate the programmable logic hardware, or enabling the shell logic to reconfigure the programmable logic hardware. 15. The system of claim 1, further comprising a hardware processor configured to execute a repository service to configure the programmable logic hardware based on at least one of the first configuration data or the second configuration data. 16. The system of claim 1, wherein the programmable logic hardware comprises at least one of: a field programmable gate array (FPGA), a programmable logic array (PLA), a programmable array logic (PAL), a generic array logic (GAL), or a complex programmable logic device (CPLD). 17. The system of claim 1, wherein: the first configuration data is the same as the second configuration data; and the first signature matches the second signature indicating that the programmable logic hardware is correctly configured according to the second configuration data. 18. 
A method comprising: obtaining a first signature generated by a programmable logic hardware as a result of being configured using first configuration data; obtaining a second signature by a verification circuit from a component that is separate from the programmable logic hardware, the second signature being associated with second configuration data; comparing the first signature to the second signature; determining, by comparing the first signature to the second signature, whether the programmable logic hardware is correctly configured according to the second configuration data; and performing an action based on the determination of whether the programmable logic hardware is correctly configured according to the second configuration data. 19. The method of claim 18, wherein: the first signature is embedded in an image generated by compiling the first configuration data; and the programmable logic hardware is configured based on loading the image into the programmable logic hardware. 20. The method of claim 19, wherein the first signature is obtained from at least one of: a register, or a data stream output by the programmable logic hardware. 20 CROSS REFERENCE TO RELATED APPLICATIONS This application is a continuation of U.S. application Ser. No. 15/669,808, filed Aug. 4, 2017, issued as U.S. Pat. No. 10,326,651 on Jun. 18, 2019, and titled “CLIENT CONFIGURABLE HARDWARE LOGIC AND CORRESPONDING SIGNATURE”, which claims priority to and is a non-provisional application of U.S. Provisional Application No. 62/486,907, filed Apr. 18, 2017, and titled “CLIENT CONFIGURABLE HARDWARE LOGIC AND CORRESPONDING METADATA WITHIN A VIRTUALIZED ENVIRONMENT”, the contents of which are herein incorporated in their entireties. BACKGROUND Cloud computing techniques can include use of networked computing resources of a cloud infrastructure to be made available for use by clients of a cloud services provider. Clients can access the networked computing resources via a network, such as the internet, to configure the networked computing resources to provide a service or to access a service provided by another client. Cloud service providers may receive monetary compensation from clients in exchange for access to or use of the networked computing resources. Cloud computing techniques can facilitate more efficient use of networked computing resources by, for example, enabling the resources to be allocated as needed between clients (e.g., for clients to scale services operating on cloud computing resources or to prototype new cloud enabled services) and/or by allocating hardware sequentially to service several clients. Clients of cloud services may have diverse computing requirements resulting from different use cases. A cloud service provider can include various computer systems having different types of components with varying levels of performance and/or functionality. Thus, a client can select a computer system that is potentially more efficient at executing a particular task. For example, the cloud service provider can provide systems with varying combinations of processing performance, memory performance, storage capacity or performance, and networking capacity or performance. However, some clients may desire to use hardware that is proprietary or highly specialized for executing their computing tasks. BRIEF DESCRIPTION OF THE DRAWINGS Various embodiments in accordance with the present disclosure will be described with reference to the drawings, in which: FIG. 
1 illustrates a cloud computing system and corresponding domains according to certain embodiments. FIG. 2 illustrates a system diagram showing an example of a system including a client configurable logic circuit. FIG. 3 illustrates a system diagram showing additional features of a system including a client configurable logic circuit. FIG. 4 illustrates a system diagram showing an example of a system including a logic repository service for supplying configuration data to a client configurable logic circuit. FIG. 5 illustrates an example system diagram showing a plurality of virtual machine instances running in a multi-tenant environment including computer devices having a client configurable logic circuit. FIG. 6 illustrates an example system according to certain embodiments including a client domain and a host domain. FIG. 7 illustrates a system diagram showing an example of a system including a logic repository service for supplying configuration data to a client configurable logic circuit. FIG. 8 illustrates an example system diagram illustrating features of the disclosure related to signatures. FIG. 9 illustrates an example flowchart implementing certain features of the system of FIG. 7. FIG. 10 illustrates an example environment of a computer system for implementing aspects in accordance with some embodiments. FIG. 11 illustrates an example architecture for features and systems described herein that includes one or more service provider computers and/or a user device connected via one or more networks, according to some embodiments. DETAILED DESCRIPTION Cloud service providers can utilize virtualization techniques which can include physically or logically isolating host services to a host domain and client services to client domains. A domain refers to, for example, a group of logical computer devices and/or hardware computers and devices on a network that are administered as a unit with common rules and procedures. By isolating a client domain from a host domain, a client may be able to access certain functionality provided by a cloud service and may be restricted from accessing functionality of the host domain. For example, host domain functionality can include management of the cloud infrastructure which, if accessed by a client system, could jeopardize functionality of the cloud system. Furthermore, logical domain separation allows various platforms and services to be abstracted from the underlying hardware, providing flexibility within a cloud infrastructure to service a multitude of varied and differing client requirements and more efficiently assign the underlying hardware and other computer resources between client domains. A certain amount of overhead may be introduced to facilitate logical separation of different domains. This overhead may take the form of reduced performance of client services and/or client access to services. Certain clients may desire to have specialized computing resources (e.g., hardware computing resources) that may enable more efficient processing of certain client functions. Disclosed are techniques that can be used to enable use of specialized hardware computing resources within a cloud environment while still providing separation between a host domain and a client domain on which the specialized hardware computing resources may reside. A solution for providing specialized hardware computing resources in a cloud environment is to provide a networked computing resource including a client configurable logic circuit. 
The client configurable logic circuit can be included within an add-in card to a networked computer device and may include configurable logic, such as a field-programmable gate array (FPGA). Configurable logic is hardware that can be programmed or configured to perform a logic function. Configurations of the configurable logic can be specified according to configuration data that is applied to or loaded by the configurable logic. For example, a user of the computing resources can provide a specification (such as source code written in a hardware description language) for configuring the configurable logic, the configurable logic can be configured according to the specification, and the configured logic can thereafter be used to perform a task for the user. However, allowing a client of a cloud service access to low-level hardware of a cloud computer device can potentially introduce security and privacy issues within the cloud infrastructure. As a specific example, a faulty or malicious design from one client could potentially cause a denial of service to other users if the configured logic causes one or more devices within the cloud infrastructure to malfunction (e.g., crash, hang, or reboot) or be denied infrastructure resources. As another specific example, a faulty or malicious design from one user could potentially corrupt or read data from another client if the configured logic is able to read and/or write the other client's memory space. As described herein, a cloud infrastructure can include a variety of computer resources, where one type of the computer resources can include a computer device comprising a client configurable logic circuit. The client configurable logic circuit can be programmed or configured by a client of the cloud infrastructure so that hardware (e.g., the configurable logic) of the computing resource is customized by the client. For example, the client can program the configurable logic so that it functions as a hardware accelerator that is tightly coupled to the computer device. As a specific example, the hardware accelerator can be accessible via a local interconnect, such as Peripheral Component Interconnect Express (PCI-Express or PCIe), of the computer device. The client can execute an application on the computer device and tasks of the application can be performed by the hardware accelerator using PCIe commands. By tightly coupling the hardware accelerator to the computer device, the latency between the accelerator and the computer device can be reduced, which can potentially increase the processing speed of the application. Management and organization of client configuration data within a cloud infrastructure system to implement client access to hardware components within the system can introduce a number of issues. As should be understood, a cloud infrastructure system may service a large number (e.g., thousands) of clients that may concurrently access and utilize functionality of a cloud infrastructure system, including configurable hardware disclosed herein. Within the cloud infrastructure, hardware programmable logic component(s) may be reconfigured in a time partitioned and/or space partitioned manner to service requests of clients of the cloud infrastructure system. Thus, a cloud infrastructure system disclosed herein may configure and reconfigure hardware components according to a relatively large number of images of client configuration data, and at a relatively high rate.
Careful management can mitigate both incorrect configuration of hardware components (e.g., selecting an incorrect image for configuration and/or an incorrect logic device) and improper configuration of hardware components (e.g., due to a corrupt configuration). Techniques disclosed herein can be used to associate and manage signatures with corresponding sets/images of client configuration data. A specific client configuration data image can include a signature. After a hardware component is configured according to the client configuration data image, the hardware component may output or otherwise make available the signature (or data indicative of the signature). The signature may be used to uniquely identify the corresponding client configuration data image and/or verify that the hardware component was properly configured according to the client configuration data image. A corresponding signature may be stored as metadata corresponding to the client configuration data image. The corresponding signature may be a copy of the signature embedded within the client configuration data image or otherwise be a counterpart to the embedded signature. By comparing the embedded signature with the stored signature, a device of a cloud infrastructure can determine that an intended hardware component was configured with an intended client configuration data image and/or that the hardware component was not corrupted during configuration. FIG. 1 illustrates a simplified logical diagram of a host domain 101 of a cloud infrastructure system that may provide one or more cloud enabled services to a client 106 or a type of client referred to as a partner device 108. Host domain 101 can reside within a cloud infrastructure system. Computer devices 100a-100c and host access device 122 can each reside within the cloud infrastructure system, within host domain 101. Hypervisor 112, client virtual machine 110, host privileged virtual machine 120, and hardware 118 can reside within computer device 100a. Hypervisor 112 can provide separation between domains of the cloud infrastructure system by, for example, managing interactions between logical and/or physical devices across domains. A hypervisor may be used to generate and/or manage logical components that can be implemented using hardware components of a cloud infrastructure system. Partner device 108 may be a client of host domain 101 that is privileged to utilize cloud resources to provide a service. For example, partner device 108 can be used to request, via host access device 122, one or more resources of the cloud infrastructure system to enable a service. Client 106 may be a user of a service of partner device 108. Thus, partner device 108 may have more privileged access to the cloud infrastructure system than client 106. The service can be enabled through use of one or more hardware components 118 of computer device 100a within host domain 101. The one or more hardware components 118 can be logically abstracted, via hypervisor 112, into a client virtual machine 110 that client 106 or partner device 108 is privileged to access. Hypervisor 112 can manage client virtual machine 110 and other virtual machines that may operate within host domain 101 (such as host privileged virtual machine 120).
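Stepping back to the signature scheme described at the start of this passage: the disclosure mentions checksums, secure hashes, and cyclic redundancy checks as candidate signature generators. The sketch below uses SHA-256 purely for illustration; the function names and flow are assumptions, not the patented mechanism.

```python
import hashlib

def signature_of(image: bytes) -> str:
    """Stand-in for the signature a configured device reports; here it is
    simply a secure hash of the configuration data image."""
    return hashlib.sha256(image).hexdigest()

def verify_configuration(reported: str, manifest: str) -> bool:
    """Compare the hardware-reported signature with the signature stored
    as metadata (e.g., in a manifest) for the intended image."""
    return reported == manifest

image = b"<client configuration bitstream>"
manifest_signature = signature_of(image)       # stored alongside the image
assert verify_configuration(signature_of(image), manifest_signature)
assert not verify_configuration(signature_of(b"corrupted"), manifest_signature)
```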
Host privileged virtual machine 120 is a privileged type of virtual machine that may have direct access to hardware 118, drivers, or an operating system of computer device 100a. Hardware 118 can include processors, memory, fixed function hardware, peripherals, and/or client configurable logic 114. The operating system may manage/schedule interactions between logical virtual machines and physical components within host domain 101. Client virtual machine 110 can be one of several virtual machines operating within computer device 100a and can be logically separated from hardware devices of host domain 101 that service client virtual machine 110. In certain embodiments, a computer device, such as computer device 100b, may be a purely hardware device that is included within a client domain (such as client domain 102). A client, via client device 106, may access computer device 100b via a corresponding client domain. Thus, hardware device(s) may be configured for access by a client of a service provider/cloud infrastructure without use of logical/virtualization techniques by isolating access to the hardware device(s) to the client (while included within the client domain). An access control entity can limit access between components within a service-provider infrastructure (such as a cloud infrastructure network) in a spatial and/or time partitioned manner to enable each client or user of the service provider network to access components within their respective domain while limiting access by other users. However, access may still be allowed for a device or user having a higher privilege level (e.g., a device or user within host domain 101). A hypervisor is an example of an access control entity that can manage access between logical and/or physical devices between domains. The logical separation of client virtual machine 110 can be accomplished by logically isolating client virtual machine 110 into a client domain 102. Client domain 102 can be separated from a host domain 101 of a cloud infrastructure system. Hypervisor 112 may reside on the host domain 101 but have access to client domain 102, whereas virtual or physical devices of client domain 102 may be prevented from accessing virtual or physical devices of host domain 101 (or other client domains). Techniques disclosed herein can be used to create and manage client configurable logic 114 within the cloud infrastructure system. Client configurable logic 114 can include configurable hardware logic that can be used by partner device 108, for example, to implement and have access to a hardware device within the cloud infrastructure system. Client configurable logic 114 can be configured to act as a hardware accelerator, for example. The hardware accelerator can be created using programmable logic device(s) such that multiple clients may be able to configure differing accelerators using a same underlying hardware device. As disclosed herein, client configurable logic 114 may reside within client domain 102. However, access between client virtual machine 110 and client configurable logic 114 may pass through a host domain 101 of a cloud infrastructure system so that the cloud infrastructure system can manage and monitor access to the underlying hardware components implementing client configurable logic 114. FIG. 2 is a system diagram showing an example of a computing system 200 including a peripheral device 210 and a computer device 220.
System 200 can be used to implement client virtual machine 110 and/or client configurable logic 114 of FIG. 1. For example, client configurable logic 240 can be similar to client configurable logic 114, and client virtual machine 110 can be implemented within computer device 220. Computer device 220 can include a central processing unit (CPU) 222, memory 224, and a host interface 226. The CPU 222 can be used to execute instructions stored in the memory 224. For example, the memory 224 can be loaded with all or a portion of the cloud service and the CPU 222 can execute the instructions of the cloud service. The cloud service can communicate with a hardware accelerator of the peripheral device 210 by issuing commands using the host interface 226. A command can be a read request, a write request, a read response, a message, an interrupt, or various other data transmittals. The command can occur on a bus shared by multiple components. Specifically, values of signal lines of the bus can be modulated to transfer data on the bus using a communications protocol of the bus. The command can occur over one or more phases, such as an address phase and one or more data phases. Additionally or alternatively, the command can occur using one or more serial lines of a point-to-point interconnect that connects two components. Specifically, the command can be sent in a packet that is transmitted over the point-to-point interconnect. The host interface 226 can include a bridge for communicating between the CPU 222 using a local or front-side interconnect and components using a peripheral or expansion interconnect. Specifically, the host interface 226 can be connected to a physical interconnect that is used to connect the computer device 220 to the peripheral device 210 and/or to other components. For example, the physical interconnect can be an expansion bus for connecting multiple components together using a shared parallel bus or serial point-to-point links. As a specific example, the physical interconnect can be PCI express, PCI, or another physical interconnect that tightly couples the computer device 220 to the peripheral device 210. Thus, the computer device 220 and the peripheral device 210 can communicate using PCI bus commands or PCIe packets, for example. The peripheral device 210 can include a client configurable logic circuit 225 including shell logic 230 and client configurable logic 240. The shell logic 230 can include a peripheral interface 212, a management module 214, and a data path module 216. The client configurable logic 240 can include hardware that is configurable to implement a hardware accelerator, for example. In other words, the client configurable logic 240 can include logic that is programmable to perform a given function. For example, the client configurable logic 240 can include programmable logic blocks comprising combinational logic and/or look-up tables (LUTs) and sequential logic elements (such as flip-flops and/or latches), programmable routing and clocking resources, programmable distributed and block random access memories (RAMs), digital signal processing (DSP) bitslices, and/or programmable input/output pins. The shell logic 230 can be used to encapsulate the client configurable logic 240. For example, the client configurable logic 240 can interface with various components of the shell logic 230 using predefined interfaces so that the client configurable logic 240 is restricted in access to components of peripheral device 210.
The shell logic 230 can include logic that isolates different components of the peripheral device 210 from the client configurable logic 240. As one example, hard macros of the peripheral device 210 (such as a configuration access port or circuits for signaling on the physical interconnect) can be masked off so that the client configurable logic 240 cannot directly access the hard macros. The shell logic 230 can include the peripheral interface 212 for communicating with the computer device 220. Specifically, the peripheral interface 212 can be used to enable communication with the computer device 220 using a communication protocol and a physical interconnect. As one example, the computer device 220 can communicate with the peripheral device 210 using a command including an address associated with the peripheral device 210. Similarly, the peripheral device 210 can communicate with the computer device 220 using a command including an address associated with the computer device 220. The addresses associated with the various devices connected to host interface 226 can be predefined by a system architect and programmed into the devices. Additionally or alternatively, the communication protocol can include an enumeration sequence where the devices connected to the host interface 226 are queried and where addresses are assigned to each of the devices as part of the enumeration sequence. As one example, the host interface 226 can issue queries to each of the devices connected to the host interface 226. The peripheral interface 212 can respond to the queries by providing information about the peripheral device 210, such as how many functions are present on the peripheral device 210, and a size of an address range associated with each of the functions of the peripheral device 210. Based on this information, addresses of the computing system 200 can be allocated such that each function of each device connected to the physical interconnect is assigned a non-overlapping range of addresses. After enumeration, the peripheral interface 212 can route commands to functions of the peripheral device 210 based on an address of the command. The shell logic 230 can include the management module 214 that can be used for managing and configuring the peripheral device 210. Commands and data can be sent from the computer device 220 to the management module 214 using commands that target the address range of the management module 214. For example, the computer device 220 can generate commands to transfer data (e.g., configuration data) and/or write control registers of the peripheral device 210 that are mapped to one or more addresses within the address range of the management module 214. Writing the control registers can cause the peripheral device 210 to perform operations, such as configuring and managing the peripheral device 210. As a specific example, configuration data corresponding to configurable logic to be implemented in the client configurable logic 240 can be transmitted from the computer device 220 to the peripheral device 210 in one or more commands between host interface 226 and peripheral interface 212. A command 250 to configure the client configurable logic 240 with the configuration data can be transmitted from the computer device 220 to the peripheral device 210. Specifically, the command 250 can write a value to a control register mapped to the management module 214 address space that will begin configuring the client configurable logic 240.
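The enumeration sequence described above assigns each function a non-overlapping address range; a toy allocator along those lines is shown below. The device names and range sizes are invented for illustration.

```python
def assign_address_ranges(devices: dict, base: int = 0x1000_0000) -> dict:
    """Give every enumerated function a non-overlapping address range.

    `devices` maps a device name to the per-function address-range sizes
    it reported during enumeration.
    """
    allocations, cursor = {}, base
    for device, sizes in devices.items():
        for function, size in enumerate(sizes):
            allocations[(device, function)] = (cursor, cursor + size - 1)
            cursor += size
    return allocations

ranges = assign_address_ranges({"peripheral_210": [0x4000, 0x1000]})
# ('peripheral_210', 0) -> (0x10000000, 0x10003FFF)
# ('peripheral_210', 1) -> (0x10004000, 0x10004FFF)
```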
In one embodiment, the configuration data can be transferred from the computer device 220 to the peripheral device 210 before the configuration of the client configurable logic 240 begins. For example, the management module 214 can cause the configuration data to be stored in an on-chip or off-chip memory accessible by the peripheral device 210, and the configuration data can be read from the memory when the client configurable logic 240 is being configured. In another embodiment, the configuration data can be transferred from the computer device 220 to the peripheral device 210 after the configuration of the client configurable logic 240 begins. For example, a control register can be written to begin configuration of the client configurable logic 240 and the configuration data can be streamed into or loaded onto the client configurable logic 240 as commands including the configuration data are processed by the management module 214. The shell logic 230 can include a data path module 216 that can be used to exchange information (e.g., data input/output 260) between the computer device 220 and the peripheral device 210. Specifically, commands and data can be sent from the computer device 220 to the data path module 216 using commands that target the address range of the data path module 216. Similarly, the peripheral device 210 can communicate with the computer device 220 using a command including an address associated with the computer device 220. The data path module 216 can act as a translation layer between the peripheral interface 212 and the client configurable logic 240. Specifically, the data path module 216 can include an interface for receiving information from the client configurable logic 240 and the data path module 216 can format the information for transmission from the peripheral interface 212. Formatting the information can include generating control information for one or more commands and partitioning data into blocks that are sized to meet protocol specifications. Thus, the data path module 216 can be interposed between the client configurable logic 240 and the physical interconnect. In this manner, the client configurable logic 240 can potentially be blocked from formatting commands and directly controlling the signals used to drive the physical interconnect so that the client configurable logic 240 cannot be used to inadvertently or maliciously violate protocols of the physical interconnect. FIG. 3 illustrates a system diagram showing an example of a system 300 including a peripheral device 310 and a computer device 320. Peripheral device 310 can be similar to peripheral device 210 and computer device 320 can be similar to computer device 220. The computer device 320 and the peripheral device 310 can be connected via a physical interconnect 330. For example, the physical interconnect 330 can be PCI express, PCI, or any other interconnect that couples the computer device 320 to the peripheral device 310. The computer device 320 can include a CPU 322, memory 324, and a host interface 326. For example, the host interface 326 can provide bridging capability so that the computer device 320 can access devices that are external to the computer device 320. For example, the host interface 326 can include root complex functionality as used in PCI express. The peripheral device 310 can include a client configurable logic circuit and other hardware. The client configurable logic circuit can be configured or programmed to perform various functions of the peripheral device 310.
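The partitioning step performed by the data path module 216 described above can be sketched as follows; the 256-byte maximum payload and the header fields are assumptions for illustration only.

```python
MAX_PAYLOAD = 256  # assumed protocol maximum payload per command, in bytes

def format_for_interconnect(data: bytes, base_address: int):
    """Split an arbitrary buffer into protocol-sized commands, each with its
    own control header, roughly as the data path module might."""
    commands = []
    for offset in range(0, len(data), MAX_PAYLOAD):
        chunk = data[offset:offset + MAX_PAYLOAD]
        header = {"address": base_address + offset, "length": len(chunk)}
        commands.append((header, chunk))
    return commands
```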
The client configurable logic circuit can be configured multiple times with different configurations so that the client configurable logic circuit can perform different functions to service multiple clients. The functions of the peripheral device 310 can be categorized based upon the purpose or capabilities of each function, or based upon when the function is loaded into the peripheral device 310. For example, the peripheral device 310 can include static logic, reconfigurable logic, and hard macros. The functionality for the static logic, reconfigurable logic, and hard macros can be configured at different times. Thus, the functionality of the peripheral device 310 can be loaded incrementally. A hard macro can perform a predefined function and can be available when the peripheral device 310 is powered on. For example, a hard macro can include hardwired circuits that perform a specific function. As specific examples, the hard macros can include a configuration access port (CAP) 311 for configuring the peripheral device 310, a serializer-deserializer transceiver (SERDES) 312 for communicating serial data, a memory or dynamic random access memory (DRAM) controller 313 for signaling and controlling off-chip memory (such as a double data rate (DDR) DRAM 381), and a storage controller 314 for signaling and controlling a storage device 382. Client configurable logic and shell logic may reside on a same programmable logic device. The static logic can be loaded at boot time into the shell logic. Configuration data specifying the functionality of the static logic can be loaded from an on-chip or off-chip flash memory device during a boot-up sequence. The boot-up sequence can include detecting a power event (such as by detecting that a supply voltage has transitioned from below a threshold value to above the threshold value) and deasserting a reset signal in response to the power event. An initialization sequence can be triggered in response to the power event or the reset being deasserted. The initialization sequence can include reading configuration data stored on the flash device and loading the configuration data onto the peripheral device 310 using the configuration access port 311 so that at least a portion of the shell logic is programmed with the functionality of the static logic. After the static logic is loaded, the peripheral device 310 can transition from a loading state to an operational state that includes the functionality of the static logic. The client configuration data can be used to configure programmable logic hardware within the client configurable logic while the peripheral device 310 is operational (e.g., after the static logic has been loaded). The client configuration data corresponding to the client configurable logic can be stored in an on-chip or off-chip memory and/or the configuration data can be received or streamed from an interface (e.g., the peripheral interface 356) of the peripheral device 310. The client configurable logic circuit can include several non-overlapping client configurable logic regions, which can each interface with static logic of shell logic. For example, the regions can be arranged in an array or other regular or semi-regular structure. For example, the array structure may include holes or blockages where hard macros are placed within the array structure. The different regions can communicate with each other, the static logic, and the hard macros by using signal lines that can be specified as static logic of shell logic. 
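The boot-up sequence for loading the static logic can be summarized in a short Python sketch; the `flash` and `config_access_port` objects and their methods are hypothetical stand-ins for the hardware described above.

```python
def boot_peripheral(flash, config_access_port, supply_voltage_ok):
    """Sketch of the boot-up sequence: detect the power event, release
    reset, then load static-logic configuration data from flash."""
    while not supply_voltage_ok():        # wait for supply to cross the threshold
        pass
    config_access_port.deassert_reset()   # reset deasserted in response to the power event
    static_logic_image = flash.read_configuration_image()
    config_access_port.load(static_logic_image)  # program the shell's static logic
    return "operational"                  # loading state -> operational state
```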
The different regions can be configured at different points in time so that a first region can be configured at a first point in time and a second region can be configured at a second point in time. Commands from the computer device 320 bound for the peripheral device 310 can be identified using an address within the command. Specifically, if the address of the command falls within the range of addresses assigned to the peripheral device 310, the command is destined for the peripheral device 310. The command can be sent over the physical interconnect 330 and received at the peripheral interface 356. The peripheral interface 356 can be an endpoint of the physical interconnect 330. It should be understood that the physical interconnect 330 can include additional devices (e.g., switches and bridges) arranged in a fabric for connecting devices or components to the computer device 320. The address mapping module 350 can analyze the address of the command and determine where to route the command within the peripheral device 310 based on the address. For example, the management block 352 can be assigned a first range of addresses and different functions of the management plane can be accessed by using different addresses within that range. Commands with addresses falling within the range assigned to the management block 352 can be routed through the host logic private fabric 360 to the different blocks within the host domain. For example, commands can be addressed to a management and configuration block 362. Similarly, the client block 354 can be assigned a second range of addresses and different functions can be accessed by using different addresses within that range. The management and configuration block 362 can include functions related to managing and configuring the peripheral device 310. For example, the management and configuration block 362 can provide access to the configuration access port 311 so that the reconfigurable logic blocks can be configured. For example, the computer device 320 can send a command to the management and configuration block 362 to initiate loading of the client configuration data into the client configurable logic 340. Client configuration data corresponding to a respective configuration of the client configurable logic 340 can be sent from the computer device 320 to the management block 352. The management block 352 can route the client configuration data through the host logic private fabric 360 to the configuration access port 311 so that the client configuration data can be loaded. As another example, the management and configuration block 362 can store data about the peripheral device 310. For example, versions of the different client configuration data images, update histories, and other information can be stored in memory of the management and configuration block 362. The computer device 320 can read the memory to retrieve some or all of the data. Specifically, the computer device 320 can send a read request targeting the memory of the management and configuration block 362 and the management and configuration block 362 can generate read response data to return to the computer device 320. In certain embodiments, management and configuration block 362 can configure a logic device implementing client configurable logic 340. 
For example, configuration registers of a programmable logic device can be populated by management and configuration block 362 to, for example, set configurations of input/output pins, set clock values that at least a portion of the device operates according to, set a clock divider value, set operating voltage(s), set permissions between portions of the logic device, etc. In certain embodiments, components external to the logic device may be configured, for example, an externally provided clock to the logic device. Depending on a particular configuration of the logic device (e.g., per client configuration), management and configuration block 362 can configure the logic device accordingly. For example, a first client configuration may require the client configurable logic 340 to operate at a first frequency and a second client configuration may require the client configurable logic 340 to operate at a second frequency. It may be desirable to prevent a client from being able to access certain aspects regarding a configuration of a logic device implementing client configurable logic. For example, misconfiguration of the logic device may damage the device or may have adverse effects on shell logic or other clients' configurations of client configurable logic 340 (that might be concurrently operating, for example). The management block 352 can also be used to access private peripherals of the peripheral device 310. Private peripherals can include a JTAG (e.g., IEEE 1149.1) controller 370, light emitting diodes (LEDs) 371, a microcontroller 372, a universal asynchronous receiver/transmitter (UART) 373, a memory 374 (e.g., a serial peripheral interface (SPI) flash memory), and other components that may be accessible via a host domain. The management block 352 can access the private peripherals by routing commands through the host logic private fabric 360 and the private peripheral interface(s) 375. The private peripheral interface(s) 375 can directly communicate with the private peripherals. Public peripherals are configurable and may reside on a client domain, where they may be accessible by a client configurable logic circuit, or may reside on a host domain. For example, the public peripherals can be accessed by addressing commands within the address range assigned to the management block 352. Additionally or alternatively, the public peripherals can be accessed by addressing commands within the address range assigned to the client block 354. Examples of the public peripherals are other configurable hardware platform(s) (CHP(s)) 380, DRAM 381 (e.g., DDR DRAM), storage devices 382 (e.g., hard disk drives and solid-state drives), and other various components that can be used to generate, store, or process information. The public peripherals can be accessed via the public peripheral interfaces 385. Thus, the public peripheral interfaces 385 can be an intermediary layer interposed between the public peripherals and the other functions of the peripheral device 310. Specifically, the public peripheral interfaces 385 can format communications to the public peripherals into a native protocol of the public peripherals. Mailboxes 390 can be used to pass messages and other information between the client domain and the host domain. For example, the mailboxes 390 can include buffers, control registers (such as semaphores), and status registers.
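A toy model of such a mailbox is sketched below, with a bounded buffer plus a status flag standing in for the control and status registers; nothing here is a literal register map.

```python
from collections import deque

class Mailbox:
    """Minimal mailbox: a bounded buffer plus a status flag, so the client
    and host domains can exchange messages without direct access to each
    other's resources."""

    def __init__(self, depth: int = 16):
        self._buffer = deque(maxlen=depth)
        self.message_pending = False  # stands in for a status register

    def post(self, message: bytes) -> bool:
        if len(self._buffer) == self._buffer.maxlen:
            return False  # sender sees "busy" and retries later
        self._buffer.append(message)
        self.message_pending = True
        return True

    def collect(self):
        if not self._buffer:
            return None
        message = self._buffer.popleft()
        self.message_pending = bool(self._buffer)
        return message
```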
By using the mailboxes 390 as an intermediary between the client and host domains, isolation between the client domain and the host domain can be maintained while still providing functionality to pass messages between the two. The client block 354 can be used to access components residing on the client domain, such as the client configurable logic 340. For example, a command directed to the client configurable logic 340 can cause data to be loaded, processed, and/or returned to the computer device 320. Specifically, the client domain component can be accessed using commands having an address within the range assigned to the client block 354. For example, a command can be sent from the computer device 320 to the client configurable logic 340 via the client block 354. Specifically, commands addressed to the client block 354 can be routed through the peripheral fabric 364 to the client configurable logic 340. Responses from the client configurable logic 340 can be routed through the peripheral fabric 364 to the client block 354, and then back to the computer device 320. Additionally, the data and commands generated by the client configurable logic 340 can be monitored using a usage and command monitoring block 366. The monitoring block 366 can potentially identify commands or data that violate predefined rules and can generate an alert to be sent. Additionally or alternatively, the monitoring block 366 can terminate any commands generated by the client configurable logic 340 that violate any criteria of the monitoring block 366. Additionally, the monitoring block 366 can analyze information moving to or from the client configurable logic 340 so that statistics about the information can be collected and accessed. Data can also be transferred between the computer device 320 and the configurable logic by programming a direct memory access (DMA) engine 342. The DMA engine 342 can include control and status registers for programming or specifying DMA transfers from a source location to a destination location. As one example, the DMA engine 342 can be programmed to pull information stored within the memory 324 of computer device 320 into the client configurable logic 340 or into the public peripherals of the peripheral device 310. As another example, the DMA engine 342 can be programmed to push data that has been generated by the client configurable logic 340 to the memory 324 of the computer device 320. The data generated by the client configurable logic 340 can be streamed from the client configurable logic 340 or can be written to the public peripherals, such as the memory 381 or storage 382. The client configurable logic 340 can communicate with other configurable hardware platforms 380. For example, the other configurable hardware platforms 380 can be connected by one or more serial lines that are in communication with the SERDES 312. The client configurable logic 340 can generate commands to the different configurable hardware platforms 380, and the commands can be routed through the CHP fabric 344 to the corresponding serial lines (via the SERDES 312) of the configurable hardware platforms 380. Similarly, the client configurable logic 340 can receive information from other configurable hardware platforms 380 using the reverse path. FIG. 4 illustrates a system diagram showing an example of a system 400 including a logic repository service 410 for managing configuration data that can be used to configure configurable resources within a fleet of compute resources 420. 
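Programming a DMA engine of the kind described above typically amounts to filling a descriptor's worth of control registers and setting a start bit, as in this sketch; the register names and the duck-typed `dma_registers` interface are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DmaDescriptor:
    """Control-register contents for one transfer; the field names are
    illustrative, not taken from any particular DMA engine."""
    source: int       # e.g., a host-memory address
    destination: int  # e.g., an accelerator-local buffer
    length: int       # bytes to move

def program_dma(dma_registers, descriptor: DmaDescriptor) -> None:
    """Pull data from host memory into the client configurable logic
    (or push in the other direction, by swapping source and destination)."""
    dma_registers.write("SRC_ADDR", descriptor.source)
    dma_registers.write("DST_ADDR", descriptor.destination)
    dma_registers.write("LENGTH", descriptor.length)
    dma_registers.write("CONTROL", 0x1)        # start bit
    while dma_registers.read("STATUS") & 0x1:  # poll the busy flag
        pass
```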
System 400 can be utilized within the cloud infrastructure of FIG. 1. A cloud service provider can maintain the fleet of compute resources 420 for users of the services to deploy when a computing task is to be performed. The compute resources 420 can include computer devices 440 having client configurable logic circuit(s) 442 that can be programmed to include client specific hardware accelerator(s). The cloud service provider can manage the compute resources 420 using techniques disclosed herein to manage the configuration and operation of the client configurable logic circuit(s) 442. As one example, a host privileged machine (e.g., Host Privileged Machine 120) can execute a logic repository service 410 for accessing client logic data 432 (e.g., a specific client logic image/design) specified by a user, generating client configuration data 436 for configuring the client configurable logic circuit based on the logic design of the user, and downloading the validated configuration data 462 in response to a request 460 to configure an instance of the client configurable logic circuit. The download request 460 can be from the user that developed the configurable logic (e.g., a partner) or from a user that has acquired a license to use the configurable logic (e.g., a client). The client configuration data 436 can be created by the host privileged machine, a user, or a third-party separate from the user or the host privileged machine. For example, a marketplace of accelerator intellectual property (IP) can be provided to the users of the cloud infrastructure, and the users can potentially increase the speed of their applications by selecting an accelerator from the marketplace. The logic repository service 410 can be a network-accessible service, such as a web service. Web services are commonly used in cloud computing. A web service is provided at a network address over the web or the cloud. Users can initiate web service requests to computer devices of a cloud infrastructure and the computer devices can process the requests and return appropriate responses. The client web service requests are typically initiated using, for example, an API request. For purposes of simplicity, web service requests will be generally described below as API requests, but it is understood that other web service requests can be made. An API request is a programmatic interface to a defined request-response message system, typically expressed in JSON or XML, which is exposed via the web, most commonly by means of an HTTP-based web server. Thus, in certain implementations, an API can be defined as a set of Hypertext Transfer Protocol (HTTP) request interfaces, along with a definition of the structure of the messages used to invoke the API and the response messages, which can be in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. The API can specify a set of functions or routines that perform an action, which includes accomplishing a specific task or allowing interaction with a computer device. When a web service receives the API request from a client device, the web service can generate a response to the request and send the response to the endpoint identified in the request. Additionally or alternatively, the web service can perform actions in response to the API request without generating a response to the endpoint identified in the request.
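An illustrative request body for the kind of API request described next (a request to generate client configuration data) might look like the following; every field name and value is hypothetical, mirroring the description rather than any documented cloud API.

```python
import json

generate_request = {
    "description": "example accelerator design",
    "production_status": "trial",  # trial or production
    "encrypted": True,
    "input_location": "storage://example-bucket/design.tar",  # assumed URI scheme
    "input_type": "hdl_source",    # HDL source, netlist, or placed-and-routed design
    "instance_type": "F1.small",
    "output_location": "storage://example-bucket/output/",
}

print(json.dumps(generate_request, indent=2))
```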
The logic repository service 410 can receive an API request 430 to generate client configuration data for a client configurable logic circuit, such as the client configurable logic circuit(s) 442 of the computer device 440. For example, the API request 430 can be originated by a developer or partner user of the compute services provider. The request 430 can include fields for specifying data and/or metadata about the logic design, the configurable hardware platform, user information, access privileges, production status, and various additional fields for describing information about the inputs, outputs, and users of the logic repository service 410. As specific examples, the request can include a description of the design, a production status (such as trial or production), an encrypted status of the input or output of the service, a reference to a location for storing an input file (such as the hardware design source code), a type of the input file, an instance type of the configurable hardware, and a reference to a location for storing an output file or report. In particular, the request can include a reference to a hardware design specifying client logic data 432 for implementation on the configurable hardware platform. Specifically, a specification of the client logic data 432 and/or of the host logic 434 can be a collection of files, such as source code written in a hardware description language (HDL), a netlist generated by a logic synthesis tool, and/or placed and routed logic gates generated by a place and route tool. The compute resources 420 can include many different types of hardware and software categorized by instance type. In particular, an instance type specifies at least a portion of the hardware and software of a resource. For example, hardware resources can include servers with central processing units (CPUs) of varying performance levels (e.g., different clock speeds, architectures, cache sizes, and so forth), servers with and without co-processors (such as graphics processing units (GPUs) and configurable logic), servers with varying capacity and performance of memory and/or local storage, and servers with different networking performance levels. Software resources can include different operating systems, application programs, and drivers. One example instance type can comprise the computer device 440 including a central processing unit (CPU) 444 in communication with the client configurable logic circuit(s) 442. The client configurable logic circuit(s) 442 can include programmable logic such as an FPGA, a programmable logic array (PLA), a programmable array logic (PAL), a generic array logic (GAL), or a complex programmable logic device (CPLD), for example. As specific examples, an “F1.small” instance type can include a first type of computer device with one capacity unit of FPGA resources, an “F1.medium” instance type can include the first type of computer device with two capacity units of FPGA resources, an “F1.large” instance type can include the first type of computer device with eight capacity units of FPGA resources, and an “F2.large” instance type can include a second type of computer device with eight capacity units of FPGA resources. The logic repository service 410 can generate client configuration data 436 in response to receiving the API request 430. The generated client configuration data 436 can be based on the client logic data 432 and the host logic 434.
Specifically, the generated client configuration data 436 can include information that can be used to program or configure the client configurable logic circuit(s) 442 so that it performs the functions specified by the client logic data 432 and the host logic 434. As one example, the cloud service provider can generate the host logic 434 including logic for interfacing between the CPU 444 and the client configurable logic circuit(s) 442. Specifically, the host logic 434 can include logic for masking or shielding the client logic data 432 from communicating directly with the CPU 444 so that all CPU-configurable logic commands pass through the host logic 434. In this manner, the host logic 434 can potentially reduce security and availability risks that could be introduced by the client logic data 432. Generating the client configuration data 436 can include performing checks and/or tests on the client logic data 432, integrating the client logic data 432 into a host logic 434 wrapper, synthesizing the client logic data 432, and/or placing and routing the client logic data 432. Checking the client logic data 432 can include verifying the client logic data 432 complies with one or more criteria of the compute services provider. For example, the client logic data 432 can be analyzed to determine whether interface signals and/or logic functions are present for interfacing to the host logic 434. In particular, the analysis can include analyzing source code and/or running the client logic data 432 against a suite of verification tests. The verification tests can be used to confirm that the configurable logic is compatible with the shell logic. As another example, the client logic data 432 can be analyzed to determine whether the client logic data 432 fits within a designated region of the specified instance type. As another example, the client logic data 432 can be analyzed to determine whether the client logic data 432 includes any prohibited logic functions, such as ring oscillators or other potentially harmful circuits. As another example, the client logic data 432 can be analyzed to determine whether the client logic data 432 has any naming conflicts with the host logic 434 or any extraneous outputs that do not interface with the host logic 434. As another example, the client logic data 432 can be analyzed to determine whether the client logic data 432 attempts to interface to restricted inputs, outputs, or hard macros of the client configurable logic circuit(s) 442. If the client logic data 432 passes the checks of the logic repository service 410, then the client configuration data 436 can be generated. If any of the checks or tests fail, the generation of the client configuration data 436 can be aborted. Generating the client configuration data 436 can include compiling and/or translating source code of the client logic data 432 and the host logic 434 into data that can be used to program or configure the client configurable logic circuit(s) 442. For example, the logic repository service 410 can integrate the client logic data 432 into a host logic 434 wrapper. Specifically, the client logic data 432 can be instantiated in a system design that includes the client logic data 432 and the host logic 434. The integrated system design can be synthesized, using a logic synthesis program, to create a netlist for the system design. The netlist can be placed and routed, using a place and route program, for the instance type specified for the system design. 
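The generation flow, checks first and compilation second, can be sketched as a short pipeline; the stub `synthesize` and `place_and_route` functions below stand in for real logic synthesis and place-and-route tools, and all names are illustrative.

```python
def synthesize(design):        # stand-in for a logic synthesis tool
    return f"netlist({design})"

def place_and_route(netlist):  # stand-in for a place-and-route tool
    return f"bitstream({netlist})"

def generate_client_configuration(client_logic, wrap_in_host_logic, checks):
    """Run the provider's checks, then compile; abort if any check fails."""
    for check in checks:  # e.g., interface, region-fit, prohibited-logic checks
        ok, reason = check(client_logic)
        if not ok:
            raise ValueError(f"generation aborted: {reason}")
    design = wrap_in_host_logic(client_logic)  # integrate into the host logic wrapper
    return place_and_route(synthesize(design))
```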
The placed and routed design can be converted to client configuration data 436 which can be used to program the client configurable logic circuit(s) 442. For example, the client configuration data 436 can be directly output from the place and route program. As one example, the generated client configuration data 436 can include a complete or partial bitstream for configuring all or a portion of the configurable logic of an FPGA. An FPGA can include configurable logic and non-configurable logic. The configurable logic can include programmable logic blocks comprising combinational logic and/or look-up tables (LUTs) and sequential logic elements (such as flip-flops and/or latches), programmable routing and clocking resources, programmable distributed and block random access memories (RAMs), digital signal processing (DSP) bitslices, and programmable input/output pins. The bitstream can be loaded into on-chip memories of the configurable logic using configuration logic (e.g., a configuration access port). The values loaded within the on-chip memories can be used to control the configurable logic so that the configurable logic performs the logic functions that are specified by the bitstream. Additionally, the configurable logic can be divided into different regions which can be configured independently of one another. As one example, a full bitstream can be used to configure the configurable logic across all of the regions and a partial bitstream can be used to configure only a portion of the configurable logic regions. The non-configurable logic can include hard macros that perform a specific function within the FPGA, such as input/output blocks (e.g., serializer and deserializer (SERDES) blocks and gigabit transceivers), analog-to-digital converters, memory control blocks, test access ports, and configuration logic for loading the configuration data onto the configurable logic. The logic repository service 410 can store the generated client configuration data 436 in a logic repository database 450. The logic repository database 450 can be stored on removable or non-removable media, including magnetic disks, direct-attached storage, network-attached storage (NAS), storage area networks (SAN), redundant arrays of independent disks (RAID), magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed by the logic repository service 410. Additionally, the logic repository service 410 can be used to store input files (such as the specifications for the client logic data 432 and the host logic 434) and data about the logic designs and/or the users of the logic repository service 410. The generated client configuration data 436 can be indexed by one or more properties such as a user identifier, an instance type or types, a marketplace identifier, a machine image identifier, and a configurable hardware identifier, for example. The logic repository service 410 can receive an API request 460 to download configuration data. For example, the request 460 can be generated when a user of the compute resources 420 launches or deploys a new instance (e.g., an F1 instance) within the compute resources 420. As another example, the request 460 can be generated in response to a request from an application executing on an operating instance. 
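A gate in front of the download path, anticipating the validation steps detailed in the next paragraph, might be sketched as follows; the request fields, the repository interface, and the `authorized` predicate are all assumptions for illustration.

```python
def handle_download_request(request, repository, authorized):
    """Validate a download request before releasing configuration data."""
    entry = repository.get(request["configuration_id"])
    if entry is None:
        raise LookupError("unknown configuration data")
    if not authorized(request["user_id"], entry):
        raise PermissionError("requestor may not access this configuration")
    if request["instance_type"] != entry["instance_type"]:
        raise ValueError("configuration data incompatible with the target instance type")
    return entry["bitstream"]  # validated data, ready to download
```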
The request 460 can include a reference to the source and/or destination instance, a reference to the configuration data to download (e.g., an instance type, a marketplace identifier, a machine image identifier, or a configurable hardware identifier), a user identifier, an authorization token, and/or other information for identifying the configuration data to download and/or authorizing access to the configuration data. If the user requesting the configuration data is authorized to access the configuration data, the configuration data can be retrieved from the logic repository database 450, and validated configuration data 462 (e.g., a full or partial bitstream) can be downloaded to the requesting instance (e.g., computer device 440). The validated configuration data 462 can be used to configure the configurable logic of the destination instance. The logic repository service 410 can verify that the validated configuration data 462 can be downloaded to the requesting instance. Validation can occur at multiple different points by the logic repository service 410. For example, validation can include verifying that the client logic data 432 is compatible with the host logic 434. In particular, a regression suite of tests can be executed on a simulator to verify that the host logic 434 performs as expected after the client logic data 432 is added to the design. Additionally or alternatively, it can be verified that the client logic data 432 is specified to reside only in reconfigurable regions that are separate from reconfigurable regions of the host logic 434. As another example, validation can include verifying that the validated configuration data 462 is compatible with the instance type to which it is to be downloaded. As another example, validation can include verifying that the requestor is authorized to access the validated configuration data 462. If any of the validation checks fail, the logic repository service 410 can deny the request to download the validated configuration data 462. Thus, the logic repository service 410 can potentially safeguard the security and the availability of the compute resources 420 while enabling a user to customize hardware of the compute resources 420. FIG. 5 illustrates a computing system diagram of a network-based compute service provider 500 that illustrates one environment in which embodiments described herein can be used. By way of background, the compute service provider 500 (i.e., the cloud provider) is capable of delivery of computing and storage capacity as a service to a community of end recipients. In an example embodiment, the compute service provider can be established for an organization by or on behalf of the organization. That is, the compute service provider 500 may offer a “private cloud environment.” In another embodiment, the compute service provider 500 supports a multi-tenant environment, wherein a plurality of clients operate independently (i.e., a public cloud environment). Generally speaking, the compute service provider 500 can provide the following models: Infrastructure as a Service (“IaaS”), Platform as a Service (“PaaS”), and/or Software as a Service (“SaaS”). Other models can be provided. For the IaaS model, the compute service provider 500 can offer computers as physical or virtual machines and other resources. The virtual machines can be run as guests by a hypervisor, as described further below.
The PaaS model delivers a computing platform that can include an operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on the compute service provider platform without the cost of buying and managing the underlying hardware and software. Additionally, application developers can develop and run their hardware solutions on configurable hardware of the compute service provider platform. The SaaS model allows installation and operation of application software in the compute service provider. In some embodiments, end users access the compute service provider 500 using networked client devices, such as desktop computers, laptops, tablets, smartphones, etc. running web browsers or other lightweight client applications. Those skilled in the art will recognize that the compute service provider 500 can be described as a “cloud” environment. The particular illustrated compute service provider 500 includes a plurality of computer devices 502A-502C. While only three computer devices are shown, any number can be used, and large centers can include thousands of computer devices. The computer devices 502A-502C can provide computing resources for executing instances 506A-506C. In one embodiment, the instances 506A-506C are virtual machines. As known in the art, a virtual machine can include logical resources that are emulated using physical components of machine(s) (e.g., computers). In the example of a virtual machine, each of the computer devices 502A-502C can be configured to execute a hypervisor 508 or another type of program configured to enable the execution of multiple instances 506 on a single computer device. Additionally, each of the instances 506 can be configured to execute one or more applications. The applications can include user or non-privileged programs, kernel or privileged programs, and/or drivers. In another embodiment (not shown), the instances can include an operating system and application programs controlled by a single client. Thus, the compute service provider 500 can partition the resources of a given computer device among multiple clients (such as by providing a different virtual machine to each client) and/or provide the full resources of a computer device to a single client. Each of instances 506 can reside within a respective client domain for the respective client (along with client configurable logic hardware). It should be appreciated that although the embodiments disclosed herein are described primarily in the context of virtual machines, other types of instances can be utilized with the concepts and technologies disclosed herein. For instance, the technologies disclosed herein can be utilized with storage resources, data communications resources, and with other types of computing resources. The embodiments disclosed herein might also execute all or a portion of an application directly on a computer system without utilizing virtual machine instances. The computer devices 502A-502C can include a heterogeneous collection of different hardware resources or instance types. Some of the hardware instance types can include configurable hardware that is at least partially configurable by a user of the compute service provider 500. One example of an instance type can include the computer device 502A which is in communication with client configurable logic hardware 504A.
Specifically, the computer device 502A and the client configurable logic hardware 504A can communicate over a local interconnect such as PCIe. Another example of an instance type can include the computer device 502B and client configurable logic hardware 504B. For example, the client configurable logic hardware 504B can be integrated within a multi-chip module or on the same die as a CPU of the computer device 502B. Yet another example of an instance type can include the computer device 502C without any client configurable logic hardware. Thus, hardware instance types with and without configurable logic can be present within the resources of the compute service provider 500. One or more computer devices 520 can be reserved for executing software components for managing the operation of the computer devices 502 and the software instances 506. For example, the computer device 520 can execute a management component 522. A client can access the management component 522 to configure various aspects of the operation of the software instances 506 purchased by the client. For example, the client can purchase, rent or lease instances and make changes to the configuration of the software instances. The configuration information for each of the software instances can be stored as a machine image (MI) 542 on the network-attached storage 540. As a specific example, the MI 542 can describe the information used to launch a VM instance. The MI can include a template for a root volume of the instance (e.g., an OS and applications), launch permissions for controlling which client accounts can use the MI, and a block device mapping which specifies volumes to attach to the instance when the instance is launched. The MI can also include a reference to a configurable hardware image (CHI) 542 which is to be loaded on configurable hardware 504 when the instance is launched. The CHI includes configuration data for programming or configuring at least a portion of the configurable hardware 504. As another specific example, the MI 542 can describe the information used to launch an instance of an operating system directly on one of the computer devices 520. The client can also specify settings regarding how the purchased instances are to be scaled in response to demand. The management component can further include a policy document to implement client policies. An auto scaling component 524 can scale the instances 506 based upon rules defined by the client. In one embodiment, the auto scaling component 524 allows a client to specify scale-up rules for use in determining when new instances should be instantiated and scale-down rules for use in determining when existing instances should be terminated. The auto scaling component 524 can consist of a number of subcomponents executing on different computer devices 502A-502C or other computer devices. The auto scaling component 524 can monitor available computing resources over an internal management network and modify resources available based on need. A deployment component 526 can be used to assist clients in the deployment of new instances 506 of computing resources. The deployment component can have access to account information associated with the instances, such as who is the owner of the account, credit card information, country of the owner, etc. The deployment component 526 can receive a configuration from a client that includes data describing how new instances 506 should be configured. 
For example, the configuration can specify one or more applications to be installed in new instances 506, provide scripts and/or other types of code to be executed for configuring new instances 506, provide cache logic specifying how an application cache should be prepared, and other types of information. The deployment component 526 can utilize the client-provided configuration and cache logic to configure, prime, and launch new instances 506. For example, the deployment component 526 can be invoked when a client launches an instance from a control console, another instance, or a marketplace page. The control console can be a web-based service that provides an interface to a client of the compute service provider 500 so that the client can manage his or her account and access services. As one example, the control console can enable a user to upload MIs and/or CHIs to a private catalog, and images corresponding to a particular MI or CHI can be selected by the user from the private catalog when an instance is to be deployed. The configuration, cache logic, and other information used for launching instances may be specified by a client using the management component 522 or by providing this information directly to the deployment component 526. The instance manager can be considered part of the deployment component. Client account information 528 can include any desired information associated with a client of the multi-tenant environment. For example, the client account information can include a unique identifier for a client, a client address, billing information, licensing information, customization parameters for launching instances, scheduling information, auto-scaling parameters, previous IP addresses used to access the account, a listing of the MI's and CHI's accessible to the client, etc. One or more computer devices 530 can be reserved for executing software components for managing the download of configuration data to configurable hardware 504 of the computer devices 502A-502C. For example, the computer device 530 can execute a logic repository service comprising an ingestion component 532, a library management component 534, and a download component 536. Computer device 530 can implement functions of logic repository service 410. The ingestion component 532 can receive shell logic and configurable logic designs or specifications and generate configuration data that can be used to configure the configurable hardware 504. The library management component 534 can be used to manage source code, user information, and configuration data associated with the logic repository service. For example, the library management component 534 can be used to store configuration data generated from a user's design in a location specified by the user on the network-attached storage 540. In particular, the configuration data can be stored within a configurable hardware image 542 on the network-attached storage 540. Additionally, the library management component 534 can manage the versioning and storage of input files (such as the specifications for the configurable logic and the shell logic) and data about the logic designs and/or the users of the logic repository service. The library management component 534 can index the generated configuration data by one or more properties such as a user identifier, an instance type, a marketplace identifier, a machine image identifier, and a configurable hardware identifier, for example. 
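To tie these records together, a machine image referencing a configurable hardware image, along with an index entry of the kind the library management component 534 might keep, could look roughly like this; all keys and identifier formats are invented for illustration.

```python
# Illustrative machine-image record referencing a configurable hardware image.
machine_image = {
    "root_volume_template": {"os": "example-linux", "applications": ["example-webapp"]},
    "launch_permissions": ["account-1234"],           # accounts allowed to use the MI
    "block_device_mapping": [{"device": "/dev/sdb", "volume_gb": 100}],
    "configurable_hardware_image": "chi-0a1b2c",      # CHI loaded onto configurable hardware 504
}

# Index entry over generated configuration data, keyed by the properties above.
index_entry = {
    "user_id": "user-42",
    "instance_type": "F1.small",
    "marketplace_id": None,
    "machine_image_id": "mi-77aa00",
    "configurable_hardware_id": "chi-0a1b2c",
    "location": "nas://images/chi-0a1b2c.bin",        # assumed storage path scheme
}
```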
The download component 536 can be used to authenticate requests for configuration data and to transmit the configuration data to the requestor when the request is authenticated. For example, agents on the computer devices 502A-B can send requests to the download component 536 when the instances 506 are launched that use the configurable hardware 504. As another example, the agents on the computer devices 502A-B can send requests to the download component 536 when the instances 506 request that the configurable hardware 504 be partially reconfigured while the configurable hardware 504 is in operation. The network-attached storage (NAS) 540 can be used to provide storage space and access to files stored on the NAS 540. For example, the NAS 540 can include one or more computer devices used for processing requests using a network file sharing protocol, such as Network File System (NFS). The NAS 540 can include removable or non-removable media, including magnetic disks, storage area networks (SANs), redundant arrays of independent disks (RAID), magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed over the network 550. The network 550 can be utilized to interconnect the computer devices 502A-502C, the computer devices 520 and 530, and the storage 540. The network 550 can be a local area network (LAN) and can be connected to a Wide Area Network (WAN) 560 so that end users can access the compute service provider 500. It should be appreciated that the network topology illustrated in FIG. 5 has been simplified and that many more networks and networking devices can be utilized to interconnect the various computing systems disclosed herein. FIG. 6 illustrates details of an example system 600 including a peripheral device 610. The peripheral device 610 can be connected to a computer device 620 using a local physical interconnect, such as PCIe. In an alternative embodiment, the peripheral device 610 can be integrated within the hardware of the computer device 620. As one example, the computer device 620 can be one of the plurality of computer devices 502A-502B of the compute service provider 500 of FIG. 5. The computer device 620 can have underlying hardware 622 including one or more CPUs, memory, storage devices, interconnection hardware, etc. The hypervisor or kernel layer 624 can be classified as a type 1 or type 2 hypervisor. A type 1 hypervisor can run directly on the host hardware 622 to control the hardware and to manage guest operating systems. A type 2 hypervisor can run within a conventional operating system environment. Thus, in a type 2 environment, the hypervisor can be a distinct layer running above the operating system and the operating system interacts with the system hardware. Different types of hypervisors include Xen-based, Hyper-V, ESXi/ESX, Linux, etc., but other hypervisors can be used. A host privileged machine 630 (such as Domain 0 of the Xen hypervisor) can be part of the hypervisor or separated therefrom and generally includes device drivers needed for accessing the hardware 622. Host privileged machine 630 can reside on a host domain wherein client machine(s) 640 can each reside on a client domain with a corresponding client configurable logic circuit 617. Client machine(s) 640 can be logical units of isolation within the hypervisor. 
Each client machine 640 can be allocated its own portion (e.g., using time or space division) of the hardware layer's memory, CPU allocation, storage, interconnect bandwidth, etc. to the corresponding client domain. Additionally, each client machine 640 can include a guest virtual machine and its own guest operating system. As such, each client machine 640 can be an abstract portion of capacity designed to support its own virtual machine independent of the other domains. The host privileged machine 630 can be used to perform management services for the client machines 640 and the peripheral device 610. The host privileged machine 630 can communicate with web services (such as a deployment service, a logic repository service 650, and a health monitoring service) of the compute service provider, the client machines 640, and the peripheral device 610. The management services can include services for launching and terminating client machines 640, and configuring, reconfiguring, and tearing down the configurable logic of the peripheral device 610. As a specific example, the host privileged machine 630 can launch a new client machine 640 in response to a request from a deployment service (such as the deployment component 526 of FIG. 5). The request can include a reference to an MI and/or a CHI. The MI can specify programs and drivers to load on the client machine 640 and the CHI can specify configuration data to load on the peripheral device 610. The host privileged machine 630 can initialize the client machine 640 based on the information associated with the MI and can cause the configuration data associated with the CHI to be loaded onto the peripheral device 610. The initialization of the client machine 640 and the peripheral device 610 can occur concurrently so that the time to make the instance operational can be reduced. The host privileged machine 630 can be used to manage programming and monitoring of the peripheral device 610. By using the host privileged machine 630 for this purpose, access to the configuration data and the configuration ports of the peripheral device 610 can be restricted. Specifically, users with lower privilege levels can be restricted from directly accessing the host privileged machine 630. Thus, the configurable logic cannot be modified without using the infrastructure of the cloud service provider, and any third party IP used to program the configurable logic can be protected from viewing by unauthorized users. The host privileged machine 630 can include a client configurable logic (CL) application management module 632 for communicating with web services (such as the logic repository service 650 and a health monitoring service), the peripheral device 610, and the client machines 640. For example, the CL application management module 632 can issue a request to the logic repository service 650 to fetch configuration data in response to a client machine 640 being launched. The CL application management module 632 can communicate with the client machine 640 using shared memory of the hardware 622 or by sending and receiving inter-partition messages over the interconnect connecting the computer device 620 to the peripheral device 610. The messages can include requests by an end-user application 641 to reconfigure or tear-down the peripheral device 610. The CL application management module 632 can issue a request to the logic repository service 650 to fetch configuration data in response to a request to reconfigure the peripheral device 610.
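The launch-time behavior of the CL application management module might be sketched as a hook like the one below; the function names and the shape of the repository call are assumptions, not part of the described system.

```python
def on_client_machine_launch(machine_image, logic_repository, load_via_config_port):
    """Hypothetical launch hook: fetch the CHI named by the machine image
    and load it while the client machine initializes concurrently."""
    chi_id = machine_image.get("configurable_hardware_image")
    if chi_id is not None:
        configuration_data = logic_repository.fetch(chi_id)  # assumed service call
        load_via_config_port(configuration_data)             # program the peripheral
```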
The CL application management module 632 can initiate a tear-down sequence in response to a request to tear down the peripheral device 610. The CL application management module 632 can perform watchdog related activities to determine whether the communication path to the client machine 640 is functional. CL configuration module 634 can access the configuration port 612 (e.g., a configuration access port) of the peripheral device 610 so that client configuration data can be loaded onto the peripheral device 610. For example, the CL configuration module 634 can send a command or commands to the configuration port 612 to perform a full or partial configuration of the peripheral device 610. The CL configuration module 634 can send the client configuration data (e.g., a bitstream) to the configuration port 612 so that the configurable logic can be programmed according to the client configuration data. CL configuration module 634 can also configure shell logic 619 used to encompass client configurable logic 616 and isolate the client configurable logic 616 to a user domain with a corresponding client machine 640. As disclosed herein, shell logic 619 can be implemented on a same logic device (e.g., FPGA) as the client configurable logic 616. Shell logic 619 can facilitate interaction between client machine 640 and/or host privileged machine 630 and the client configurable logic 616. Management driver 636 can be used for communicating over the physical interconnect connecting the computer device 620 to the peripheral device 610. The management driver 636 can encapsulate commands, requests, responses, messages, and data originating from the host privileged machine 630 for transmission over the physical interconnect. Additionally, the management driver 636 can de-encapsulate commands, requests, responses, messages, and data sent to the host privileged machine 630 over the physical interconnect. Specifically, the management driver 636 can communicate with the management block 613 of the peripheral device 610. For example, the management block 613 can include physical or virtual function(s) mapped to an address range during an enumeration of devices connected to the physical interconnect. The management driver 636 can communicate with the management block 613 by addressing commands to the address range assigned to the management block 613. Mailbox 611 can be similar to mailbox 390 and can be used to pass data between management block 613 and application block 615 while still maintaining domain separation. The host privileged machine 630 can include a CL management and monitoring module 638. The CL management and monitoring module 638 can monitor and analyze commands occurring on the physical interconnect to determine a health of the peripheral device 610 and/or to determine usage characteristics of the peripheral device 610. The peripheral device 610 can include non-configurable hard macros and configurable logic. The hard macros can perform specific functions within the peripheral device 610, such as input/output blocks (e.g., serializer and deserializer (SERDES) blocks and gigabit transceivers), analog-to-digital converters, memory control blocks, test access ports, and a configuration port 612. The CL can be programmed or configured by loading configuration data onto the peripheral device 610. For example, the configuration port 612 can be used for loading the configuration data.
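Loading client configuration data through the configuration port could look roughly like this; the port methods and the 4096-byte block size are illustrative choices, not a documented interface.

```python
def stream_chunks(data: bytes, size: int = 4096):
    for offset in range(0, len(data), size):
        yield data[offset:offset + size]

def configure_client_logic(configuration_port, client_bitstream, partial=True):
    """Push client configuration data through the configuration port,
    as the CL configuration module 634 is described as doing."""
    configuration_port.begin(partial=partial)  # full vs. partial reconfiguration
    for block in stream_chunks(client_bitstream):
        configuration_port.write(block)
    configuration_port.finish()
```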
As one example, configuration data can be stored in a memory (such as a Flash memory) accessible by the configuration port 612 and the configuration data can be automatically loaded during an initialization sequence (such as during a power-on sequence) of the peripheral device 610. Additionally, the configuration port 612 can be accessed using an off-chip processor or an interface within the peripheral device 610. The peripheral interface logic 614 can include circuitry (e.g., hard macros and/or configurable logic) for signaling on the physical interconnect and implementing a communications protocol. The communications protocol specifies the rules and message formats for communicating over the interconnect. The application block 615 can be used to communicate with drivers of the client machines 640. Specifically, the application block 615 can be a physical or virtual function mapped to an address range during an enumeration of devices connected to the physical interconnect. The application drivers can communicate with the application block 615 by addressing commands to the address range assigned to the application block 615. Specifically, the application block 615 can communicate with a configurable logic management driver 642 to exchange commands, requests, responses, messages, and data. The application block 615 can communicate with a configurable data driver 643 to exchange commands, requests, responses, messages, and data. The shell logic 619 can shield the interfaces of at least some of the hard macros from the end-users so that the end-users have limited access to the hard macros and to the physical interconnect. For example, the shell logic 619 can include the configuration port 612, the management block 613, the peripheral interface 614, and the application block 615. The end-users can cause the client configurable logic 616 to be loaded on the peripheral device 610, and can communicate with the client configurable logic 616 from the client machines 640 (via the application block 615). The client machine 640 can include functionality for interfacing an end-user application 641 to the peripheral device 610. Specifically, the client machine 640 can include a CL-Application API 644 for providing the end-user application 641 with access to the peripheral device 610. The CL-Application API 644 can include a library of methods or functions for communicating with the peripheral device 610 and the host privileged machine 630. For example, the end-user application 641 can send a command or data to the client configurable logic 616 by using an API of the CL-Application API 644. In particular, the API of the CL-Application API 644 can interface with the configurable logic (AL) data driver 643 which can generate a command targeted to the application block 615 which can communicate with the client configurable logic 616. In this manner, the end-user application 641 can cause the client configurable logic 616 to receive, process, and/or respond with data to potentially accelerate tasks of the end-user application 641. As another example, the end-user application 641 can send a command or data to the host privileged machine 630 by using an API of the CL-Application API 644. In particular, the API of the CL-Application API 644 can interface with the AL management driver 642 which can generate a command targeted to the application block 615.
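From the end-user application's side, these two paths (data to the accelerator, management requests toward the host) might be wrapped as below; the class, its method names, and the driver interfaces are hypothetical illustrations only.

```python
class ClApplicationApi:
    """Hypothetical client-side wrapper over the application block's
    address range; method names are illustrative, not a documented API."""

    def __init__(self, data_driver, management_driver):
        self._data = data_driver        # e.g., the AL data driver path
        self._mgmt = management_driver  # e.g., the AL management driver path

    def accelerate(self, payload: bytes) -> bytes:
        self._data.write(payload)  # command targeted at the application block
        return self._data.read()   # response from the client configurable logic

    def request_reconfigure(self, chi_id: str) -> None:
        # routed via the management path toward the host privileged machine
        self._mgmt.send({"op": "reconfigure", "chi": chi_id})
```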
In this manner, the end-user application 641 can cause the host privileged machine 630 to provide operational data or other information about the peripheral device 610 and/or to request that the client configurable logic 616 be reconfigured. The client machine 640, in conjunction with the hypervisor or kernel 624, can be used to limit the operations available to the end-user application 641 over the physical interconnect. For example, the cloud service provider can provide the CL management driver 642, the CL data driver 643, and the CL-Application API 644 (such as by associating the files with a machine image). These components can be protected from modification by only permitting users and services having a higher privilege level than the end-user to write to the files. The CL management driver 642 and the CL data driver 643 can be restricted to using only addresses within the address range of the application block 615. Additionally, an input/output memory management unit (I/O MMU) can restrict interconnect commands to be within the address ranges of the application block 615 or the management block 613. FIG. 7 illustrates a system diagram 700 showing an example of a cloud infrastructure system including a logic repository service for supplying configuration data to a client configurable logic circuit. System 700 can be similar to system 400. Illustrated in system 700 is a cloud infrastructure system 720 that can receive, from a partner 702 or other client, client logic data 732 that can take the form of a binary file or source code for a configuration of a configurable logic circuit(s) 742. Along with the client logic data 732, the cloud infrastructure system 720 may receive ancillary data 733. The ancillary data 733 can take the form of additional data that may be associated with the client logic data 732. For example, the ancillary data 733 may contain client-requested configuration data for configurable logic circuit(s) 742 when configured according to client logic data 732 (such as a desired clock speed, for example). Ancillary data 733 may contain an identifier, such as a PCIe device identifier, a version of the logic data 732, billing information, customer identification information, or other information. Ancillary data 733 may be presented to partner 702 as a form to be filled out by partner 702. For example, ancillary data 733 may be provided by partner 702 via an extensible markup language (XML) file, via telephone, in person, or through other means. In certain embodiments, a menu of possible values for variables within ancillary data 733 can be presented to partner 702 for the partner to select one or more of the possible values. For example, various combinations of clock speeds of a logic device of configurable logic circuit(s) 742 may be presented to partner 702. The presented possible clock speeds may be predefined or tested by owners of the cloud infrastructure system 720 to determine which combinations of clock speeds are acceptable or compatible with a given logic device of the cloud infrastructure system 720. In certain embodiments, the presented clock speed(s) may be presented as tiered values, wherein each tier corresponds to a respective level of service that may be provided by the cloud infrastructure system for implementing client logic data 732. For example, each tier may correspond to a performance level that may be tied to a corresponding level of compensation required for implementing the performance level for the client logic data 732.
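One plausible shape for the ancillary-data exchange, assuming an invented XML schema and invented tier values, is sketched below: the partner's requested clock is checked against a menu of provider-tested tiers.

import xml.etree.ElementTree as ET

ANCILLARY_XML = """
<ancillary>
  <pcie_device_id>0x1042</pcie_device_id>
  <logic_version>1.3</logic_version>
  <requested_clock_mhz>250</requested_clock_mhz>
</ancillary>
"""

# Tiers pre-tested by the infrastructure owner; each maps to a service level.
APPROVED_TIERS_MHZ = {125: "basic", 250: "standard", 500: "premium"}

root = ET.fromstring(ANCILLARY_XML)
requested = int(root.findtext("requested_clock_mhz"))

if requested in APPROVED_TIERS_MHZ:
    print(f"{requested} MHz accepted at tier '{APPROVED_TIERS_MHZ[requested]}'")
else:
    # Offer the menu of selectable values rather than rejecting outright.
    print(f"unsupported clock; selectable tiers: {sorted(APPROVED_TIERS_MHZ)}")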
Similarly, different input/output options, logic device(s), device version numbers, compiler version numbers, priority assignments, etc. may correspond to a performance level. Sets of client configuration data 760 can be linked to an associated one of manifests 762 in a variety of manners. For example, each of manifests 762 can include a pointer to a linked one of the sets of client configuration data 760, and vice versa. In certain embodiments, the data structure used to store sets of client configuration data 760 and manifests 762 can be used to determine links between them. For example, a linked manifest may be located a predetermined distance within memory space from the corresponding set of configuration data. A data structure may have pairs of linked containers, each pair including a set of client configuration data and a manifest. In certain embodiments, a manifest template can be used to generate a completed manifest. The manifest template can include a number of fields that may be populated with information. The fields of the template can be arranged in a certain order, and the order may not be user reconfigurable. Thus, a template can provide a standardized shell that can be used for a plurality of sets of client configurable data. The template may include user-selected options within each field, in any combination. For example, a field may correspond to a number of clock speeds at which a logic device may operate. The user may be able to select which clock speed they wish to use. As another example, several fields may be linked to a service level that a user may select from a plurality of service values. Depending on the service level, clocks, priorities, or other fields within the manifest may be adjusted or set. Logic repository service module 710 of cloud infrastructure system 720 can receive the client logic data 732 and ancillary data 733 from partner 702. Logic repository service module 710 can be implemented by, for example, host access device 122 of FIG. 1. Using the client logic data 732 and ancillary data 733, the logic repository service module 710 can generate a corresponding manifest 737 and client configuration data 736. Client configuration data 736 can be similar to client configuration data 436. The manifest 737 can contain data corresponding to the client configuration data 736. Logic repository database 750 can include various sets of client configuration data 760, each associated with a manifest 762. The sets stored within logic repository database 750 can correspond to any number of clients of cloud infrastructure system 720, in any combination. In certain embodiments, a manifest can be human-readable and received from a partner, for example. Information can be provided to the partner in a menu format for the partner to select one from a discrete number of selectable options for each of several attributes. Data can be generated from the information, and that data may be machine-readable. For example, the data may contain a value of a register to set a clock speed. A value of an attribute within a manifest can be populated with the value of the register. In certain embodiments, logic repository service 710 may compile source code provided by partner 702 to generate client configuration data 736. Client configuration data 736 can be compiled to include a signature value. The signature can be used to verify that the intended client configuration data was used to configure configurable logic circuit(s), as will be further explained herein.
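The paired-container linking can be sketched as follows; the dataclass names and fields are assumptions, chosen only to show how a manifest can be recovered from its position alongside the configuration data rather than through an explicit pointer.

from dataclasses import dataclass


@dataclass
class Manifest:
    shell_version: str
    clock_recipe: str


@dataclass
class RepositoryEntry:
    configuration_data: bytes  # e.g., a compiled bitstream
    manifest: Manifest         # linked by co-location within the pair


repository = [
    RepositoryEntry(b"<bitstream one>", Manifest("2.1", "A1")),
    RepositoryEntry(b"<bitstream two>", Manifest("2.1", "B0")),
]

# The linked manifest is recovered from the structure itself, not a pointer.
for entry in repository:
    print(len(entry.configuration_data), entry.manifest.clock_recipe)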
In certain embodiments, logic repository service 710 may receive, from partner 702, precompiled hardware device image(s) that may each contain a respective signature. Manifest 737 can include several attributes. For example, the following is a non-exhaustive list of attributes that may be included in manifest 737: manifest_format_version can include a version number of the manifest to support future versions; pci_vendor_id can include a vendor identifier for use with PCI identification of a configuration of client configurable logic; pci_device_id can include a device identifier for use with PCI identification of a configuration of client configurable logic; pci_subsystem_id can include a subsystem identifier for use with PCI identification of a configuration of client configurable logic; pci_subsystem_vendor_id can include a subsystem vendor identifier for use with PCI identification of a configuration of client configurable logic; dcp_hash can include an expected hash or checksum value to verify a configuration of client configurable logic (an example can be an sha256sum value); shell_version can include a version of shell logic compatible with a client configurable logic configuration; dcp_file_name can include a filename or other identifier of the corresponding client configurable logic configuration; hdk_version can include a hardware configuration version of compatible hardware configurable logic for client configuration data; date can include a date on which the corresponding client configuration data was created, for example; clock_recipe can include an identifier indicating one or more clocks for configuring a logic device that is to be configured using the corresponding client configuration data; and CL_signature can include a unique signature for the corresponding client configuration data. Certain logic devices may include one or more clock islands or domains. For example, certain FPGAs may include register(s) that may configure the FPGA to operate at corresponding clock(s) depending on the configuration of the register(s). Manifest 737 may include a value to populate the register(s) to set up a logic device to operate at desired clock values. Certain logic devices may be driven by an external clock in order to set their operating frequency (or frequencies). Still other logic devices may include a combination of register(s) and externally provided clock(s). Manifest 737 may include desired clock frequenc(ies) and/or register values for certain logic devices, in any combination. When a certain configuration of client configurable logic within configurable logic circuit(s) 742 is requested (such as by a client of cloud infrastructure system 720), logic repository service 710 can provide validated client configuration data 762 (which may be a validated binary of the desired configuration of a programmable logic device) and accessory data 763. Accessory data 763 can be similar to manifest 737 and/or may be generated using manifest 737. For example, manifest 737 may be logic device agnostic. As one example, manifest 737 may include desired frequenc(ies). Logic repository service 710, having knowledge of a target logic device, may generate register value(s) within accessory data 763 corresponding to the desired frequenc(ies). Logic repository service 710 can provide accessory data 763 to computer device 770. CPU 744 and/or component(s) of client configurable logic circuit(s) 742 may configure programmable logic device(s) to operate at frequenc(ies) specified by accessory data 763.
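A hypothetical manifest instance using the attribute names listed above might look like the following; all field values are invented, and dcp_hash is computed the way the text suggests (an sha256sum-style digest over the configuration image).

import hashlib
import json

dcp_bytes = b"stand-in for a compiled client configuration image"

manifest = {
    "manifest_format_version": "1",
    "pci_vendor_id": "0x1D0F",
    "pci_device_id": "0x1042",
    "pci_subsystem_id": "0x0001",
    "pci_subsystem_vendor_id": "0x1D0F",
    "dcp_hash": hashlib.sha256(dcp_bytes).hexdigest(),  # sha256sum-style value
    "shell_version": "0x04151701",
    "dcp_file_name": "example.dcp",
    "hdk_version": "1.4.0",
    "date": "2020-07-14",
    "clock_recipe": "A1",
    "CL_signature": "0xDEADBEEF",
}
print(json.dumps(manifest, indent=2))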
As disclosed, client logic data 732, client configuration data 736, and/or validated client configuration data 762 may include a signature therein. The signature may be output by a logic device after being configured according to the corresponding validated client configuration data 762 and/or may be read from the logic device after being configured. Signature 767 is provided as an example signature output by configurable logic circuit(s) 742 post configuration. Signature 767′ can match or otherwise correspond to signature 767 and can be stored as data accompanying the corresponding client configuration data. Logic repository service 710 may also provide accessory data 763 that may include, for example, a signature that a corresponding validated client configuration data 762 is expected to output. The signature 767′ provided via accessory data 763 can be matched against a signature 767 embedded within the configurable logic circuit(s) 742 as a result of being configured using validated client configuration data 762. For example, CPU 744 and/or logic repository service 710 may verify that configurable logic circuit(s) 742, post configuration, output or contain the expected signature. If not, this could indicate that the configurable logic circuit(s) 742 were configured using the incorrect client configuration data and/or that the configuration became corrupted. If so, a new image may be loaded into the configurable logic circuit(s) 742 and/or various other corrective actions may be performed. In certain embodiments, a signature can be used to verify the integrity of corresponding client configuration data (such as a client bitstream or image). For example, the signature may be embedded within the client bitstream or image such that, when a hardware device is correctly configured with the client bitstream or image, the hardware device will output or otherwise make available the signature. In this manner, the signature can act as a checksum. If the hardware device is incorrectly configured (for example, an incorrect gate is configured or an incorrect interconnection is configured), the signature may become corrupted. Thus, the signature may be used to validate that a specific hardware configuration of a hardware device matches an intended configuration. Computer device 770 includes CPU 744, interface 746, and configurable logic circuit(s) 742 that may correspond respectively to CPU 222, host interface 226, and client configurable logic circuit 225. Client configurable logic circuit 225 can include one or more host physical logic devices (e.g., FPGAs) that may communicate with CPU 744 via interface 746. Interface 746 may be a PCI or similar interface on which configurable logic circuit(s) 742 may present themselves as one or more devices using, for example, a vendor, device, or subsystem identifier, in any combination. The identifier may be used by a corresponding virtual machine to load driver(s) for communicating with the configurable logic circuit(s), for example. These drivers can be similar to management driver 642 and/or data driver 643 of FIG. 6. However, the identifier(s) of the host physical logic device (e.g., FPGA) may be presented via interface 746. Thus, unintended driver(s), or no drivers, may be loaded by a client virtual machine communicating with configurable logic circuit(s) 742, as the identifier of the host physical logic device may be provided instead of the identifier of the client hardware device configured within the host physical logic device.
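The verify-then-repair loop described above reduces to a few lines. In this hedged sketch, load_image and the signature register are stand-ins, and SHA-256 stands in for whatever signature scheme is actually embedded; only the compare-and-reconfigure control flow is the point.

import hashlib


def load_image(device, image):
    # Stand-in for (re)configuring the logic device; afterwards the device
    # exposes the signature embedded in the image via a readable register.
    device["signature_register"] = hashlib.sha256(image).digest()


def verify_and_repair(device, expected, image):
    if device["signature_register"] == expected:
        return True  # intended image, not corrupted
    load_image(device, image)  # corrective action: reload and re-check
    return device["signature_register"] == expected


image = b"validated client configuration data"
expected = hashlib.sha256(image).digest()
device = {"signature_register": b"garbage left by a corrupted configuration"}
print(verify_and_repair(device, expected, image))  # True after the reload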
In certain embodiments, a hypervisor of computer device 770 (not shown), which may be similar to hypervisor 624, may intercept identifier(s) provided to a client virtual machine and provide, to the client virtual machine, identifier(s) specified in accessory data 763 (and/or manifest 737). The interception can include identifying that a client device has requested an identifier from a logic device, preventing a host logic device from responding, and/or providing client identifier(s) to the client virtual machine. In certain embodiments, the hypervisor may populate specified addresses within PCI configuration space using the client identifier(s). FIG. 8 illustrates an example cloud infrastructure system 800 that can be similar in at least some respects to cloud infrastructure system 720. Cloud infrastructure system 800 can include a computer device 804, which can implement logic repository service 710. Computer device 804 can be coupled to client device 802. Client device 802 can be similar to partner device 108 or client device 106. Client device 802 can be used by a client to provide information for client configuration data 812 to computer device 804. The information for client configuration data 812 can include source code (e.g., Hardware Design Language (HDL) source code) for configuration of a client configurable logic circuit within peripheral device 806. The information for client configuration data 812 can, in certain embodiments, include a list of desired functions/features such that computer device 804 may provide functions or hardware configurations to meet the desired functions/features. For example, computer device 804 may include or have access to a library of prepopulated hardware configurations or configuration modules that may be selected by a client via client device 802. Computer device 804 may, using any combination of selected or provided source code, selected features, or compiled data, generate or store compiled configuration data 814. Compiled configuration data 814 can be referred to as an image/binary/set and can include compiled data segregated into distinct blocks or portions 814a-814e. Compiled configuration data 814 may also include an embedded signature, which may also be segregated into blocks or portions 816a-816e, wherein each portion of the signature can correspond to a respective portion of the compiled configuration data. Each portion (816a-816e) of the signature can be used to verify the respective portion (814a-814e) of compiled configuration data 814. Computer device 804 may generate and embed the signature while compiling source code obtained from the client. In certain embodiments, the signature can be embedded into an already compiled image that may be provided by a client. The signature may be used to uniquely identify and/or verify loading of a hardware component, as will be disclosed later. Generation of the signature can utilize checksum and/or cyclic redundancy check (CRC) techniques, for example, such that each portion (814a-814e) of compiled configuration data 814 can be checksummed to obtain a corresponding portion of a signature. In certain embodiments, a checksum, secure hash, CRC, or similar technique can be performed on all of compiled configuration data 814 instead of on individual portions. The signature can include a combination of uniquely identifying and/or verification (e.g., checksum/CRC) data.
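The per-portion checksumming can be illustrated directly. In the sketch below, zlib.crc32 stands in for whichever checksum/CRC is actually used, and the five-way split is an assumed layout mirroring portions 814a-814e and 816a-816e.

import zlib


def portion_signatures(image, portions=5):
    step = -(-len(image) // portions)  # ceiling division
    blocks = [image[i:i + step] for i in range(0, len(image), step)]
    return [zlib.crc32(block) for block in blocks]


compiled_image = bytes(range(200))                       # stands in for 814a-814e
embedded_signature = portion_signatures(compiled_image)  # stands in for 816a-816e

# A corrupted byte changes only its own portion's signature, which is what
# gives the per-portion granularity described in the text.
corrupted = bytearray(compiled_image)
corrupted[150] ^= 0xFF
mismatches = [
    i for i, (got, want) in enumerate(
        zip(portion_signatures(bytes(corrupted)), embedded_signature))
    if got != want
]
print(mismatches)  # [3]: only the fourth portion fails verification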
In certain embodiments, the signature can be used to identify a class or type of hardware component for implementing a client configuration data image. Signature 810 can also be generated by computer device 804, for example. Signature 810 can be stored within metadata corresponding to compiled configuration data 814, such as within manifest 808. Signature 810 can include portions 810a-810e, each corresponding to portions 816a-816e of a signature 816 of compiled configuration data 814. Signature 810 may be identical to a signature 816 of compiled configuration data 814. In certain embodiments, signature 810 may be a counterpart of a signature 816 of compiled configuration data 814. For example, signature 810 may include results of performing an operation on portions 816a-816e of a signature 816 of compiled configuration data 814. Thus, signature 810 can be used in conjunction with a data transformation performed on a signature 816 of compiled configuration data 814, which can further be used to ensure proper setup of a hardware component. Manifest 808 can be similar to manifest 737. Signature 810 can be similar to signature 767′. Signature 816 can be similar to signature 767. Peripheral device 806 can include client configurable logic hardware as disclosed herein. Peripheral device 806 can be similar to peripheral device 210. Client configurable logic hardware of peripheral device 806 can be configured according to compiled configuration data 814. After the client configurable logic hardware is configured, it may output a signature 816 of compiled configuration data 814. Outputting the signature 816 can include writing the signature 816 to a register of a component of peripheral device 806 or configuring the component to output the signature via a data stream (such as a serial or parallel communication), for example. As should be understood, signature 816 may not be output exactly by a component of peripheral device 806. The output signature may be a portion of signature 816 or otherwise derived from signature 816. For example, the output signature may be a result of a data transformation performed on signature 816. In certain embodiments, the output signature may include a time component such that a certain stimulus is output at a certain time. The output signature may also be output in response to receiving, from a host computer device, a request for the signature to be output. In certain embodiments, signature 816 can be generated by shell logic coupled to client configurable logic hardware that can be configured by corresponding client configuration data. For example, shell logic may generate signature 816 by performing one or more operations on configured client configurable logic hardware coupled to the shell logic. The one or more operations can include performing CRC, checksum, or other integrity checking operations on the configured logic. The one or more operations can generate a signature that may identify and/or verify a specific configuration of client configurable logic to ensure that the client configurable logic has been configured according to an intended set of client configuration data and/or properly configured according to a set of client configuration data. Each of portions 816a-816e can correspond to a respective portion of client configuration data and/or area of client configurable logic hardware and can be generated by shell logic.
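The counterpart relationship between signature 810 and signature 816 can be modeled as a data transformation, as sketched below; the choice of SHA-256 as that transformation is an assumption for illustration.

import hashlib

embedded_portions = [b"816a", b"816b", b"816c", b"816d", b"816e"]  # from the image


def counterpart(portions):
    # The transformation linking 816 to 810; SHA-256 is an assumption here.
    return [hashlib.sha256(p).digest() for p in portions]


manifest_signature = counterpart(embedded_portions)  # stored as 810a-810e

device_output = embedded_portions  # read back after configuration
print(counterpart(device_output) == manifest_signature)  # True only for the intended image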
Portions 816a-816e can, as disclosed herein, be used to provide greater granularity when determining an error with a configuration of client configurable logic hardware. Computer device 804 (or any other device within a privileged/host domain) can receive a signature generated by the peripheral device as a result of being configured by compiled configuration data 814. The signature received from peripheral device 806 can be compared to signature 810 stored within manifest 808 corresponding to compiled configuration data 814. As one example, signature 816 may be output by peripheral device 806 and compared to signature 810. If signature 810 matches signature 816, then computer device 804 may determine that peripheral device 806 was configured with the intended compiled configuration data (as opposed to another configuration data image). Computer device 804 may also determine that peripheral device 806 has been correctly configured as a result of being configured by compiled configuration data 814. For example, the signature received from peripheral device 806 can be generated by checksumming, hashing, and/or CRC checking one or more portions of logic used to configure peripheral device 806. This information can be matched to signature 810, for example, to determine whether peripheral device 806 became corrupted during configuration or whether a failure has occurred. The signature can also be used to identify which portion of a configuration has become corrupted. Manifest 808 can be stored within a repository database, such as logic repository database 750. Signatures 816 and/or 810 may also indicate a device vendor or type that is intended for compiled configuration data 814. For example, signatures 816 and/or 810 may include a device identifier or other similar information. Such information may be output by a hardware component (such as an FPGA) used to implement client configurable logic hardware. Thus, signatures 816 and/or 810 can be used to verify that a correct device manufacturer, model, or type was selected for loading of compiled configuration data 814, and/or to select such a device. When stored in a database, signatures 816 and/or 810 can be used to group a plurality of images of compiled configuration data and/or manifests to expedite selection and loading of certain peripheral devices, for example. For example, signatures 816 and/or 810 may be grouped according to client such that, when a client requests loading of a certain image, the signature can be used to expedite identification of an image stored within a repository for loading. FIG. 9 illustrates a flowchart 900 including features of the disclosure according to certain embodiments. Features of flowchart 900 can be utilized by the system 700 illustrated in FIG. 7. At 902, a client can provide information for configuration of client configurable logic hardware. For example, the client can provide source code, precompiled binary data, and/or selection criteria of precompiled function(s). The information may include signature information for later verification of configuration of a hardware component using the information. At 904, hardware component(s) can optionally be identified that may be configured according to the information for configuration of client configurable hardware logic of 902. The components can be identified according to manufacturer, type (e.g., FPGA, CPLD, etc.), model, version, etc.
At 906, a signature can be generated based on the information for configuration of client configurable hardware logic and/or a binary image file compiled using that information. As disclosed herein, the signature may also be generated or provided by a client along with the information for configuration of client configurable hardware logic. The signature can include information used to identify the hardware components of 904. The signature may include several portions corresponding to portions of the information for configuration of client configurable hardware logic and/or the compiled binary image. The signature may be generated using checksum and/or CRC techniques. At 908, the signature can be embedded into the compiled client configuration data image. Embedding the signature can include, during compiling, including a function that outputs the signature, populates a register, etc. At 910, the signature (or a corresponding signature) can be stored as metadata (e.g., a manifest) corresponding to the compiled client configuration data. For example, the signature can be stored within a manifest, such as manifest 808. At 912, the hardware component can be configured according to the compiled client configuration data. Configuration can include bitstreaming binary information and/or otherwise loading a binary image into the hardware component. After configuration, the hardware component may output the signature. In certain embodiments, the signature can be output to a verification circuit operating within a host or privileged domain. At 914, the signature can be used, by the verification circuit, to verify correct configuration of the hardware component according to the compiled client configuration data. Verification can include comparing the output signature to the signature stored at 910, for example. If the two signatures correspond, the hardware component can be determined to have been correctly loaded (e.g., with the intended image and/or without corruption). If not, one or more corrective actions may be performed. The corrective actions can include notifying a virtual machine coupled to and included in a same domain as the hardware component, isolating the hardware component, and/or reconfiguring the hardware component. In certain embodiments, a signature (such as a signature stored as metadata) can be used to determine which corrective actions are to be performed and/or for which detected error modes. FIG. 10 illustrates an example of a computer system 1000. Functionality and/or several components of the computer system 1000 may be used, without limitation, with other features disclosed elsewhere in this disclosure. In certain examples, the computer system 1000 may include processing logic 1002, a configuration module 1004, a management module 1006, a bus interface module 1008, memory 1010, and a network interface module 1012. These modules may be hardware modules, software modules, or a combination of hardware and software. In certain instances, modules may be interchangeably used with components or engines, without deviating from the scope of the disclosure. The computer system 1000 may include additional modules, not illustrated here, such as components discussed with respect to the nodes disclosed in FIG. 11. In some implementations, the computer system 1000 may include fewer modules. In some implementations, one or more of the modules may be combined into one module.
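Stepping back to flowchart 900, the path from steps 902 through 914 compresses into a short sketch; each function below is a placeholder for the corresponding numbered step, not a real toolchain call, and SHA-256 is again an assumed signature scheme.

import hashlib


def compile_with_signature(source):
    signature = hashlib.sha256(source).digest()  # step 906: generate signature
    return source + signature, signature         # step 908: embed in the image


def configure_and_read_back(image):
    # Step 912: after configuration the hardware outputs the embedded signature.
    return image[-32:]


source = b"client HDL, precompiled binary, or selected functions"  # step 902
image, stored = compile_with_signature(source)
manifest = {"CL_signature": stored.hex()}  # step 910: store as metadata

output = configure_and_read_back(image)
if output == bytes.fromhex(manifest["CL_signature"]):  # step 914: verify
    print("correctly loaded")
else:
    print("corrective action: notify, isolate, or reconfigure")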
One or more of the modules may be in communication with each other over a communication channel 1010. The communication channel 1010 may include one or more busses, meshes, matrices, fabrics, a combination of these communication channels, or some other suitable communication channel. The processing logic 1002 may include application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), systems-on-chip (SoCs), network processing units (NPUs), processors configured to execute instructions, or any other circuitry configured to perform logical, arithmetic, and floating point operations. Examples of processors that may be included in the processing logic 1002 may include processors developed by ARM®, MIPS®, AMD®, Intel®, Qualcomm®, and the like. In certain implementations, processors may include multiple processing cores, wherein each processing core may be configured to execute instructions independently of the other processing cores. Furthermore, in certain implementations, each processor or processing core may implement multiple processing threads executing instructions on the same processor or processing core, while maintaining logical separation between the multiple processing threads. Such processing threads executing on the processor or processing core may be exposed to software as separate logical processors or processing cores. In some implementations, multiple processors, processing cores, or processing threads executing on the same core may share certain resources, such as, for example, busses, level 1 (L1) caches, and/or level 2 (L2) caches. The instructions executed by the processing logic 1002 may be stored on a computer-readable storage medium, for example, in the form of a computer program. The computer-readable storage medium may be non-transitory. In some cases, the computer-readable medium may be part of the memory 1010. The memory 1010 may include either volatile or non-volatile types of memory, or both. The memory 1010 may, for example, include random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and/or some other suitable storage media. In some cases, some or all of the memory 1010 may be internal to the computer system 1000, while in other cases some or all of the memory may be external to the computer system 1000. The memory 1010 may store an operating system comprising executable instructions that, when executed by the processing logic 1002, provide the execution environment for executing instructions providing networking functionality for the computer system 1000. The memory may also store and maintain several data structures and routing tables for facilitating the functionality of the computer system 1000. In some implementations, the configuration module 1004 may include one or more configuration registers. Configuration registers may control the operations of the computer system 1000. In some implementations, one or more bits in the configuration register can represent certain capabilities of the computer system 1000. Configuration registers may be programmed by instructions executing in the processing logic 1002, and/or by an external entity, such as a host device, an operating system executing on a host device, and/or a remote device. The configuration module 1004 may further include hardware and/or software that control the operations of the computer system 1000.
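The capability-bit idea admits a compact illustration; the bit assignments below are invented, since the text only says that bits of a configuration register can represent capabilities of the computer system 1000.

from enum import IntFlag


class Capability(IntFlag):
    # Invented bit assignments; the text only says bits represent capabilities.
    NETWORK_OFFLOAD = 1 << 0
    CRYPTO = 1 << 1
    SRIOV = 1 << 2


# A host device, local instructions, or a remote entity could program this.
config_register = Capability.NETWORK_OFFLOAD | Capability.SRIOV

print(bool(config_register & Capability.SRIOV))   # True: capability enabled
print(bool(config_register & Capability.CRYPTO))  # False: capability disabled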
In some implementations, the management module 1006 may be configured to manage different components of the computer system 1000. In some cases, the management module 1006 may configure one or more bits in one or more configuration registers at power up, to enable or disable certain capabilities of the computer system 1000. In certain implementations, the management module 1006 may use processing resources from the processing logic 1002. In other implementations, the management module 1006 may have processing logic similar to the processing logic 1002, but segmented away or implemented on a different power domain than the processing logic 1002. The bus interface module 1008 may enable communication with external entities, such as a host device and/or other components in a computing system, over an external communication medium. The bus interface module 1008 may include a physical interface for connecting to a cable, socket, port, or other connection to the external communication medium. The bus interface module 1008 may further include hardware and/or software to manage incoming and outgoing transactions. The bus interface module 1008 may implement a local bus protocol, such as Peripheral Component Interconnect (PCI) based protocols, Non-Volatile Memory Express (NVMe), Advanced Host Controller Interface (AHCI), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Serial AT Attachment (SATA), Parallel ATA (PATA), some other standard bus protocol, or a proprietary bus protocol. The bus interface module 1008 may include the physical layer for any of these bus protocols, including a connector, power management, and error handling, among other things. In some implementations, the computer system 1000 may include multiple bus interface modules for communicating with multiple external entities. These multiple bus interface modules may implement the same local bus protocol, different local bus protocols, or a combination of the same and different bus protocols. The network interface module 1012 may include hardware and/or software for communicating with a network. This network interface module 1012 may, for example, include physical connectors or physical ports for wired connection to a network, and/or antennas for wireless communication to a network. The network interface module 1012 may further include hardware and/or software configured to implement a network protocol stack. The network interface module 1012 may communicate with the network using a network protocol, such as, for example, TCP/IP, Infiniband, RoCE, Institute of Electrical and Electronics Engineers (IEEE) 802.11 wireless protocols, User Datagram Protocol (UDP), Asynchronous Transfer Mode (ATM), token ring, frame relay, High Level Data Link Control (HDLC), Fiber Distributed Data Interface (FDDI), and/or Point-to-Point Protocol (PPP), among others. In some implementations, the computer system 1000 may include multiple network interface modules, each configured to communicate with a different network. For example, in these implementations, the computer system 1000 may include a network interface module for communicating with a wired Ethernet network, a wireless 802.11 network, a cellular network, an Infiniband network, etc. The various components and modules of the computer system 1000, described above, may be implemented as discrete components, as a System on a Chip (SoC), as an ASIC, as an NPU, as an FPGA, or any combination thereof.
In some embodiments, the SoC or other component may be communicatively coupled to another computing system to provide various services such as traffic monitoring, traffic shaping, computing, etc. In some embodiments of the technology, the SoC or other component may include multiple subsystems as disclosed with respect to FIG. 10. FIG. 11 illustrates a network 1100 showing various different types of devices, such as nodes comprising the computer system 1000 of FIG. 10, switches, and routers. Network 1100 can form a network infrastructure that can be used to implement a service provider network (e.g., a cloud provider network). In certain embodiments, the network 1100 may be based on a switched architecture with point-to-point links. As illustrated in FIG. 11, the network 1100 includes a plurality of switches 1104a-1104d, which may be arranged in a network. In some cases, the switches are arranged in a multi-layered network, such as a Clos network. Switches generally operate at the data link layer (layer 2) and sometimes the network layer (layer 3) of the Open System Interconnect (OSI) Reference Model and may support several packet protocols. Switches 1104a-1104d may be connected to a plurality of nodes 1102a-1102h and provide multiple paths between any two nodes. The network 1100 may also include one or more network devices 1000 for connection with other networks 1108, such as other subnets, LANs, wide area networks (WANs), or the Internet, and these may be referred to as routers 1106. Routers use headers and forwarding tables to determine the best path for forwarding packets, and use protocols such as the internet control message protocol (ICMP) to communicate with each other and configure the best route between any two devices. In some examples, network(s) 1100 may include any one or a combination of many different types of networks, such as cable networks, the Internet, wireless networks, cellular networks, and other private and/or public networks. Interconnected switches 1104a-1104d and router 1106, if present, may be referred to as a switch fabric, a fabric, a network fabric, or simply a network. In the context of a computer network, the terms “fabric” and “network” may be used interchangeably herein. Nodes 1102a-1102h may be any combination of host systems, processor nodes, storage subsystems, and I/O chassis that represent user devices, service provider computers, or third party computers. Any of nodes 1102a-1102h can be or include functionality of computer system 1000. User devices may include computing devices to access an application 1132 (e.g., a web browser or mobile device application). In some aspects, the application 1132 may be hosted, managed, and/or provided by a computing resources service or service provider. The application 1132 may allow the user(s) to interact with the service provider computer(s) to, for example, access web content (e.g., web pages, music, video, etc.). The user device(s) may be a computing device such as, for example, a mobile phone, a smart phone, a personal digital assistant (PDA), a laptop computer, a netbook computer, a desktop computer, a thin-client device, a tablet computer, an electronic book (e-book) reader, a gaming console, etc. In some examples, the user device(s) may be in communication with the service provider computer(s) via the other network(s) 1108.
Additionally, the user device(s) may be part of the distributed system managed by, controlled by, or otherwise part of the service provider computer(s) (e.g., a console device integrated with the service provider computers). The node(s) of FIG. 11 may also represent one or more service provider computers. One or more service provider computers may provide a native application that is configured to run on the user devices, with which user(s) may interact. The service provider computer(s) may, in some examples, provide computing resources such as, but not limited to, client entities, low latency data storage, durable data storage, data access, management, virtualization, cloud-based software solutions, electronic content performance management, and so on. The service provider computer(s) may also be operable to provide web hosting, databasing, computer application development and/or implementation platforms, combinations of the foregoing, or the like to the user(s). In some embodiments, the service provider computer(s) may be provided as one or more virtual machines implemented in a hosted computing environment. The hosted computing environment may include one or more rapidly provisioned and released computing resources. These computing resources may include computing, networking, and/or storage devices. A hosted computing environment may also be referred to as a cloud computing environment. The service provider computer(s) may include one or more servers, perhaps arranged in a cluster, as a server farm, or as individual servers not associated with one another, and may host the application 1132 and/or cloud-based software services. These servers may be configured as part of an integrated, distributed computing environment. In some aspects, the service provider computer(s) may, additionally or alternatively, include computing devices such as, for example, a mobile phone, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a netbook computer, a server computer, a thin-client device, a tablet computer, a gaming console, etc. In some instances, the service provider computer(s) may communicate with one or more third party computers. In one example configuration, the node(s) 1102a-1102h may include at least one memory 1118 and one or more processing units (or processor(s) 1120). The processor(s) 1120 may be implemented in hardware, computer-executable instructions, firmware, or combinations thereof. Computer-executable instruction or firmware implementations of the processor(s) 1120 may include computer-executable or machine-executable instructions written in any suitable programming language to perform the various functions described. In some instances, the hardware processor(s) 1120 may be a single core processor or a multi-core processor. A multi-core processor may include multiple processing units within the same processor. In some embodiments, the multi-core processors may share certain resources, such as buses and second or third level caches. In some instances, each core in a single or multi-core processor may also include multiple executing logical processors (or executing threads). In such a core (e.g., those with multiple logical processors), several stages of the execution pipeline and also lower level caches may also be shared. The memory 1118 may store program instructions that are loadable and executable on the processor(s) 1120, as well as data generated during the execution of these programs.
Depending on the configuration and type of the node(s) 1102a-1102h, the memory 1118 may be volatile (such as RAM) and/or non-volatile (such as ROM, flash memory, etc.). The memory 1118 may include an operating system 1128, one or more data stores 1130, one or more application programs 1132, one or more drivers 1134, and/or services for implementing the features disclosed herein. The operating system 1128 may support the basic functions of nodes 1102a-1102h, such as scheduling tasks, executing applications, and/or controlling peripheral devices. In some implementations, a service provider computer may host one or more virtual machines. In these implementations, each virtual machine may be configured to execute its own operating system. Examples of operating systems include Unix, Linux, Windows, Mac OS, iOS, Android, and the like. The operating system 1128 may also be a proprietary operating system. The data stores 1130 may include permanent or transitory data used and/or operated on by the operating system 1128, application programs 1132, or drivers 1134. Examples of such data include web pages, video data, audio data, images, user data, and so on. The information in the data stores 1130 may, in some implementations, be provided over the network(s) 1108 to user devices. In some cases, the data stores 1130 may additionally or alternatively include stored application programs and/or drivers. Alternatively or additionally, the data stores 1130 may store standard and/or proprietary software libraries, and/or standard and/or proprietary application programming interface (API) libraries. Information stored in the data stores 1130 may be machine-readable object code, source code, interpreted code, or intermediate code. The drivers 1134 include programs that may provide communication between components in a node. For example, some drivers 1134 may provide communication between the operating system 1128 and additional storage 1122, network device 1124, and/or I/O device 1126. Alternatively or additionally, some drivers 1134 may provide communication between application programs 1132 and the operating system 1128, and/or application programs 1132 and peripheral devices accessible to the service provider computer. In many cases, the drivers 1134 may include drivers that provide well-understood functionality (e.g., printer drivers, display drivers, hard disk drivers, Solid State Device drivers). In other cases, the drivers 1134 may provide proprietary or specialized functionality. The service provider computer(s) or servers may also include additional storage 1122, which may include removable storage and/or non-removable storage. The additional storage 1122 may include magnetic storage, optical disks, solid state disks, flash memory, and/or tape storage. The additional storage 1122 may be housed in the same chassis as the node(s) 1102a-1102h or may be in an external enclosure. The memory 1118 and/or additional storage 1122 and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for the computing devices. In some implementations, the memory 1118 may include multiple different types of memory, such as SRAM, DRAM, or ROM. The memory 1118 and the additional storage 1122, both removable and non-removable, are examples of computer-readable storage media.
For example, computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in a method or technology for storage of information, the information including, for example, computer-readable instructions, data structures, program modules, or other data. The memory 1118 and the additional storage 1122 are examples of computer storage media. Additional types of computer storage media that may be present in the node(s) 1102a-1102h may include, but are not limited to, PRAM, SRAM, DRAM, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, solid state drives, or some other medium which can be used to store the desired information and which can be accessed by the node(s) 1102a-1102h. Computer-readable media also includes combinations of any of the above media types, including multiple units of one media type. Alternatively or additionally, computer-readable communication media may include computer-readable instructions, program modules, or other data transmitted within a data signal, such as a carrier wave or other transmission. However, as used herein, computer-readable storage media does not include computer-readable communication media. The node(s) 1102a-1102h may also include I/O device(s) 1126, such as a keyboard, a mouse, a pen, a voice input device, a touch input device, a display, speakers, a printer, and the like. The node(s) 1102a-1102h may also include one or more communication channels 1136. A communication channel 1136 may provide a medium over which the various components of the node(s) 1102a-1102h can communicate. The communication channel or channels 1136 may take the form of a bus, a ring, a switching fabric, or a network. The node(s) 1102a-1102h may also contain network device(s) 1124 that allow the node(s) 1102a-1102h to communicate with a stored database, another computing device or server, user terminals, and/or other devices on the network(s) 1100. The network device(s) 1124 of FIG. 11 may include similar components discussed with reference to the computer system 1000 of FIG. 10. In some implementations, the network device 1124 is a peripheral device, such as a PCI-based device. In these implementations, the network device 1124 includes a PCI interface for communicating with a host device. The term “PCI” or “PCI-based” may be used to describe any protocol in the PCI family of bus protocols, including the original PCI standard, PCI-X, Accelerated Graphics Port (AGP), and PCI-Express (PCIe), or any other improvement or derived protocols that are based on the PCI protocols discussed herein. The PCI-based protocols are standard bus protocols for connecting devices, such as a local peripheral device, to a host device. A standard bus protocol is a data transfer protocol for which a specification has been defined and adopted by various manufacturers. Manufacturers ensure that compliant devices are compatible with computing systems implementing the bus protocol, and vice versa. As used herein, PCI-based devices also include devices that communicate using Non-Volatile Memory Express (NVMe). NVMe is a device interface specification for accessing non-volatile storage media attached to a computing system using PCIe. For example, the bus interface module 1008 may implement NVMe, and the network device 1124 may be connected to a computing system using a PCIe interface.
A PCI-based device may include one or more functions. A “function” describes operations that may be provided by the network device 1124. Examples of functions include mass storage controllers, network controllers, display controllers, memory controllers, serial bus controllers, wireless controllers, and encryption and decryption controllers, among others. In some cases, a PCI-based device may include more than one function. For example, a PCI-based device may provide a mass storage controller and a network adapter. As another example, a PCI-based device may provide two storage controllers, to control two different storage resources. In some implementations, a PCI-based device may have up to eight functions. In some implementations, the network device 1124 may include single-root I/O virtualization (SR-IOV). SR-IOV is an extended capability that may be included in a PCI-based device. SR-IOV allows a physical resource (e.g., a single network interface controller) to appear as multiple resources (e.g., sixty-four network interface controllers). Thus, a PCI-based device providing a certain functionality (e.g., a network interface controller) may appear to a device making use of the PCI-based device to be multiple devices providing the same functionality. The functions of an SR-IOV-capable storage adapter device may be classified as physical functions (PFs) or virtual functions (VFs). Physical functions are fully featured functions of the device that can be discovered, managed, and manipulated. Physical functions have configuration resources that can be used to configure or control the storage adapter device. Physical functions include the same configuration address space and memory address space that a non-virtualized device would have. A physical function may have a number of virtual functions associated with it. Virtual functions are similar to physical functions, but are light-weight functions that may generally lack configuration resources, and are generally controlled by the configuration of their underlying physical functions. Each of the physical functions and/or virtual functions may be assigned to a respective thread of execution (such as, for example, a virtual machine) running on a host device. The modules described herein may be software modules, hardware modules, or a suitable combination thereof. If the modules are software modules, the modules can be embodied on a non-transitory computer readable medium and processed by a processor in any of the computer systems described herein. It should be noted that the described processes and architectures can be performed either in real-time or in an asynchronous mode prior to any user interaction. The modules may be configured in the manner suggested in FIG. 9 and FIG. 11, functions described herein can be provided by one or more modules that exist as separate modules, and/or module functions described herein can be spread over multiple modules. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail.
It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed; on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the disclosure, as defined in the appended claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Various embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the disclosure. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the disclosure to be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context. 16386101 amazon technologies, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Sep 23rd, 2020 12:00AM https://www.uspto.gov?id=USD0949727-20220426 Automobile connectivity device D949727 The ornamental design for an automobile connectivity device, as shown and described. 1 FIG. 1 is a front, right-side perspective view of an automobile connectivity device; FIG. 2 is a back, left-side perspective view thereof; FIG. 3 is a front view thereof; FIG. 4 is a back view thereof; FIG. 5 is a left-side view thereof; FIG. 6 is a right-side view thereof; FIG. 7 is a top view thereof; FIG. 8 is a bottom view thereof; FIG. 9 is a front, right-side perspective view of the automobile connectivity device of FIG. 1, shown in an alternate configuration; and, FIG. 10 is a right-side view thereof. The dashed broken lines depict portions of the automobile connectivity device that form no part of the claimed design. 29751829 amazon technologies, inc. USA S1 Design Patent Open D10/106.9 15 Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Jun 23rd, 2020 12:00AM https://www.uspto.gov?id=US11315262-20220426 Tracking objects in three-dimensional space using calibrated visual cameras and depth cameras The motion of objects within a scene may be detected and tracked using digital (e.g., visual and depth) cameras aligned with fields of view that overlap at least in part. Objects may be identified within visual images captured from the scene using a tracking algorithm and correlated to point clouds or other depth models generated based on depth images captured from the scene. Once visual aspects (e.g., colors or other features) of objects are correlated to the point clouds, shapes and/or positions of the objects may be determined and used to further train the tracking algorithms to recognize the objects in subsequently captured frames. Moreover, a Kalman filter or other motion modeling technique may be used to enhance the prediction of a location of an object within subsequently captured frames. 11315262 1. A system comprising: a first camera including at least a portion of a storage unit within at least a first field of view, wherein the first camera is configured to capture visual images and depth images; and a computer system in communication with at least the first camera, wherein the computer system is programmed with one or more sets of instructions that, when executed by the computer system, cause the computer system to at least: detect a triggering event associated with the portion of the storage unit, wherein the triggering event is detected at a first time; define a first bounding region corresponding to the triggering event; identify a first visual image captured by the first camera at approximately the first time; identify a first depth image captured by the first camera at approximately the first time; recognize at least a first portion of an object within a first two-dimensional representation of the first bounding region in the first visual image; define a point cloud based at least in part on the first depth image, wherein the point cloud represents a plurality of points in three-dimensional space in at least the first depth image corresponding to at least the first two-dimensional representation of the first bounding region; project at least the first two-dimensional representation of the first bounding region onto the point cloud; determine a shape of the object based at least in part on the first two-dimensional representation of the first bounding region projected onto the point cloud; determine a first position of the object at the first time based at least in part on the first two-dimensional representation of the first bounding region projected onto the point cloud, wherein the first position of the object comprises at least a representative point of the object at the first time; train at least one algorithm to recognize the object based at least in part on the shape of the object and the first position of the object at the first time; capture, by the first camera, a second visual image at a second time, wherein the second time follows the first time; capture, by the first camera, a second depth image at approximately the second time; recognize, by the at least one algorithm, at least a second portion of the object within the second visual image; update the point cloud based at least in part on the second depth image; project at least a portion of the second visual image onto the updated point cloud; determine a second position of the object at the second time based 
at least in part on the second visual image and the updated point cloud; and define a trajectory of the object based at least in part on the first position, the first time, the second position and the second time. 2. The system of claim 1, further comprising: a second camera including at least the portion of the storage unit within a second field of view, wherein the second camera is configured to capture visual images and depth images, wherein the second field of view overlaps the first field of view at least in part, wherein the computer system is in communication with at least the first camera and the second camera, and wherein the one or more sets of instructions, when executed by the computer system, further cause the computer system to at least: identify a third visual image captured by the second camera at approximately the first time; recognize at least a third portion of the object within a second two-dimensional representation of the first bounding region in the third visual image; identify a third depth image captured by the second camera at approximately the first time, wherein the point cloud is defined based at least in part on the first depth image and the third depth image, and wherein the point cloud represents a plurality of points in three-dimensional space in at least the first depth image corresponding to at least the first two-dimensional representation of the first bounding region and the third depth image corresponding to at least the second two-dimensional representation of the first bounding region; and project at least the second two-dimensional representation of the first bounding region onto the point cloud, wherein the shape of the object is determined based at least in part on the first two-dimensional representation of the first bounding region projected onto the point cloud and the second two-dimensional representation of the first bounding region projected onto the point cloud, and wherein the first position of the object at the first time is determined based at least in part on the first two-dimensional representation of the first bounding region projected onto the point cloud and the second two-dimensional representation of the first bounding region projected onto the point cloud. 3. The system of claim 1, wherein the one or more sets of instructions, when executed by the computer system, further cause the computer system to at least: define a second bounding region corresponding to the first position, wherein at least the second portion of the object is recognized within a second two-dimensional representation of the second bounding region in the second visual image, and wherein the portion of the second visual image projected onto the updated point cloud is the second two-dimensional representation of the second bounding region in the second visual image. 4. 
The system of claim 2, wherein the one or more sets of instructions, when executed by the computer system, further cause the computer system to at least: determine, for each of the points of the point cloud, a first score based at least in part on the first visual image, wherein the first score comprises a probability that one of the pixels of the first visual image corresponds to one of the points of the point cloud; determine, for each of the points of the point cloud, a second score based at least in part on the third visual image, wherein the second score comprises a probability that one of the pixels of the third visual image corresponds to one of the points of the point cloud; and determine, for each of the points of the point cloud, an aggregate score comprising at least the first score and the second score, wherein the first position of the object at the first time is determined based at least in part on the aggregate scores for each of the points of the point cloud. 5. A method comprising: determining a first position of an object at a first time; defining a first bounding region based at least in part on the first position; identifying a first visual image captured at a second time by a first imaging device having a first field of view, wherein the first field of view includes at least a first portion of a scene, and wherein the second time follows the first time; recognizing at least a first portion of the object within a first portion of the first visual image, wherein the first portion of the first visual image is a two-dimensional representation of at least a first portion of the first bounding region; identifying a first depth image captured at approximately the second time by a second imaging device having a second field of view, wherein the second field of view includes at least a second portion of the scene; generating a three-dimensional representation of at least a portion of the scene based at least in part on the first depth image; projecting at least the first portion of the first visual image onto the three-dimensional representation; determining a second position of the object at the second time based at least in part on at least the first portion of the first visual image and the three-dimensional representation; defining a second bounding region based at least in part on the second position; identifying a second visual image captured at a third time by the first imaging device, wherein the third time follows the second time; recognizing at least a second portion of the object within at least a first portion of the second visual image, wherein the first portion of the second visual image is a two-dimensional representation of at least a first portion of the second bounding region; identifying a second depth image captured at approximately the third time by the second imaging device; updating the three-dimensional representation based at least in part on the second depth image; projecting at least the first portion of the second visual image onto the updated three-dimensional representation; and determining a third position of the object at the third time based at least in part on the first portion of the second visual image and the updated three-dimensional representation. 6.
The method of claim 5, further comprising: generating a record of motion of the object, wherein the record of the motion of the object comprises: information regarding the first position and the first time; and information regarding the second position and the second time; and storing the record of the motion of the object in at least one data store. 7. A method comprising: determining a first position of an object at a first time; defining a first bounding region based at least in part on the first position; identifying a first visual image captured at a second time by a first imaging device having a first field of view, wherein the first field of view includes at least a first portion of a scene, and wherein the second time follows the first time; identifying a second visual image captured at approximately the second time by a third imaging device having a third field of view, wherein the third field of view includes at least a third portion of the scene, and wherein the third field of view overlaps the first field of view at least in part; recognizing at least a first portion of the object within a first portion of the first visual image, wherein the first portion of the first visual image is a two-dimensional representation of at least a first portion of the first bounding region; recognizing at least a second portion of the object within a first portion of the second visual image, wherein the first portion of the second visual image comprises a two-dimensional representation of at least a second portion of the first bounding region; identifying a first depth image captured at approximately the second time by a second imaging device having a second field of view, wherein the second field of view includes at least a second portion of the scene; generating a three-dimensional representation of at least the second portion of the scene based at least in part on the first depth image; projecting at least the first portion of the first visual image onto a first portion of the three-dimensional representation; projecting at least the first portion of the second visual image onto a second portion of the three-dimensional representation; and determining a second position of the object at the second time based at least in part on at least the first portion of the first visual image projected onto the first portion of the three-dimensional representation and the first portion of the second visual image projected onto the second portion of the three-dimensional representation. 8.
The method of claim 7, wherein projecting at least the first portion of the first visual image onto the first portion of the three-dimensional representation comprises: calculating, for each of a first plurality of image pixels of the first portion of the first visual image, a first score corresponding to a probability that the image pixel depicts at least the first portion of the object, wherein projecting at least the first portion of the second visual image onto the second portion of the three-dimensional representation comprises: calculating, for each of a second plurality of image pixels of the first portion of the second visual image, a second score corresponding to a probability that the image pixel depicts at least the second portion of the object, and wherein the method further comprises: calculating, for each of a plurality of points of the three-dimensional representation, an aggregate score comprising a sum of the first score calculated for one of the first plurality of image pixels corresponding to one of the plurality of points of the three-dimensional representation and the second score calculated for one of the second plurality of image pixels corresponding to the one of the plurality of points of the three-dimensional representation; and defining a subset of the plurality of points of the three-dimensional representation, wherein the subset of the plurality of points includes points having aggregate scores above a predetermined threshold, and wherein the second position of the object is determined based at least in part on the subset of the plurality of points. 9. The method of claim 8, further comprising: determining a first percent occlusion of the object in the first portion of the first visual image; and determining a second percent occlusion of the object in the first portion of the second visual image, wherein the first score is calculated for each of the first plurality of image pixels based at least in part on the first percent occlusion, and wherein the second score is calculated for each of the second plurality of image pixels based at least in part on the second percent occlusion. 10. The method of claim 7, wherein determining the second position of the object comprises: providing at least the first portion of the first visual image to at least one tracking algorithm as a first input; determining a first hypothetical position of the object at the second time based at least in part on a first output of the at least one tracking algorithm; providing at least the first portion of the second visual image to the at least one tracking algorithm as a second input; and determining a second hypothetical position of the object at the second time based at least in part on a second output of the at least one tracking algorithm, wherein the second position of the object at the second time is determined based at least in part on the first hypothetical position and the second hypothetical position. 11. The method of claim 5, wherein defining the first bounding region comprises: determining at least one point in three-dimensional space corresponding to the first position of the object at the first time; and defining the first bounding region based at least in part on a buffer with respect to the at least one point, wherein the first bounding region is at least one of a rectangular volume, a spherical volume, a pyramidal volume or a cylindrical volume. 12.
The method of claim 5, wherein each of the first visual image and the first depth image is captured by a first RGBD camera at approximately the second time, wherein the first RGBD camera comprises the first imaging device and the second imaging device, and wherein the first field of view is substantially identical to the second field of view. 13. The method of claim 5, wherein at least the first portion of the object is recognized within the first portion of the first visual image using a kernelized correlation filter algorithm operating on the first imaging device. 14. The method of claim 5, further comprising: determining, by at least one sensor, information regarding at least one triggering event that occurred within the scene at approximately the first time, wherein the at least one triggering event is associated with the object, and wherein the first position of the object is determined based at least in part on the information regarding the at least one triggering event. 15. The method of claim 14, wherein the at least one triggering event is at least one of: a placement of the object; a retrieval of the object; an entry of at least one actor; or a departure of at least one actor. 16. The method of claim 14, wherein determining the information regarding the at least one triggering event comprises at least one of: detecting a change in mass on at least one surface within the scene; or recognizing at least one of a color, a texture or an outline of the object within the first visual image. 17. The method of claim 5, wherein the object is at least one of: a human body part; or a commercial good manipulated by the human body part. 18. A facility comprising: a storage unit; an imaging device aligned to include at least a portion of the storage unit within at least one field of view, wherein the imaging device is configured to capture visual images and depth images; and a computer resource in communication with at least the imaging device, wherein the computer resource is programmed with one or more sets of instructions that, when executed, cause the computer resource to at least: detect a triggering event associated with an item on the portion of the storage unit at a first time; determine a first position of the item at the first time based at least in part on the triggering event; define a first bounding region corresponding to the first position; capture a first visual image using the imaging device at a second time, wherein the second time follows the first time; capture a first depth image using the imaging device at approximately the second time; recognize at least a first portion of the item within a first portion of the first visual image corresponding to the first bounding region; define a first point cloud based at least in part on the first depth image; determine a second position of the item at the second time based at least in part on the first portion of the first visual image and the first point cloud; define a trajectory representative of motion of the item based at least in part on the first position, the first time, the second position and the second time; define a second bounding region corresponding to the second position; capture a second visual image using the imaging device at a third time, wherein the third time follows the second time; capture a second depth image using the imaging device at approximately the third time; recognize at least a second portion of the item within a first portion of the second visual image corresponding to the second bounding region; 
define a second point cloud based at least in part on the second depth image; determine a third position of the item at the third time based at least in part on the second visual image and the second point cloud; and update the trajectory based at least in part on the third position and the third time. 19. The method of claim 7, further comprising: determining, by at least one sensor, information regarding at least one triggering event that occurred within the scene at approximately the first time, wherein the at least one triggering event is associated with the object, wherein the at least one triggering event is at least one of: a placement of the object; a retrieval of the object; an entry of at least one actor; a departure of at least one actor; a change in mass on at least one surface within the scene; or a detection of at least one of a color, a texture or an outline of the object within the first visual image, and wherein the first position of the object is determined based at least in part on the information regarding the at least one triggering event. 20. The facility of claim 18, wherein the at least one triggering event is at least one of: a placement of the item on the portion of the storage unit; a retrieval of the item from the portion of the storage unit; an entry of at least one actor within the at least one field of view; or a departure of at least one actor from the at least one field of view. 20 CROSS-REFERENCE TO RELATED APPLICATIONS This application is a continuation of U.S. patent application Ser. No. 15/473,430, now U.S. Pat. No. 10,699,421, filed Mar. 29, 2017, the contents of which are incorporated by reference herein in their entirety. BACKGROUND Today, the use of imaging devices such as digital cameras for conducting surveillance or monitoring operations is widespread. For example, digital cameras are often used to monitor the arrivals or departures of goods or the performance of services in materials handling facilities such as warehouses, fulfillment centers, retail establishments or other like facilities, as well as the travels of persons or objects in locations such as airports, stadiums or other dense environments, or the flow of traffic on one or more sidewalks, roadways or highways. Digital cameras are also frequently installed in financial settings such as banks or casinos, where money changes hands in large amounts, at high rates of speed, or between large numbers of people. In dynamic environments such as materials handling facilities, transportation centers, financial institutions or like structures in which diverse collections of people, objects or machines enter and exit from such environments at regular or irregular times or on predictable or unpredictable schedules, it is frequently difficult to detect and track small and/or fast-moving objects using digital cameras. Most systems for detecting and tracking objects in three-dimensional (or "3D") space are limited to the use of a single digital camera and involve both the generation of a 3D mesh (e.g., a polygonal mesh) from depth imaging data captured from such objects and the patching of portions of visual imaging data onto faces of the 3D mesh.
Generating and tracking a 3D model of an object from imaging data captured from the object (e.g., visual images of an object, or depth images or other samples of depths or ranges to the object) is a computationally expensive process that tends to consume or occupy substantial amounts of available data storage, processing and transmission capacities, and may require comparatively lengthy processing times. Therefore, in environments where objects that are to be tracked are large in number or small in size, or are located in tight or crowded quarters or moving at high speeds, the generation and tracking of 3D models of such objects is currently infeasible. BRIEF DESCRIPTION OF THE DRAWINGS FIGS. 1A through 1I are views of aspects of one system for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure. FIG. 2 is a block diagram of components of one system for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure. FIG. 3 is a flow chart of one process for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure. FIG. 4 is a view of data captured using one system for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure. FIGS. 5A through 5D are views of aspects of one system for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure. FIGS. 6A through 6G are views of aspects of one system for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure. FIG. 7 is a flow chart of one process for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure. FIGS. 8A through 8M are views of aspects of one system for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure. DETAILED DESCRIPTION As is set forth in greater detail below, the present disclosure is directed to systems and methods for tracking objects in three-dimensional (or “3D”) space using one or more calibrated imaging devices. More specifically, one or more implementations of the present disclosure are directed to tracking target objects within physical scenes, where the target objects are simultaneously observed by a number of calibrated visual cameras and depth cameras having overlapping fields of view. One or more of the implementations disclosed herein are particularly useful in tracking the presence and motion of objects within materials handling facilities, or in other environments where large numbers of objects of varying sizes may appear and remain present for brief or intermittent periods of time and/or are in motion at high rates of speed, even where such objects may be blocked or hidden from the fields of view of one or more of the imaging devices for one or more of such periods of time. Referring to FIGS. 1A through 1I, views of aspects of one system 100 for tracking objects in 3D using calibrated cameras in accordance with implementations of the present disclosure are shown. As is shown in FIGS. 1A and 1B, the system 100 includes a materials handling facility 120 having a plurality of cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9 and a plurality of shelving units 170-1, 170-2, 170-3 aligned in parallel with respect to one another. 
One or more operators 180-1, 180-2, e.g., customers, workers or other personnel, may travel throughout the materials handling facility 120 alone or with one or more carts 185-1, 185-2 or dollies, hand trucks, bins or the like, such as between or among the various shelving units 170-1, 170-2, 170-3, during the performance of one or more tasks. For example, one or more of the operators 180-1, 180-2 may pick one or more items from the various shelves of shelving units 170-1, 170-2, 170-3, or stow one or more items onto such shelves. Alternatively, one or more autonomous mobile robots (not shown) may also perform one or more tasks within the materials handling facility 120, e.g., picking and/or storing items. Each of the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9 is calibrated, such that relationships between coordinates of imaging data captured by such cameras and directions relative to their respective image sensors are known. Moreover, each of the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9 is also aligned and configured to capture imaging data in the form of visual images (e.g., color images, such as red-green-blue images, or “RGB” images) and/or depth images (e.g., depth data or ranging data) from objects within their respective fields of view. For example, the camera 125-1 is an RGB camera (e.g., a camera configured to capture visual images) aligned to include a front face of the shelving unit 170-1 within its field of view, while the camera 125-2 is an RGBD camera (e.g., a camera configured to capture both visual images and depth images) aligned to include a front face of the shelving unit 170-2 and spaces adjacent thereto within its field of view, and the camera 125-3 is an RGB camera aligned to include a front face of the shelving unit 170-3 within its field of view. Similarly, the cameras 125-4, 125-5, 125-6 are an RGBD camera, an RGB camera and an RGBD camera, respectively, that are aligned to include the front faces of the shelving units 170-1, 170-2, 170-3 and spaces adjacent to such shelving units 170-1, 170-2, 170-3 within their respective fields of view. The cameras 125-7, 125-8, 125-9 are RGBD cameras that are mounted aloft within the materials handling facility 120 and aligned to include spaces adjacent to the shelving units 170-1, 170-2, 170-3 within their fields of view. Thus, each of the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9 of the materials handling facility 120 shown in FIGS. 1A and 1B is configured to capture imaging data in the form of visual images and/or depth images regarding activity and/or interactions with one or more of the shelving units 170-1, 170-2, 170-3, including imaging data regarding specific aspects of the operators 180-1, 180-2, the carts 185-1, 185-2, or any objects being transported throughout the materials handling facility 120, such as one or more items being picked from or stowed onto the shelving units 170-1, 170-2, 170-3. In accordance with one or more implementations of the present disclosure, objects within an environment such as the materials handling facility 120 shown in FIGS. 1A and 1B may be tracked based on visual imaging data and depth imaging data captured using the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9. Two or more cameras may be synchronized, and an initial position of a target object may be selected on any basis. 
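To make the calibration relationship concrete, the following is a minimal sketch of pinhole-model projection and back-projection in Python; the matrices K, R, t and the function names are illustrative assumptions, not items from the disclosure.

    import numpy as np

    def project_point(K, R, t, point_3d):
        # Project a 3D world point into pixel coordinates (pinhole model).
        # K: 3x3 intrinsics; R: 3x3 rotation and t: (3,) translation (extrinsics).
        p_cam = R @ point_3d + t            # world -> camera coordinates
        uvw = K @ p_cam                     # camera -> homogeneous pixel coordinates
        return uvw[:2] / uvw[2]             # perspective divide -> (u, v)

    def back_project_pixel(K, R, t, pixel, depth):
        # Recover a 3D world point from a pixel and its sensed depth (z range).
        uv1 = np.array([pixel[0], pixel[1], 1.0])
        p_cam = np.linalg.inv(K) @ uv1 * depth   # camera-frame point at that depth
        return R.T @ (p_cam - t)                 # camera -> world coordinates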
The initial position of the target object may be identified and/or represented in any manner, such as by coordinates of a center or centroid of the target object, coordinates of each of a set or a collection of points corresponding to the object (e.g., in 3D space), or coordinates of a representative point of the target object, which may or may not be a center or centroid of the target object. A set of the active cameras, or cameras that include at least a portion of a target object within their respective fields of view, may be identified and an extent to which the target object is visible or occluded from such fields of view may be determined. For example, for a given target object within a 3D bounding region, a portion of a 2D representation of the bounding region that appears within such fields of view of a digital camera or another imaging device may be determined. Likewise, degrees of occlusion of a target object within each of such fields of view may be determined in any manner. In some embodiments, a degree of occlusion may be determined by comparing a z-buffer corresponding to a target object to a z-buffer of other points within a field of view (e.g., a camera frustum) spanned by a 2D representation of a 3D bounding region that is projected into the field of view, or in any other manner. A point cloud representative of all available depth information regarding a physical scene may be defined, and two-dimensional predictions or hypotheses as to positions of the target object expressed in visual images captured at synchronized times may be determined and correlated with points in space expressed in the point cloud to determine positions of the target object in 3D space at the synchronized times. A "tracklet," or a record of the positions of the target object at various times, beginning with the initial position and continuing with the subsequently determined positions, may be defined. So long as each of a set of "stopping conditions" (or reasons to terminate tracking of an item) remains unmet, positions that are determined from the visual images correlated with the point cloud may be added to the tracklet, which ultimately defines the motion of the target object. As is shown in FIG. 1C, the operator 180-2 is shown as extending a left hand 183-2 upwardly and toward the front face of the shelving unit 170-2, e.g., to retrieve one or more items from a shelf of the shelving unit 170-2, or to stow one or more items thereon. The cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9 are synchronized, and a plurality of images 130-1, 130-2, 130-3, 130-4 are captured from a set of the cameras that include the operator 180-2 and the cart 185-2 in their respective fields of view. This set includes the camera 125-2, which captures the depth image 130-3 and a visual image 130-4 from one side of the shelving unit 170-2 and spaces adjacent to a front face of the shelving unit 170-2, as well as the camera 125-5, which captures the visual image 130-2 from another side of the shelving unit 170-2 and of the spaces adjacent to the front face of the shelving unit 170-2, and the camera 125-8, which captures the depth image 130-1 from above the shelving unit 170-2 and the spaces adjacent to the front face of the shelving unit 170-2.
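One way such a z-buffer comparison might be implemented is sketched below; the per-pixel depth arrays and the tolerance value are assumptions made for illustration, not parameters from the disclosure.

    import numpy as np

    def occlusion_fraction(target_depth, scene_depth, tolerance=0.05):
        # A pixel of the projected 2D bounding region is treated as occluded
        # when the sensed scene depth (the z-buffer) is measurably closer to
        # the camera than the expected depth of the target at that pixel.
        occluded = scene_depth < (target_depth - tolerance)
        return float(np.count_nonzero(occluded)) / occluded.size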
The depth image 130-1 includes depth or ranging data regarding distances from the camera 125-8 to objects within its field of view at time t1, e.g., a top view of the shelving unit 170-2, including a distance z1A to an upper surface of the shelving unit 170-2, a distance z2A to the left hand 183-2 of the operator 180-2, a distance z3A to a head of the operator 180-2, a distance z4A to a torso of the operator 180-2, a distance z5A to upper surfaces of the cart 185-2, and a distance z6A to a floor of the materials handling facility 120 in a vicinity of the shelving unit 170-2. The depth image 130-3 similarly includes depth or ranging data regarding distances from the camera 125-2 to objects within its field of view at time t1, e.g., a front view of the shelving unit 170-2, including a distance z1B to a left face of the shelving unit 170-2, a distance z2B to the left hand 183-2 of the operator 180-2, a distance z3B to the head of the operator 180-2, a distance z4B to the torso of the operator 180-2, a distance z5B to a right leg of the operator 180-2, a distance z6B to a basket or a bin of the cart 185-2, and a distance z7B to wheels of the cart 185-2. The visual image 130-2 depicts colors, textures, outlines or other aspects of objects within a field of view of the camera 125-5 at time t1, e.g., the front view of the shelving unit 170-2, including but not limited to colors, textures, outlines or other aspects of the shelving unit 170-2, the operator 180-2 and the cart 185-2. Similarly, the visual image 130-4 depicts colors, textures, outlines or other aspects of objects within a field of view of the camera 125-2 at time t1, e.g., a rear view of the shelving unit 170-2, including but not limited to colors, textures, outlines or other aspects of the shelving unit 170-2, the operator 180-2, the left hand 183-2 and the cart 185-2. The visual image 130-2 and the visual image 130-4 include colors, textures, outlines or other aspects of such objects from different perspectives, e.g., the perspectives of the camera 125-5 and the camera 125-2. As is discussed above, the systems and methods of the present disclosure may be utilized to track positions of objects in 3D space based on imaging data captured by multiple visual cameras and/or depth cameras (e.g., digital cameras). In some implementations, the positions of the objects may be identified or represented in any manner, such as by coordinates of a center or centroid of the target object, or coordinates of one or more of a set or a collection of points corresponding to the target object (e.g., one or more representative points), which may but need not include a center or a centroid of the target object. Moreover, the positions may be identified or represented according to any coordinate system, including but not limited to a rectangular (or Cartesian) coordinate system, e.g., with x, y and z coordinates with respect to an origin, or a polar coordinate system, e.g., with angular and radial coordinates with respect to a pole, or any other type or form of coordinate system. In accordance with some implementations of the present disclosure, the tracking of objects in 3D space may begin with the identification of a "seed box" centered on a predicted initial position of a target object, viz., the left hand 183-2 of the operator 180-2, with buffers of nominal lengths (e.g., ten centimeters) around a predicted initial position.
The seed box may be established based on any type or form of triggering event, as represented by or in any type or form of information or data, and may be defined or oriented with respect to one or more points in space. For example, the seed box may be established based on depth imaging data captured by one or more depth cameras, based on colors, textures, outlines or other aspects expressed in visual imaging data captured by one or more visual cameras, or on a combination of depth imaging data and visual imaging data. Alternatively, the seed box may be established based on information or data captured, sensed or received from any type or form of sensor, including but not limited to a change in weight detected by a weight sensor provided in one or more shelves of a storage unit, a manual interaction with a touchscreen on which visual imaging data is presented, or any other information or data. The seed box may also be based on intrinsic or extrinsic data regarding objects, e.g., candidate target objects, including but not limited to locations of such objects, or one or more dimensions (such as lengths, widths, heights, diameters or weights), shapes or other aspects of such objects. As is shown in FIG. 1C, the seed box is represented in the depth image 130-1 by a two-dimensional box 135-1, and in the visual image 130-2, the depth image 130-3 and the visual image 130-4 by two-dimensional boxes 135-2, 135-3, 135-4. An initial position of the left hand 183-2 may be determined based on the depth images 130-1, 130-3 and the visual images 130-2, 130-4. As is shown in FIG. 1D, a point cloud (or depth model) 140-1 of the left hand 183-2 may be defined from the depth images 130-1, 130-3, e.g., depth information within the boxes 135-1, 135-3, and correlated with the visual images 130-2, 130-4, viz., representations of the target object within the boxes 135-2, 135-4, that are captured at a synchronized time t1. Additionally, predictions of locations of the target object at time t1 may be determined based on any colors, textures, outlines or other aspects of the object that are present within a two-dimensional representation of the seed box within the visual images 130-2, 130-4. The pixels or groups of pixels corresponding to the target object within the visual images 130-2, 130-4 may be correlated with one another, with respect to the point cloud 140-1, e.g., by triangulation. An initial position of the target object at time t1, viz., the left hand 183-2 of the operator 180-2, may then be added to a tracklet or other record of positions of the target object. The initial position may be represented by coordinates of one or more points in space associated with the target object, including but not limited to a center or centroid of the target object, one or more representative points, or a set of points corresponding to the target object in 3D space. After the initial position of the left hand 183-2 at time t1 has been determined, the left hand 183-2 may be further tracked within the materials handling facility 120 as long as the left hand 183-2 remains within a field of view of at least one of the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9.
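A minimal sketch of defining such a seed box around a predicted initial position, using the nominal ten-centimeter buffer mentioned above (the function and argument names are illustrative):

    import numpy as np

    def seed_box(initial_position, buffer_m=0.10):
        # Axis-aligned 3D bounding region around the predicted initial position,
        # e.g., a hand detected by a triggering event; ten-centimeter buffer on
        # each side by default. Returns (min_corner, max_corner).
        p = np.asarray(initial_position, dtype=float)
        return p - buffer_m, p + buffer_m

    lo, hi = seed_box((1.2, 0.8, 1.5))   # hypothetical hand position, in meters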
The left hand 183-2 may be tracked within the visual images, e.g., from frame-to-frame, using any number of tracking algorithms or systems, such as a kernelized correlation filter (or "KCF") tracker, which may predict an appearance of the left hand 183-2 in a current visual image frame based on the appearances of the left hand 183-2 determined from previous visual image frames. Such algorithms or systems may operate on a centralized server or other computer device, or on one or more external or distributed computer devices, including but not limited to one or more computer processors operating on one or more of the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9. Additionally, such tracking algorithms or systems may be further trained to predict positions of the left hand 183-2 in a visual image captured by a camera based on positions of the left hand 183-2 determined using one or more other cameras, and also to refine their learning of both the position and the appearance of the left hand 183-2. The left hand 183-2 may also be tracked within depth images, e.g., in point clouds generated from the depth images. Points or scores may be assigned to pixels that appear in synchronized visual images and/or depth images captured from one or more calibrated cameras at (or about) a given time. For example, each of the pixels of the visual images may be scored based on probabilities that such pixels correspond to the target object, viz., the left hand 183-2, and each of such pixels from all of the visual images captured at a given time may be projected onto points of a point cloud, e.g., the point cloud 140-1, generated from depth images captured at the given time. In some implementations, a plurality of volumetric regions corresponding to intersections of line segments extending from cameras to portions of visual images and point clouds may be discretized and combined according to one or more algorithms, e.g., a three-dimensional Bresenham line algorithm. In some implementations, scores determined for pixels of all available visual images that include a target object within their respective views may be aggregated for each point of the point cloud. Points of the point cloud having maximum scores, in the aggregate, may be assigned to a maximum score set, and determined to correspond to the target object at the given time. A total hypothesis of the target object, at the given time, may be defined as a three-dimensional region centered at a median of the maximum score set, e.g., according to a "voting" algorithm that exploits the geometry of the calibrated cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9 to estimate a position of one or more points in 3D space that correspond to the target object. For example, a position of the left hand 183-2 at the time t1 may be determined by any mathematical or statistical procedure, such as by ranking or scoring one or more of the pixels or groups of pixels appearing within one or both of the visual images 130-2, 130-4, and projecting such pixels onto the point cloud 140-1 defined based on the depth images 130-1, 130-3 according to a voting algorithm that selects one or more of the points of the point cloud 140-1 having the highest total number of scores.
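The scoring-and-voting step described above might be sketched as follows, assuming each camera's 2D tracker response (e.g., a KCF filter response) has already been sampled at the pixel onto which each point of the point cloud projects; the array shapes and the top-scoring fraction are assumptions:

    import numpy as np

    def vote_for_position(points, per_camera_scores, top_fraction=0.05):
        # points: (N, 3) point cloud; per_camera_scores: list of (N,) arrays of
        # per-point scores from each camera's visual tracker (zero when a point
        # falls outside that camera's view or is occluded).
        aggregate = np.sum(per_camera_scores, axis=0)     # combine all cameras
        cutoff = np.quantile(aggregate, 1.0 - top_fraction)
        max_score_set = points[aggregate >= cutoff]       # highest-voted points
        return np.median(max_score_set, axis=0)           # center of hypothesis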
Alternatively, probabilities (e.g., probability maps) may be assigned to each of the pixels, or regions of pixels, appearing in visual images, and used to estimate the motion of the target object, thereby enhancing predictions of the presence of the left hand 183-2 in subsequently captured visual images. The position of the left hand 183-2 at the time t1 may be used to train one or more of the tracking algorithms to recognize the left hand 183-2 in visual image frames captured at later times. For example, positions of points in 3D space that correspond to the left hand 183-2 may be determined based on hypotheses of such positions of the left hand 183-2 determined by each of the cameras, such as by selecting a set of pixels in the visual images captured at a synchronized time having a maximum score, or scores in excess of a threshold, or by determining probabilities of such positions of the left hand 183-2 that are associated with such pixels or regions, and associating scores of such pixels with points of a point cloud generated based on depth images captured at the synchronized time, e.g., according to a voting algorithm. For example, as is shown in FIG. 1E, a depth image 130-5, a visual image 130-6, a depth image 130-7 and a visual image 130-8 are captured by the camera 125-8, the camera 125-5, and the camera 125-2 at time t2. The depth images 130-5, 130-7 and the visual images 130-6, 130-8 each depict a retrieval of an item from the shelving unit 170-2 by the left hand 183-2. For example, the depth image 130-5 includes depth or ranging data regarding distances from the camera 125-8 to objects within its field of view at time t2, including a distance z1C to an upper surface of the shelving unit 170-2, a distance z2C to the left hand 183-2 clutching the item, a distance z3C to a head of the operator 180-2, a distance z4C to a torso of the operator 180-2, a distance z5C to upper surfaces of the cart 185-2, and a distance z6C to a floor of the materials handling facility 120 in a vicinity of the shelving unit 170-2. The depth image 130-7 similarly includes depth or ranging data regarding distances from the camera 125-2 to objects within its field of view at time t2, including a distance z1D to a left face of the shelving unit 170-2, a distance z2D to the left hand 183-2 clutching the item, a distance z3D to the head of the operator 180-2, a distance z4D to the torso of the operator 180-2, a distance z5D to a right leg of the operator 180-2, a distance z6D to a basket or a bin of the cart 185-2, and a distance z7D to wheels of the cart 185-2. The visual images 130-6, 130-8 depict colors, textures, outlines or other aspects of objects within the fields of view of the cameras 125-5, 125-2 at time t2, including but not limited to colors, textures, outlines or other aspects of the shelving unit 170-2, the operator 180-2 and the cart 185-2, as well as the left hand 183-2 clutching the item, from the perspectives of the camera 125-5 and the camera 125-2. As is shown in FIG. 1F, depth images 130-9, 130-11 and visual images 130-10, 130-12 are captured by the camera 125-8, the camera 125-5, and the camera 125-2 at time t3, as the operator 180-2 turns toward the cart 185-2 while clutching the item. The depth images 130-9, 130-11 include depth or ranging data regarding distances from the cameras 125-8, 125-2 to objects within their respective fields of view at time t3, while the visual images 130-10, 130-12, captured by the cameras 125-5, 125-2, depict colors, textures, outlines or other aspects of such objects at time t3.
The left hand 183-2 of the operator 180-2 is occluded in the depth image 130-11 and the visual image 130-12. Similarly, as is shown in FIG. 1G, depth images 130-13, 130-15 and visual images 130-14, 130-16 are captured by the camera 125-8, the camera 125-5, and the camera 125-2 at time t4, as the operator 180-2 deposits the item into the cart 185-2. The depth images 130-13, 130-15 include depth or ranging data regarding distances from the cameras 125-8, 125-2 to objects within their respective fields of view at time t4, while the visual images 130-14, 130-16, captured by the cameras 125-5, 125-2, depict colors, textures, outlines or other aspects of such objects at time t4. The left hand 183-2 of the operator 180-2 is occluded in the depth image 130-15 and the visual image 130-16. Based on the visual images and the depth images captured while the left hand 183-2 is actively present within the fields of view of at least one of the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9, actual positions of the left hand 183-2 within 3D space may be determined for each and every point in time, based on correlations between hypothetical positions of the left hand 183-2 determined from point clouds generated from the depth images and hypothetical positions of the left hand 183-2 determined from the visual images. As is shown in FIG. 1H, correlated portions of the visual images 130-6, 130-8 captured at time t2 that correspond to the left hand 183-2 (e.g., as identified by a trained tracking algorithm) and are included in the boxes 135-6, 135-8 may be projected onto a point cloud 140-2 generated from the depth images 130-5, 130-7 captured at time t2. A voting algorithm that selects the best or most appropriate positions of the left hand 183-2 expressed in the point cloud 140-2 (or in the depth images 130-5, 130-7) based on the projected portions of the visual images 130-6, 130-8 may then be executed to determine an actual position of the left hand 183-2 (e.g., a position of a center or centroid of the left hand 183-2, positions of each of a set of points in 3D space that correspond to the left hand 183-2, or a position of a representative point of the set of points) at time t2. Because the cameras are calibrated, voting line segments may be extended from lenses or other optical elements of the respective cameras through the pixels or regions of pixels corresponding to the left hand 183-2 within each of the visual images 130-6, 130-8 of the left hand 183-2 captured by such cameras at (or about) time t2, and volumetric regions corresponding to intersections of two or more sets of such line segments may be mapped to points of the point cloud 140-2 and determined to represent a position of the left hand 183-2 at time t2. In some implementations, information regarding a position of the left hand 183-2 at time t2 may be used to further train the tracking algorithm, as necessary, to recognize the position and/or the appearance of the left hand 183-2 within visual images that are subsequently captured. Similarly, a position of the left hand 183-2 at time t3 may be determined by correlating portions of the visual image 130-10 captured at (or about) time t3 that correspond to the left hand 183-2 (e.g., as identified by the trained tracking algorithm) and are included in the box 135-10, and projecting such portions onto a point cloud 140-3 generated from the depth image 130-9 captured at (or about) time t3.
A voting algorithm that selects the most appropriate position of the left hand 183-2 expressed in the point cloud 140-3 (or in the depth image 130-9) based on the projected portions of the visual image 130-10 may then be executed to determine the position of the left hand 183-2 at (or about) time t3. Because the left hand 183-2 is occluded in the depth image 130-11 and the visual image 130-12 captured at time t3, the depth image 130-11 and the visual image 130-12 may not be used to determine the position of the left hand 183-2 at time t3. Likewise, a position of the left hand 183-2 at time t4 may be determined by correlating portions of the visual image 130-14 captured at time t4 that correspond to the left hand 183-2 (e.g., as identified by the trained tracking algorithm) and are included in the box 135-14, and projecting such portions onto a point cloud 140-4 generated from the depth image 130-13 captured at time t4. A voting algorithm that selects the best or most appropriate position of the left hand 183-2 expressed in the point cloud 140-4 (or in the depth image 130-13) based on the projected portions of the visual image 130-14 may then be executed to determine the position of the left hand 183-2 at time t4. Because the left hand 183-2 is occluded in the depth image 130-15 and the visual image 130-16 captured at time t4, the depth image 130-15 and the visual image 130-16 may not be used to determine the position of the left hand 183-2 at time t4. Positions of the left hand 183-2 determined based on depth images 130-1, 130-3, 130-5, 130-7, 130-9, 130-11, 130-13, 130-15 or point clouds 140-1, 140-2, 140-3, 140-4 generated therefrom and visual images 130-2, 130-4, 130-6, 130-8, 130-10, 130-12, 130-14, 130-16 are added to a tracklet representative of the motion of the left hand 183-2 while the left hand 183-2 is visible to at least one of the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9 within the materials handling facility 120 (a sketch of such a tracklet, augmented with a motion model, follows below). As is shown in FIG. 1I, a tracklet representative of the motion of the left hand 183-2 over each of the points in time during the period in which the left hand 183-2 was visible to at least one of the cameras 125-1, 125-2, 125-3, 125-4, 125-5, 125-6, 125-7, 125-8, 125-9 within the materials handling facility 120 is shown. Accordingly, implementations of the systems and methods of the present disclosure may be used to track objects that are in motion in 3D space within materials handling facilities or like environments using imaging data (e.g., visual images and/or depth images) captured using RGB cameras, RGBD cameras, or other imaging devices. Such implementations may establish a "seed box," or a 3D bounding region around an initial position of a target object to be tracked within a scene (e.g., a position of a center or a centroid of the target object, or of a set of points corresponding to the target object, or of a representative point of the set of points), based on a depth image of the scene, a visual image of the scene, or any other relevant information or data regarding the scene. The seed box may be defined based on any triggering event, including one or more events that may be recognized in imaging data captured by any type or form of imaging device, as well as events that may be captured, sensed or detected in any other type or form of information or data.
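Because the abstract also notes that a Kalman filter or other motion modeling technique may enhance predictions of an object's location in subsequently captured frames, a sketch of a tracklet backed by a constant-velocity Kalman filter follows; the noise parameters q and r are assumptions, as the disclosure does not specify filter parameters:

    import numpy as np

    class KalmanTracklet:
        # Tracklet of voted 3D positions with a constant-velocity Kalman filter.
        def __init__(self, position, q=1e-2, r=1e-2):
            self.x = np.concatenate([np.asarray(position, float), np.zeros(3)])
            self.P = np.eye(6)          # state covariance over [position, velocity]
            self.q, self.r = q, r       # process / measurement noise (assumed)
            self.history = [np.asarray(position, float)]

        def predict(self, dt):
            # Propagate the state: position advances by velocity * dt.
            F = np.eye(6)
            F[:3, 3:] = dt * np.eye(3)
            self.x = F @ self.x
            self.P = F @ self.P @ F.T + self.q * np.eye(6)
            return self.x[:3]           # predicted position for the next frame

        def update(self, measured):
            # Fold in a new voted position (the filter observes position only).
            H = np.hstack([np.eye(3), np.zeros((3, 3))])
            y = np.asarray(measured, float) - H @ self.x
            S = H @ self.P @ H.T + self.r * np.eye(3)
            K = self.P @ H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(6) - K @ H) @ self.P
            self.history.append(self.x[:3].copy())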
For example, the information or data on which a seed box may be defined may include intrinsic or extrinsic information or data regarding locations, dimensions, shapes or other aspects of objects that may aid in the identification and/or selection of such boxes according to the present disclosure. Additionally, the seed box may be defined or oriented with respect to one or more points in space, including but not limited to a center or centroid of the target object at a predicted initial position, or with respect to one or more of a collection of points corresponding to the object (e.g., one or more representative points), which may but need not include a center or a centroid of the target object. One or more imaging devices, including both visual cameras and depth cameras, may be synchronized and configured to begin capturing imaging data from the scene. An active set of the imaging devices that include the target object within their fields of view, and in a sufficiently unoccluded manner, may be identified, and visual images and depth images of the target object may be captured at a synchronized time. A visual image may be an image channel or file captured from a scene by an imaging device, with the visual image including a plurality of pixels, and each of the pixels representing one or more color values for visible light captured by an image sensor from a portion of the scene that corresponds to one of the pixels (e.g., black-and-white, grayscale, or color according to any standard or color model, such as RGB, CMYK or the like). A depth image is an image channel or file captured from a scene by an imaging device, with the depth image including a plurality of pixels, and each of the pixels representing a range or distance from an image sensor to a portion of the scene that corresponds to one of the pixels. Alternatively, any other type of imaging data may be captured by imaging devices and utilized in accordance with one or more implementations of the present disclosure, including but not limited to infrared images, radiographic images or any other representations of captured light. A point cloud (or depth model) of a target object may be defined based on one or more of the depth images, and one or more 2D hypothetical positions of the target object may be predicted based on portions of the target object identified in the visual images using a tracking algorithm, e.g., an Open Source Computer Vision (or OpenCV) tracker or a KCF tracker. A point cloud may be a representation of depth data, or formed from depth data, corresponding to an object in which individual points in space that are associated with the object have coordinates defining their respective locations relative to an imaging system. A point cloud may be defined as a grouping of data points, with each point at a different location on a surface of an object. Each data point may have a set of coordinates according to a standard coordinate system (e.g., a rectangular or Cartesian coordinate system, a polar coordinate system). For example, in one implementation, the coordinates may define a position within a Cartesian space having three orthogonal axes, e.g., an x-axis, a y-axis and a z-axis, and the coordinates may be expressed as (x, y, z) with respect to such axes.
The coordinates may further include a distance value, such that each pixel is designated with an x coordinate and a y coordinate with respect to an imaging sensor, and the distance from the imaging sensor to an object represented by that pixel is designated with a z coordinate. A point cloud may be provided in an ordered form or an unordered form. Ordered point cloud data further includes a representation of a relationship between points. For example, ordered point cloud data may include a vector from one point to another, or may be expressed as a linked list of points. In some implementations, ordered point cloud data may be uniform in spatial distribution, such as in a regularly gridded two-dimensional or three-dimensional array. Unordered point cloud data may include data points in which information about relationships between such points is not known or included. Additionally, unordered point cloud data need not be uniform in spatial distribution. For example, unordered point cloud data provided by a 3D sensor may have data points which do not readily fit within a regularly gridded two-dimensional or three-dimensional array. In some implementations, hypothetical positions of a target object may correspond to a hypothetical center or centroid of the target object, or a hypothetical set or collection of one or more points corresponding to the object (e.g., one or more representative points). The hypothetical positions may then be correlated with one another and with a point cloud to identify a position of the target object at the synchronized time (e.g., a position of a center or centroid of the target object at the synchronized time, or a set or collection of one or more points, such as representative points, corresponding to the target object at the synchronized time), and the position may be added to a "tracklet" or other record of the positions of the target object within the scene. If a target object remains sufficiently visible within the scene, each of the imaging devices may be advanced to a next frame, the active set of the imaging devices may be updated, and the process for determining a position of the target object may be repeated. Information regarding positions determined by correlations between hypothetical positions and a point cloud may be used to train a tracking algorithm, as necessary. For example, after a position of a target object in 3D space has been determined based on visual images and depth images, the position may then be projected into each of the imaging devices from which the visual images and the depth images were captured, and used to train the tracking algorithm associated with each of the imaging devices to recognize an appearance of the target object in subsequently captured visual images and depth images. For example, in some implementations, a 3D location of a target object may be deduced from 2D hypothetical positions determined from visual images in which the target object is visible and is not substantially occluded (e.g., where the object has a low percentage of occlusion) that are captured by one or more imaging devices (e.g., RGB cameras). For each imaging device in which at least a portion of a bounding region has a non-zero depth value, pixels corresponding to a 2D hypothetical position of a target object (e.g., as defined by a tracking algorithm or other technique) may be back-projected therein, and a position of the target object in 3D space may be determined accordingly.
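As an illustration of the pixel-to-coordinate mapping and of an ordered point cloud described above, the sketch below back-projects every pixel of a depth image through assumed pinhole intrinsics (fx, fy, cx, cy are illustrative parameter names), keeping the (H, W) grid so neighbor relationships between points are preserved:

    import numpy as np

    def depth_to_ordered_cloud(depth, fx, fy, cx, cy):
        # depth: (H, W) array of z distances. The output keeps the (H, W) grid,
        # so relationships between neighboring points are preserved -- an
        # "ordered" point cloud in the sense described above.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1)     # (H, W, 3) points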
In some implementations, the 3D position may be determined to be a region such as a cube or a sphere having a volume defined by a fixed rectangular or axial distance (e.g., ten centimeters) from a point and/or one or more axes. In some implementations, where an imaging device does not have non-zero depth values within a 2D representation of a 3D bounding region, a set of points within a view frustum defined by such a 2D representation may be identified, e.g., a pyramidal section in 3D space that includes all points of a point cloud that are projected onto a rectangle or other section corresponding to the 2D representation. Each point of a point cloud is assigned a score according to its location, e.g., whether the point is within a view frustum and/or a representation of a bounding region for each of the imaging devices. In some implementations, the scores may be weighted based on percentages of occlusion and/or aggregated based on a number of view frustums in which such points appear. Points having maximum scores may be selected, and a 3D position of the object may be defined as a box, or a volume having any other shape, that is centered or oriented with respect to one or more points having a maximum score, or points having scores above a predetermined threshold. In some implementations, a voting algorithm or technique may be used to estimate a joint object position probability distribution in 3D space based on responses obtained from 2D tracking algorithms or other techniques and the geometry of two or more calibrated cameras. In each of such cameras, a tracking algorithm (e.g., an OpenCV tracker or a KCF tracker) may be used to determine a filter response for visual images captured thereby, and the filter response may include a probability map for an entire field of view, rather than specific pixels. The response pixels may be back-projected as lines and/or line segments in 3D space, including not only response pixels having maximum values but also response pixels having values above a predetermined threshold (e.g., a weighted or unweighted threshold). Such line segments may be discretized, e.g., according to a 3D Bresenham algorithm, and a vote may be determined based on the intersections of the lines or line segments. Those of ordinary skill in the pertinent arts will recognize that imaging data, e.g., visual imaging data, depth imaging data, infrared imaging data, or imaging data of any other type or form, may be captured using one or more imaging devices such as digital cameras, depth sensors, range cameras, infrared cameras or radiographic cameras. Such devices generally operate by capturing light that is reflected from objects, and by subsequently calculating or assigning one or more quantitative values to aspects of the reflected light, e.g., image pixels, then generating an output based on such values, and storing such values in one or more data stores. For example, a digital camera may include one or more image sensors (e.g., a photosensitive surface with a plurality of pixel sensors provided thereon), having one or more filters associated therewith. Such sensors may detect information regarding aspects of any number of image pixels of the reflected light corresponding to one or more base colors (e.g., red, green or blue) of the reflected light, or distances to objects from which the light was reflected.
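A Bresenham-style discretization of a 3D voting line segment could be sketched as follows, so that back-projected segments can be rasterized into a voxel grid and their intersections accumulated; this is an illustrative variant, not code from the disclosure:

    import numpy as np

    def bresenham_3d(p0, p1):
        # Rasterize a 3D segment into integer voxel steps (Bresenham-style),
        # driving along the dominant axis and accumulating per-axis error.
        p0, p1 = np.asarray(p0, int), np.asarray(p1, int)
        d = np.abs(p1 - p0)
        step = np.sign(p1 - p0)
        axis = int(np.argmax(d))            # dominant (driving) axis
        err = 2 * d - d[axis]               # error terms for the other axes
        voxels, p = [], p0.copy()
        for _ in range(d[axis] + 1):
            voxels.append(p.copy())
            for i in range(3):
                if i != axis and err[i] > 0:
                    p[i] += step[i]
                    err[i] -= 2 * d[axis]
            err += 2 * d
            p[axis] += step[axis]
        return np.array(voxels)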
Such image sensors may then generate data files including such information, and store such data files in one or more onboard or accessible data stores (e.g., a hard drive or other like component), or in one or more removable data stores (e.g., flash memory devices). Such data files may also be printed, displayed on one or more broadcast or closed-circuit television networks, or transmitted over a computer network such as the Internet. An imaging device that is configured to capture and store visual imaging data (e.g., color images) is commonly called an RGB (“red-green-blue”) imaging device (or camera), while an imaging device that is configured to capture both visual imaging data and depth imaging data (e.g., ranges) is commonly referred to as an RGBD or RGBz imaging device (or camera). Imaging data files may be stored in any number of formats, including but not limited to .JPEG or .JPG files, or Graphics Interchange Format (or “.GIF”), Bitmap (or “.BMP”), Portable Network Graphics (or “.PNG”), Tagged Image File Format (or “.TIFF”) files, Audio Video Interleave (or “.AVI”), QuickTime (or “.MOV”), Moving Picture Experts Group (or “.MPG,” “.MPEG” or “.MP4”) or Windows Media Video (or “.WMV”) files. Reflected light may be captured or detected by an imaging device if the reflected light is within the device's field of view, which is defined as a function of a distance between a sensor and a lens within the device, viz., a focal length, as well as a location of the device and an angular orientation of the device's lens (see the field-of-view sketch following this passage). Accordingly, where an object appears within a depth of field, or a distance within the field of view where the clarity and focus are sufficiently sharp, an imaging device may capture light that is reflected off objects of any kind to a sufficiently high degree of resolution using one or more sensors thereof, and store information regarding the reflected light in one or more data files. Many imaging devices also include manual or automatic features for modifying their respective fields of view or orientations. For example, a digital camera may be configured in a fixed position, or with a fixed focal length (e.g., fixed-focus lenses) or angular orientation. Alternatively, an imaging device may include one or more actuated or motorized features for adjusting a position of the imaging device, or for adjusting either the focal length (e.g., a zoom level of the imaging device) or the angular orientation (e.g., the roll angle, the pitch angle or the yaw angle), by causing a change in the distance between the sensor and the lens (e.g., optical zoom lenses or digital zoom lenses), a change in the location of the imaging device, or a change in one or more of the angles defining the angular orientation. For example, an imaging device may be hard-mounted to a support or mounting that maintains the device in a fixed configuration or angle with respect to one, two or three axes. Alternatively, however, an imaging device may be provided with one or more motors and/or controllers for manually or automatically operating one or more of the components, or for reorienting the axis or direction of the device, i.e., by panning or tilting the device. Panning an imaging device may cause a rotation within a horizontal plane or about a vertical axis (e.g., a yaw), while tilting an imaging device may cause a rotation within a vertical plane or about a horizontal axis (e.g., a pitch).
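The dependence of the field of view on the distance between the sensor and the lens, noted above, follows the standard pinhole relation; the sensor width and focal lengths below are hypothetical:

```python
import math

def angular_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angular field of view of an idealized pinhole camera."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

print(angular_fov_deg(36.0, 50.0))  # ~39.6 degrees
print(angular_fov_deg(36.0, 24.0))  # ~73.7 degrees: a shorter focal length widens the view
```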
Additionally, an imaging device may be rolled, or rotated about its axis of rotation, and within a plane that is perpendicular to the axis of rotation and substantially parallel to a field of view of the device. Furthermore, some imaging devices may digitally or electronically adjust an image identified in a field of view, subject to one or more physical or operational constraints. For example, a digital camera may virtually stretch or condense the pixels of an image in order to focus or broaden the field of view of the digital camera, and also translate one or more portions of images within the field of view. Some imaging devices having optically adjustable focal lengths or axes of orientation are commonly referred to as pan-tilt-zoom (or “PTZ”) imaging devices, while imaging devices having digitally or electronically adjustable zooming or translating features are commonly referred to as electronic PTZ (or “ePTZ”) imaging devices. Information and/or data regarding features or objects expressed in imaging data, including colors, textures, outlines or other aspects of the features or objects, may be extracted from the data in any number of ways. For example, colors of image pixels, or of groups of image pixels, in a digital image may be determined and quantified according to one or more standards, e.g., the RGB color model, in which the portions of red, green or blue in an image pixel are expressed in three corresponding numbers ranging from 0 to 255 in value, or a hexadecimal model, in which a color of an image pixel is expressed in a six-character code, or #NNNNNN, wherein each of the characters N has a range of sixteen digits (i.e., the numbers 0 through 9 and the letters A through F). The first two characters NN of the hexadecimal model refer to the portion of red contained in the color, the second two characters NN refer to the portion of green contained in the color, and the third two characters NN refer to the portion of blue contained in the color. For example, the colors white and black are expressed according to the hexadecimal model as #FFFFFF and #000000, respectively, while the color National Flag Blue is expressed as #3C3B6E (a short conversion sketch follows this passage). Any means or model for quantifying a color or color schema within an image or photograph may be utilized in accordance with the present disclosure. Moreover, textures or features of objects expressed in a digital image may be identified using one or more computer-based methods, such as by identifying changes in intensities within regions or sectors of the image, or by defining areas of an image corresponding to specific surfaces. Furthermore, edges, contours, outlines, colors, textures, silhouettes, shapes or other characteristics of objects, or portions of objects, expressed in still or moving digital images may be identified using one or more algorithms or machine-learning tools. The objects or portions of objects may be stationary or in motion, and may be identified at single, finite periods of time, or over one or more periods or durations. Such algorithms or tools may be directed to recognizing and marking transitions (e.g., the edges, contours, outlines, colors, textures, silhouettes, shapes or other characteristics of objects or portions thereof) within the digital images as closely as possible, and in a manner that minimizes noise and disruptions, and does not create false transitions.
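As a concrete illustration of the hexadecimal color model described above, a short conversion sketch whose example values match those given in the text:

```python
def rgb_to_hex(r, g, b):
    """Encode 0-255 red, green and blue portions as a #NNNNNN hexadecimal code."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(code):
    """Decode a #NNNNNN code back into its red, green and blue portions."""
    return tuple(int(code[i:i + 2], 16) for i in (1, 3, 5))

assert rgb_to_hex(255, 255, 255) == "#FFFFFF"   # white
assert rgb_to_hex(0, 0, 0) == "#000000"         # black
assert hex_to_rgb("#3C3B6E") == (60, 59, 110)   # National Flag Blue
```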
Some detection algorithms or techniques that may be utilized in order to recognize characteristics of objects or portions thereof in digital images in accordance with the present disclosure include, but are not limited to, Canny edge detectors or algorithms; Sobel operators, algorithms or filters; Kayyali operators; Roberts edge detection algorithms; Prewitt operators; Frei-Chen methods; or any other algorithms or techniques that may be known to those of ordinary skill in the pertinent arts. For example, objects or portions thereof expressed within imaging data may be associated with a label or labels according to one or more machine-learning classifiers, algorithms or techniques, including but not limited to nearest neighbor methods or analyses, artificial neural networks, factorization methods or techniques, K-means clustering analyses or techniques, similarity measures such as log likelihood similarities or cosine similarities, latent Dirichlet allocations or other topic models, or latent semantic analyses. The systems and methods of the present disclosure are directed to tracking objects in 3D space based on visual images (e.g., RGB images) and depth images (e.g., depth or ranging data), or any other type of images, that are captured using multiple imaging devices having fields of view that overlap on a scene, such as interior or exterior portions of a materials handling facility, a transportation center, a financial institution, or any other environment. In particular, some implementations of the present disclosure are directed to establishing an initial position of a target object bounded by a 3D bounding region, or a “seed box,” for the target object, and capturing visual images and depth images using multiple RGB and/or RGBD imaging devices that are synchronized. The 3D bounding region or seed box may be selected or defined in any manner, and may take any shape or form. For example, in some implementations, the 3D bounding region may be a region such as a cube or a sphere having a volume defined by a fixed rectangular or axial distance (e.g., ten centimeters) from a position and/or one or more axes. Alternatively, in some implementations, the 3D bounding region need not take the shape of a “box,” e.g., a rectangularly shaped volume, and may have any other shape or volume, such as round, cylindrical, pyramidal, oblong, or any other regular or irregular shape or volume. The 3D bounding region may also be established in any manner, including based on one or more triggering events, e.g., in response to the lifting of an item from a shelf, the placement of an item onto a shelf, the entry or departure of a human actor (or a non-human actor or machine), or any other event that may be captured, sensed or recognized in information or data obtained by any type of sensor. Thereafter, a synchronization point may be set for each of the imaging devices of the active set, e.g., by advancing each of the imaging devices one frame forward, and selecting the synchronization point as the earliest point in time among the next frames. A next location for the target object, e.g., within a next position of the 3D bounding region, may be predicted, and each of the visual images may be subsequently processed in order to determine which of the imaging devices include the target object within their respective fields of view. An active set of such imaging devices, e.g., an active cameras set, may be defined accordingly.
For example, a measure of an extent to which the 3D bounding region appears within each of the fields of view may be calculated based on one or more visual images, and a measure of an extent to which the 3D bounding region is occluded within each of the fields of view may be calculated based on one or more depth images. In some implementations, and for one or more (e.g., each) of the imaging devices of the active set, the presence of the target object within successive image frames may be determined using a tracking algorithm, such as an OpenCV tracker or a KCF tracker, which is used to predict a position of the target object in a next image frame based on a position of the target object in a 2D representation of the 3D bounding region in a previous frame. A 3D position of the target object is determined based on each of the positions of the target object predicted from image frames captured from the imaging devices of the active set, i.e., based on scores assigned to pixels that appear in visual images captured from more than one imaging device, or probabilities that such pixels correspond to the target object, and by projecting such pixels onto points of a point cloud generated from one or more depth images. The 3D position of the target object may be identified and/or represented in any manner, such as by coordinates of a center or centroid of the target object, coordinates of each of a set or a collection of points corresponding to the target object (e.g., in 3D space), or coordinates of one or more representative points corresponding to the target object. In some such implementations, pixels having the highest scores or the highest probabilities may be identified and selected, e.g., according to a voting algorithm, and a 3D position of the target object at a given time may be determined based on positions of one or more points of a point cloud corresponding to such pixels. The 3D position may then be back-projected into each camera, and used to further train the tracking algorithm to better recognize the target object in subsequently captured visual image frames (a minimal sketch of this back-projection follows this passage). Therefore, by using visual images and depth images to determine positions in 3D space, and training tracking algorithms to recognize objects based on such determined positions, some implementations of the systems and methods of the present disclosure may improve upon the computer-based tracking of target objects, thereby solving a fundamental computer vision problem. Visual images that are captured at a synchronization point and have occluded views of a target object may be identified using depth images captured at the same synchronization point and excluded from consideration, thereby enhancing the efficiency of such processes in subsequently captured frames. The tracking of the target object may continue until a “stopping condition” is reached, e.g., until the target object is not visible within the fields of view of any imaging device, or until a point cloud insufficiently depicts the target object therein. The systems and methods of the present disclosure may be utilized in any number of applications in which the tracking of objects appearing within multiple views is desired, including but not limited to tracking the motion of humans and/or machines within a materials handling facility, or tracking the motion of any other objects within a materials handling facility.
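A minimal sketch of that back-projection-for-training step, assuming each active camera has a known calibration (an intrinsic matrix K and a world-to-camera rotation R and translation t); the calibration values below are placeholders rather than values from any particular deployment:

```python
import numpy as np

def project_to_camera(point_3d, K, R, t):
    """Project a 3D world point into pixel coordinates for one camera.
    K is the 3x3 intrinsic matrix; R and t map world to camera coordinates."""
    p_cam = R @ point_3d + t        # world frame -> camera frame
    uvw = K @ p_cam                 # camera frame -> homogeneous pixel coords
    return uvw[:2] / uvw[2]

# Placeholder calibration: identity rotation, camera 2 m behind the origin.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
pixel = project_to_camera(np.array([0.1, -0.05, 0.0]), K, R, t)
# The projected pixel (plus a margin) can label a positive training example
# for this camera's 2D tracking algorithm in the next frame.
```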
As used herein, the term “materials handling facility” may include, but is not limited to, warehouses, distribution centers, cross-docking facilities, order fulfillment facilities, packaging facilities, shipping facilities, rental facilities, libraries, retail stores or establishments, wholesale stores, museums, or other facilities or combinations of facilities for performing one or more functions of material or inventory handling for any purpose. For example, in some implementations, one or more of the systems and methods disclosed herein may be used to track body parts of human operators (e.g., customers, workers or other personnel) and/or machines (e.g., autonomous mobile robots) and to recognize their respective interactions within a materials handling facility, including but not limited to interactions with one or more items (e.g., consumer goods) or other inanimate objects within the materials handling facility. Such systems and methods may also be utilized to track objects within transportation centers, financial institutions or like structures in which diverse collections of people, objects or machines enter and exit from such environments at regular or irregular times or on predictable or unpredictable schedules. By relying on multiple imaging devices, motion of an object may be tracked even when the object is occluded in one or more of the fields of view of the imaging devices. Referring to FIG. 2, a block diagram of components of one system 200 for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure is shown. The system 200 includes a marketplace 210, a materials handling facility 220 and a customer (or actor, or worker, or another human operator) 280 that are connected to one another across a network 290, such as the Internet. Except where otherwise noted, reference numerals preceded by the number “2” shown in FIG. 2 indicate components or features that are similar to components or features having reference numerals preceded by the number “1” shown in FIGS. 1A through 1I. The marketplace 210 may be any entity or individual that wishes to make items from a variety of sources (e.g., manufacturers, merchants, sellers or vendors) available for download, purchase, rent, lease or borrowing by customers using a networked computer infrastructure, including one or more physical computer servers 212 and data stores (e.g., databases) 214 for hosting a network site 216 (e.g., a web site). The marketplace 210 may be physically or virtually associated with one or more materials handling facilities, including but not limited to the materials handling facility 220. The network site 216 may be implemented using the one or more servers 212, which connect or otherwise communicate with the one or more data stores 214 as well as the network 290, as indicated by line 218, through the sending and receiving of digital data. The servers 212 may cause the display of information associated with the network site 216 in any manner, e.g., by transmitting code, such as Hypertext Markup Language (HTML), over the network 290 to another computing device or resource that may be configured to generate and render the information into one or more pages or to cause a display of such pages on a computer display of any kind. Moreover, the data stores 214 may include any type of information regarding items that have been made available for sale through the marketplace 210, or ordered by customers (e.g., the customer 280) from the marketplace 210.
The servers 212 may further execute any type of computer-based function or compute any type or form of calculation, including but not limited to any formulas, equations, algorithms or techniques for determining one or more probabilities or performing any number of statistical tests. The materials handling facility 220 may be any facility that is adapted to receive, store, process and/or distribute items on behalf of the marketplace 210. The materials handling facility 220 may be configured to receive any type or kind of inventory items from various sources, to store the inventory items until a user orders or retrieves one or more of the items, or to distribute the inventory items to the user. For example, inventory items such as merchandise, commodities, perishables or any other type of item may be received from one or more suppliers, e.g., manufacturers, distributors, wholesalers, vendors or the like, at the materials handling facility 220. Upon their arrival at the materials handling facility 220, the inventory items may be prepared for storage, such as by unpacking or otherwise rearranging the inventory items, and updating one or more records to reflect the types, quantities, conditions, costs, locations or any other parameters associated with the arrival of the inventory items. Subsequently, the inventory items may be stocked, managed or dispensed in terms of countable, individual units or multiples of units, such as packages, cartons, crates, pallets or other suitable aggregations. Alternatively, one or more of the items, such as bulk products, commodities, or the like, may be stored in continuous or arbitrarily divisible amounts that may not be inherently organized into countable units, and may instead be managed in terms of measurable quantities such as units of length, area, volume, weight, time duration or other dimensional properties characterized by units of measurement. Inventory items may be stored within an inventory area on an inventory shelf, a storage facility or another like unit, such as in bins, on shelves or via other suitable storage mechanisms. The inventory shelves, storage facilities or like units may be flat or angled, stationary or mobile, and of any shape or size. In some implementations, all inventory items of a given type or kind may be stored in a common location within an inventory area. In other implementations, like inventory items may be stored in different locations. For example, to optimize the retrieval of inventory items having high turnover rates or velocities within a large materials handling facility, such inventory items may be stored in several different locations to reduce congestion that might be encountered if the items are stored at a single location. When a request or an order specifying one or more of the inventory items is received, or as a user progresses through the materials handling facility 220, inventory items that are listed in the request or order, or are desired by the user, may be selected or “picked” from an inventory area at the materials handling facility 220. For example, in one implementation, a customer or other user may travel through the materials handling facility 220 with a list (e.g., a paper list, or a handheld mobile device displaying or including such a list) and may pick one or more of the inventory items from an inventory area at the materials handling facility 220. 
In other implementations, an employee of the materials handling facility 220 or another user may pick one or more inventory items, as may be directed by one or more written or electronic pick lists derived from orders. In some instances, an inventory item may be retrieved and delivered to a customer or another user who placed a request for the inventory item. In other instances, the inventory item may require repositioning from one location within an inventory area to another location. For example, in some instances, an inventory item may be picked from a first location (e.g., a first inventory shelf or other storage facility) in an inventory area, moved a distance, and placed at a second location (e.g., a second inventory shelf or other storage facility) in the inventory area. As is shown in FIG. 2, the materials handling facility 220 includes a networked computer infrastructure for performing various computer-related functions associated with the receipt, storage, processing and distribution of such items, including one or more physical computer servers 222, databases (or other data stores) 224 and processors 226, which may be provided in the same physical location as the materials handling facility 220, or in one or more alternate or virtual locations, e.g., in a “cloud”-based environment. In some implementations, the servers 222, the data stores 224 and/or the processors 226 or any number of other computing devices or resources may further execute any type of computer-based function or compute any type or form of calculation, including but not limited to any formulas, equations, algorithms or techniques for determining one or more probabilities or performing any number of statistical tests. Such computer devices or resources may also operate or provide access to one or more reporting systems for receiving or displaying information or data regarding workflow operations, and may provide one or more interfaces for receiving interactions (e.g., text, numeric entries or selections) from one or more operators, users or workers in response to such information or data. Such computer devices or resources may be general purpose devices or machines, or dedicated devices or machines that feature any form of input and/or output peripherals such as scanners, readers, keyboards, keypads, touchscreens, voice interaction or recognition components or modules, or like devices, and may further operate or provide access to one or more engines for analyzing the information or data regarding the workflow operations, or the interactions received from the one or more operators, users or workers. The materials handling facility 220 may include one or more inventory areas having predefined two-dimensional or three-dimensional storage facilities for accommodating items and/or containers of such items, such as aisles, rows, bays, shelves, slots, bins, racks, tiers, bars, hooks, cubbies or other like storage means, or any other appropriate regions or stations, which may be flat or angled, stationary or mobile, and of any shape or size.
Additionally, as is discussed above, the materials handling facility 220 may further include one or more receiving stations featuring any apparatuses that may be required in order to receive shipments of items at the materials handling facility 220 from one or more sources and/or through one or more channels, including but not limited to docks, lifts, cranes, jacks, belts or other conveying apparatuses for obtaining items and/or shipments of items from carriers such as cars, trucks, trailers, freight cars, container ships or cargo aircraft (e.g., manned aircraft or unmanned aircraft, such as drones), and preparing such items for storage or distribution to customers. The materials handling facility 220 may further include one or more distribution stations where items that have been retrieved from a designated inventory area may be evaluated, prepared and packed for delivery from the materials handling facility 220 to addresses, locations or destinations specified by customers, also by way of carriers such as cars, trucks, trailers, freight cars, container ships or cargo aircraft (e.g., manned aircraft or unmanned aircraft, such as drones). Alternatively, an item received at a receiving station of the materials handling facility 220 may be transferred directly to a distribution station for further processing, or “cross-docked,” without first being placed into storage in an intermediate inventory area. The materials handling facility 220 may also include one or more additional stations for receiving and distributing items to customers, as well as one or more conveying systems, autonomous mobile robots, or other manual or automated vehicles for transporting items between such stations or areas (not shown in FIG. 2). The materials handling facility 220 may operate one or more order processing and/or communication systems using computer devices or resources in communication with one or more of the servers 222, the data stores 224 and/or the processors 226, or through one or more other computing devices or resources that may be connected to the network 290, as is indicated by line 228, in order to transmit or receive information in the form of digital or analog data, or for any other purpose. The imaging devices 225-1, 225-2 . . . 225-n may be any form of optical recording device that may be used to photograph or otherwise record imaging data of structures, facilities or any other elements within the materials handling facility 220, as well as any items within the materials handling facility 220, or for any other purpose. The imaging devices 225-1, 225-2 . . . 225-n may be mounted in any specific location or orientation within the materials handling facility 220, e.g., above, below or alongside one or more inventory areas or stations for receiving or distributing items. The imaging devices 225-1, 225-2 . . . 225-n may include one or more sensors, memory or storage components and processors, and such sensors, memory components or processors may further include one or more photosensitive surfaces, filters, chips, electrodes, clocks, boards, timers or any other relevant features (not shown). The imaging devices 225-1, 225-2 . . . 225-n may capture imaging data in the form of one or more still or moving images of any kind or form, as well as any relevant audio signals or other information, within one or more designated locations within the materials handling facility 220. In some implementations, one or more of the imaging devices 225-1, 225-2 . . . 
225-n may be configured to capture depth imaging data, e.g., distances or ranges to objects within their respective fields of view. In some implementations, one or more of the imaging devices 225-1, 225-2 . . . 225-n may be configured to capture visual imaging data, e.g., visual images or image frames in color, grayscale or black-and-white. For example, one or more of the imaging devices 225-1, 225-2 . . . 225-n may be an RGB color camera, a still camera, a motion capture/video camera or any other type or form of camera. In other implementations, one or more of the imaging devices 225-1, 225-2 . . . 225-n may be depth-sensing cameras, such as RGBD (or RGBz) cameras. In still other implementations, one or more of the imaging devices 225-1, 225-2 . . . 225-n may be a thermographic or infrared (IR) camera. Additionally, in some implementations, the imaging devices 225-1, 225-2 . . . 225-n may simply be camera modules that include a lens and an image sensor configured to convert an optical image obtained using the lens of the camera into a digital signal or digital representation of the image (generally referred to herein as imaging data). In one implementation, the image sensor may be an RGB sensor capable of supporting an image resolution of at least 860×480 at six frames per second that may likewise be configured to provide image data to other components (e.g., a graphics processing unit) for processing. In some implementations, the imaging devices 225-1, 225-2 . . . 225-n may be paired to provide stereo imagery and depth information, and may include a pair of camera modules. Additionally, imaging data may be stored in any variety of formats including, but not limited to, YUYV, RGB, RAW, .bmp, .jpeg, .gif, or the like. The imaging devices 225-1, 225-2 . . . 225-n may also include manual or automatic features for modifying their respective fields of view or orientations. For example, one or more of the imaging devices 225-1, 225-2 . . . 225-n may be configured in a fixed position, or with a fixed focal length (e.g., fixed-focus lenses) or angular orientation. Alternatively, one or more of the imaging devices 225-1, 225-2 . . . 225-n may include one or more motorized features for adjusting a position of the imaging device, or for adjusting either the focal length (e.g., zooming the imaging device) or the angular orientation (e.g., the roll angle, the pitch angle or the yaw angle), by causing changes in the distance between the sensor and the lens (e.g., optical zoom lenses or digital zoom lenses), changes in the location of the imaging devices 225-1, 225-2 . . . 225-n, or changes in one or more of the angles defining the angular orientation. For example, one or more of the imaging devices 225-1, 225-2 . . . 225-n may be hard-mounted to a support or mounting that maintains the device in a fixed configuration or angle with respect to one, two or three axes. Alternatively, however, one or more of the imaging devices 225-1, 225-2 . . . 225-n may be provided with one or more motors and/or controllers for manually or automatically operating one or more of the components, or for reorienting the axis or direction of the device, i.e., by panning or tilting the device. Panning an imaging device may cause a rotation within a horizontal plane or about a vertical axis (e.g., a yaw), while tilting an imaging device may cause a rotation within a vertical plane or about a horizontal axis (e.g., a pitch).
Additionally, an imaging device may be rolled, or rotated about its axis of rotation, and within a plane that is perpendicular to the axis of rotation and substantially parallel to a field of view of the device. Some of the imaging devices 225-1, 225-2 . . . 225-n may digitally or electronically adjust an image identified in a field of view, subject to one or more physical and operational constraints. For example, a digital camera may virtually stretch or condense the pixels of an image in order to focus or broaden the field of view of the digital camera, and also translate one or more portions of images within the field of view. Imaging devices having optically adjustable focal lengths or axes of orientation are commonly referred to as pan-tilt-zoom (or “PTZ”) imaging devices, while imaging devices having digitally or electronically adjustable zooming or translating features are commonly referred to as electronic PTZ (or “ePTZ”) imaging devices. Once the characteristics of stationary or moving objects or portions thereof have been recognized in one or more digital images, such characteristics of the objects or portions thereof may be matched against information regarding edges, contours, outlines, colors, textures, silhouettes, shapes or other characteristics of known objects, which may be stored in one or more data stores. In this regard, stationary or moving objects may be classified based at least in part on the extent to which the characteristics identified in one or more digital images correspond to one or more of the characteristics of the known objects. The operability of networks including one or more of the imaging devices 225-1, 225-2 . . . 225-n, e.g., digital cameras, may be affected based on the lighting conditions and characteristics of the scenes in which the imaging devices 225-1, 225-2 . . . 225-n are deployed, e.g., whether such scenes have sufficient lighting at appropriate wavelengths, whether such scenes are occluded by one or more objects, or whether such scenes are plagued by shadows or other visual impurities. The operability may also depend on the characteristics of the objects within the scenes, including variations, reflectances or deformations of their respective surfaces, as well as their sizes or textures. Although the materials handling facility 220 of FIG. 2 includes boxes corresponding to three imaging devices 225-1, 225-2 . . . 225-n, those of ordinary skill in the pertinent arts will recognize that any number or type of imaging devices may be provided within a materials handling facility in accordance with the present disclosure, including but not limited to digital cameras, depth sensors or range cameras, infrared cameras, radiographic cameras or other optical sensors. The materials handling facility 220 may also include any number of other sensors, components or other features for controlling or aiding in the operation of the materials handling facility 220, including but not limited to one or more thermometers, barometers, hygrometers, gyroscopes, air monitoring sensors (e.g., oxygen, ozone, hydrogen, carbon monoxide or carbon dioxide sensors), ozone monitors, pH sensors, magnetic anomaly detectors, metal detectors, radiation sensors (e.g., Geiger counters, neutron detectors, alpha detectors), laser sensors, weight sensors, attitude indicators, depth gauges, accelerometers, or sound sensors (e.g., microphones, piezoelectric sensors, vibration sensors or other transducers for detecting and recording acoustic energy from one or more directions). 
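The panning, tilting and rolling adjustments described in this and the preceding passages correspond to rotations about three orthogonal axes. One possible convention (axis assignments vary across implementations) may be sketched as follows:

```python
import numpy as np

def yaw(a):    # panning: rotation about a vertical axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def pitch(a):  # tilting: rotation about a horizontal axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def roll(a):   # rolling: rotation about the device's axis of orientation
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# An angular orientation composed from the three angles (in radians):
R = yaw(np.radians(30.0)) @ pitch(np.radians(-10.0)) @ roll(np.radians(5.0))
```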
The customer 280 may be any entity or individual that wishes to manually or automatically retrieve, evaluate and/or purchase one or more items maintained in an inventory area of the materials handling facility 220, or to download, purchase, rent, lease, borrow or otherwise obtain items (e.g., goods, products, services or information of any type or form) from the marketplace 210. The customer 280 may utilize one or more computing devices, such as a smartphone 282 or any other like machine that may operate or access one or more software applications, such as a web browser (not shown) or a shopping application 284, and may be connected to or otherwise communicate with the marketplace 210, or the materials handling facility 220 through the network 290, as indicated by line 288, by the transmission and receipt of digital data. For example, the customer 280 may use the smartphone 282 or another like client device to interact with one or more computer devices and/or input/output devices within the materials handling facility 220, and for any purpose. Moreover, the customer 280 may retrieve items from the materials handling facility 220, and also receive deliveries or shipments of one or more items from facilities maintained by or on behalf of the marketplace 210, such as the materials handling facility 220. Alternatively, or in addition to the customer 280, the materials handling facility 220 may also include one or more human operators (not shown), such as one or more workers, who may be any designated personnel tasked with performing one or more tasks within the materials handling facility 220 in general, or within one or more inventory areas, receiving stations, distribution stations or other locations of the materials handling facility 220 in particular. Such workers may handle or transport items (e.g., any type or form of good, product, media or other tangible consumer article) within the materials handling facility 220, or operate one or more pieces of equipment therein (not shown). The workers may also operate one or more specific computing devices or resources for registering the receipt, retrieval, transportation or storage of items within the materials handling facility 220, or a general purpose device such as a personal digital assistant, a digital media player, a smartphone, a tablet computer, a desktop computer or a laptop computer (not shown), which may include any form of input and/or output peripherals such as scanners, readers, keyboards, keypads, touchscreens or like devices. In some implementations, such devices may include one or more wireless modules to facilitate communications with the server 222, with one or more of the imaging devices 225-1, 225-2 . . . 225-n, or with one or more computer devices or resources, such as the server 212 or the smartphone 282, over the network 290, as well as a display (e.g., a touchscreen display) to facilitate the visible presentation to and interaction with a human operator. Such devices may be configured to store a unique identifier associated with a given human operator, and provide the unique identifier to the server 222 or to another computer device or resource in order to identify the human operator.
In some implementations, a portable device may also include one or more other features, e.g., audio input/output peripherals or accessories, such as speakers or microphones, as well as video input/output peripherals or accessories, such as cameras, projectors, haptic peripherals, accessories such as keyboards, keypads, touchscreens, joysticks, control buttons, or other components. Such portable devices may operate in conjunction with or may otherwise utilize or communicate with one or more components of the materials handling facility 220. The computers, servers, devices and other resources described herein have the necessary electronics, software, memory, storage, databases, firmware, logic/state machines, microprocessors, communication links, displays or other visual or audio user interfaces, printing devices, and any other input/output interfaces to provide any of the functions or services described herein and/or achieve the results described herein. Also, those of ordinary skill in the pertinent arts will recognize that users of such computers, servers, devices and the like may operate a keyboard, keypad, mouse, stylus, touch screen, or other device (not shown) or method (e.g., speech recognition or gesture recognition devices or techniques) to interact with the computers, servers, devices and the like, or to “select” an item, link, node, hub or any other aspect of the present disclosure. Those of ordinary skill in the pertinent arts will understand that process steps described herein as being performed by a “marketplace,” a “materials handling facility,” or a “customer” (or “human operator” or “user”) or like terms, may be automated steps performed by their respective computer devices or resources, or implemented within software modules (or computer programs) executed by one or more general purpose computers. Those of ordinary skill in the pertinent arts would also recognize that process steps described as being performed by a “marketplace,” a “fulfillment center,” or a “customer” (or “human operator” or “user”) may typically be performed by a human, but could, alternatively, be performed by an automated agent. The marketplace 210, the materials handling facility 220 and/or the customer 280 may use any web-enabled or Internet applications or features, or any other client-server applications or features including electronic mail (or E-mail), or other messaging techniques, to connect to the network 290 or to communicate with one another, such as through short or multimedia messaging service (SMS or MMS) text messages. For example, the server 222 may be adapted to transmit information or data in the form of synchronous or asynchronous messages from the materials handling facility 220 to the server 212, the smartphone 282 or any other computer device (e.g., any device having any number of other servers, data stores, processors or the like) in real time or in near-real time, or in one or more offline processes, via the network 290. Those of ordinary skill in the pertinent arts would recognize that the marketplace 210, the materials handling facility 220 or the customer 280 may operate any of a number of computing devices or resources that are capable of communicating over the network 290, including but not limited to set-top boxes, personal digital assistants, digital media players, web pads, laptop computers, desktop computers, electronic book readers, and the like.
The protocols and components for providing communication between such devices are well known to those skilled in the art of computer communications and need not be described in more detail herein. The data and/or computer executable instructions, programs, firmware, software and the like (also referred to herein as “computer executable” components) described herein may be stored on a transitory and/or non-transitory computer-readable medium that is within or accessible by computers or computer components such as the server 212, the server 222, the imaging devices 225-1, 225-2 . . . 225-n or the smartphone 282, or any other computers or control systems utilized by the marketplace 210, the materials handling facility 220 or the customer 280 and having sequences of instructions which, when executed by a processor (e.g., a central processing unit, or “CPU”), cause the processor to perform all or a portion of the functions, services and/or methods described herein. Such computer executable instructions, programs, software and the like may be loaded into the memory of one or more computers using a drive mechanism associated with the computer readable medium, such as a floppy drive, CD-ROM drive, DVD-ROM drive, network interface, or the like, or via external connections. Some implementations of the systems and methods of the present disclosure may also be provided as a computer executable program product including a non-transitory machine-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform processes or methods described herein. The machine-readable storage medium may include, but is not limited to, hard drives, floppy diskettes, optical disks, CD-ROMs, DVDs, ROMs, RAMs, erasable programmable ROMs (“EPROM”), electrically erasable programmable ROMs (“EEPROM”), flash memory, magnetic or optical cards, solid-state memory devices, or other types of media/machine-readable media that may be suitable for storing electronic instructions. Further, implementations may also be provided as a computer executable program product that includes a transitory machine-readable signal (in compressed or uncompressed form). Examples of machine-readable signals, whether modulated using a carrier or not, may include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, including signals that may be downloaded through the Internet or other networks. The present disclosure references a number of computer-based functions or tasks that may be executed by one or more computer processors, systems or resources. In some implementations, each of such functions or tasks may be executed by processors associated with an imaging device, or two or more imaging devices, which may control one or more aspects of the capture, processing and/or storage of imaging data. In some other implementations, each of such functions or tasks may be executed by processors that are external to an imaging device, such as in one or more other physical, alternate or virtual locations, e.g., in a “cloud”-based environment. In still other implementations, such functions or tasks may be executed in a distributed manner, such as by computer processors, systems or resources in two or more distributed locations.
For example, some of such functions or tasks may be executed by processors associated with one or more imaging devices, while other functions or tasks may be executed by processors located in one or more other physical, alternate or virtual locations. Referring to FIG. 3, a flow chart 300 of one process for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure is shown. At box 310, a seed box is defined based on an initial position of a target object, e.g., at a seed time. The initial position of the target object may be determined as an output from an algorithm or on any basis, e.g., based on a single depth image, a single visual image, or one or more depth images or visual images, and used to determine a seed box, e.g., an initial bounding region, or a baseline position of a hypothetical 3D bounding region, from which tracking of a target object in a scene may be initiated. Additionally, as is discussed above, an initial position (or subsequent positions) of a target object may be identified and/or represented in any manner, such as by coordinates of a center or centroid of the target object, coordinates of each of a set or a collection of points corresponding to the target object (e.g., in 3D space), or coordinates of a representative point of the target object, which may or may not be a center or centroid of the target object. A seed box may also be defined with respect to the initial position at a seed time based on any intrinsic or extrinsic data that may be known regarding one or more objects (e.g., candidate target objects), including but not limited to locations, dimensions, shapes, or other aspects of such objects. For example, the seed box may be defined based on visual imaging data (e.g., colors, textures, outlines or other aspects of objects), depth imaging data (e.g., a point cloud), or any other information or data captured, sensed or detected regarding the scene, the target object, or the one or more imaging devices (e.g., in response to one or more triggering events). At box 315, a tracklet for the target object (or a set of positions of the target object) is set to “empty,” or reset, at a seed time associated with the initial position and the bounding region. At box 320, two or more visual cameras and two or more depth cameras that are aligned to capture imaging data from the scene from unique fixed perspectives are synchronized at the seed time. Alternatively, such cameras may be synchronized to a given frame, or a time when such frames are captured by one or more of the cameras, i.e., to an earliest such time among the cameras. In some implementations, the visual cameras and the depth cameras may be distinct systems or components. In some other implementations, however, the visual cameras and the depth cameras may be components of a common system, e.g., an RGBD camera. At box 325, an initial point cloud is defined from depth image frames captured from the scene using one or more of the depth cameras. For example, where a depth image containing information relating to distances of surfaces of objects within a scene from a perspective of a depth camera is captured, the depth image may be converted into a 3D representation of the physical topography of the scene from that perspective using ranging information for one or more of the pixels provided in the depth image and parameters of the depth camera, e.g., a set of coordinates of the imaging sensor or other components of the depth camera. 
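A minimal sketch of that depth-image-to-point-cloud conversion, assuming a simple pinhole model with hypothetical intrinsics and no lens distortion; note that the gridded output is an ordered point cloud in the sense discussed earlier:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image (e.g., metres per pixel) into an ordered point
    cloud: the (h, w, 3) output preserves the image grid, so neighbor
    relationships between points are retained. Reshaping to (-1, 3) and
    dropping zero-depth returns yields an unordered form."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))

cloud = depth_to_point_cloud(np.full((480, 640), 2.0), 600.0, 600.0, 320.0, 240.0)
points = cloud.reshape(-1, 3)  # unordered form of the same data
```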
Two or more depth images captured using RGBD cameras from different perspectives may be further utilized to enhance the quality of the 3D representation of the scene. At box 330, visual cameras having the target object in view within visual image frames captured from the scene are determined. For example, where the 3D bounding region has been defined at box 310, an extent to which a 2D projection of the 3D bounding region appears within the fields of view of each of the imaging devices is determined. At box 332, the extent to which pixels corresponding to the target object are occluded (or not occluded) within the 2D projection of the 3D bounding region is determined, e.g., by comparing depth data for point cloud points corresponding to the target object to depth data for other scene points within a frustum spanned by the 3D bounding region. At box 334, an active cameras set including the visual cameras that hold the target object sufficiently in view at an initial synchronization point is built. The active cameras set may be built by comparing color images captured by the visual cameras and/or the depth cameras to the initial point cloud defined at box 325, to determine whether any of the visual images are occluded by one or more features represented in the point cloud. For example, referring again to FIGS. 1C, 1E, 1F and 1G, the left hand 183-2 of the operator 180-2 is occasionally occluded from the fields of view of one or more of the imaging devices 125-2, 125-5, 125-8 when one or more objects are located between the left hand 183-2 and the optical sensors or other elements of the imaging devices 125-2, 125-5, 125-8, such as when the left hand 183-2 is extended into the shelving unit 170-2, or when the operator 180-2 turns away from one or more of the imaging devices 125-2, 125-5, 125-8 or leans over the cart 185-2 and inserts the left hand 183-2 therein. At box 336, the appearance of the target object within the fields of view of the active cameras set is learned, for example, based at least in part on the extent to which the target object appears within visual image frames captured by the respective cameras, and one or more of the colors, textures, outlines or other features of the target object, e.g., according to one or more object recognition and/or tracking algorithms. At box 338, a value of a step variable i is set at one. At box 340, the target object is recognized within one or more of the image frames captured at the synchronization point i using one or more tracking algorithms, and at box 350, a hypothetical position of the target object in each of the fields of view of the visual cameras of the active cameras set is predicted based on the visual image frames captured at the synchronization point i. For example, the target object may be recognized using one or more tracking algorithms, such as an OpenCV tracker, a KCF tracker, or any other tracker, which may operate in any manner, such as by recognizing differences between background pixels (e.g., stationary features) and foreground pixels (e.g., dynamic features) in an image, and associating foreground pixels with the target object. A hypothetical position of the target object within any given image may be defined in any manner, including but not limited to coordinates of one or more of the pixels corresponding to the target object within an image, which may be determined in any manner.
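The recognition step at boxes 340 and 350 might, by way of example, be implemented with an off-the-shelf tracker such as OpenCV's KCF tracker; the sketch below assumes the opencv-contrib-python package, a hypothetical per-camera video file, and a hypothetical initial bounding box derived from the 2D projection of the seed box:

```python
import cv2  # the KCF tracker ships with the opencv-contrib-python package

video = cv2.VideoCapture("camera_425_2.mp4")   # hypothetical per-camera feed
ok, frame = video.read()

tracker = cv2.TrackerKCF_create()
bbox = (300, 220, 80, 120)  # (x, y, w, h): 2D projection of the 3D seed box
tracker.init(frame, bbox)

while True:
    ok, frame = video.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)  # hypothetical 2D position in this view
    if not found:
        break  # target lost: drop this camera from the active cameras set
```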
Once the target object has been recognized in each of such image frames, a position of the target object may be predicted within such image frames, e.g., by triangulation, or by converting detections of the target object in such image frames to three-dimensional rays that begin at optical elements of the respective imaging devices and terminate at surfaces of objects within the given pixels, based on the intrinsic properties of such imaging devices and extrinsic properties of a scene, and merging the detections into one, e.g., by triangulating the rays corresponding to such detections (a least-squares sketch of this merging step follows this passage). At box 360, the point cloud is updated based on the depth image frames captured at the synchronization point i. For example, differences between the depth image frames from which the initial point cloud was defined and the depth image frames captured at the synchronization point i may be identified and used to update the initial point cloud. Alternatively, the initial point cloud may be replaced with a point cloud generated based on the depth image frames captured at the synchronization point i. At box 365, the hypothetical positions predicted at box 350 based on the visual image frames captured at the synchronization point i and the updated point cloud defined at box 360 based on the depth image frames captured at the synchronization point i are combined to determine a new target position of the target object in 3D space at the synchronization point i according to triangulation and/or an analysis of the updated point cloud. For example, the colors, textures, outlines or other features of the target object identified in the visual images may be projected in 3D space and correlated with the updated point cloud, to determine a position of the target object at the synchronization point i. A voting algorithm may be used to identify pixels in any of the visual images having sufficiently high levels of responsiveness from the tracking algorithm and to extend voting line segments therefrom, and regions where such voting line segments intersect may be assigned points or scores, with a highest number of points or scores corresponding to a most likely position of the target object in 3D space. For example, in some implementations, voting line segments may be discretized using a 3D Bresenham algorithm or any other technique, and response values may be assigned to corresponding volumetric bins. The new target position of the target object in 3D space at synchronization point i may be represented in any manner, such as by coordinates of a center or centroid of the target object, coordinates of each of a set or a collection of points corresponding to the object (e.g., in 3D space), or coordinates of a representative point of the target object, which may or may not be a center or centroid of the target object. The position of the target object in 3D space at synchronization point i, as determined at box 365, may then be returned to each of the imaging devices that predicted the position of the target object within visual image frames captured at synchronization point i at box 350, and used to train the tracking algorithms to predict locations of the target object in subsequently captured visual frames. At box 370, the position of the target object in 3D space at synchronization point i determined at box 365 is added to the tracklet. At box 380, whether a stopping condition has been realized is determined.
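A minimal sketch of the least-squares merging of detections referenced above, in which each detection is converted to a ray from a camera's optical center; the ray origins and directions below are hypothetical:

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares 3D point nearest to a set of rays, where each ray begins
    at a camera's optical center and passes through a detected pixel.

    origins:    (N, 3) ray start points
    directions: (N, 3) unit direction vectors
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to this ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two detections of the same target from cameras at two different positions:
o = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
d = np.array([[0.0, 0.0, 1.0], [-1.0, 0.0, 1.0]])
d = d / np.linalg.norm(d, axis=1, keepdims=True)
print(triangulate_rays(o, d))  # ~ [0., 0., 1.], where the two rays intersect
```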
A stopping condition may be, for example, an event that calls for an end to the tracking of the target object, such as when the object is no longer visible in any of the imaging devices at the scene (e.g., when the active cameras set does not include any cameras), when a maximum tracking time has elapsed, when a predetermined stopping time has been reached, when the updated point cloud does not include a sufficient number of points within a target volume corresponding to the 3D bounding region, or when an appearance of the target object is sufficiently different from an initial appearance of the target object, thereby indicating that the target object has changed, or that the target object is not the object that was originally tracked. If a stopping condition has been realized, then the process ends. If the stopping condition has not been realized, however, then the process advances to box 385, where the learned appearance of the target object within the fields of view of the active cameras set is updated, e.g., based on the combination of the visual image frames with the updated point cloud. At box 390, the value of the step variable i is incremented by one, and at box 395, the active cameras set is updated to include visual cameras having the target object sufficiently in view at the synchronization point i based on visual image frames and depth image frames captured at synchronization point i, before returning to box 350, where a hypothetical position of the target object in each of the visual cameras of the active cameras set is predicted. As is discussed above, where a plurality of imaging devices (e.g., one or more RGB cameras and one or more RGBD cameras) are aligned to capture imaging data from a scene that includes a target object, an active set of the imaging devices may be defined based on the imaging devices that include the target object within their respective fields of view, and the operation of the imaging devices may be synchronized accordingly, e.g., to simultaneous or nearly simultaneous operation, depending on the frame rates of the respective imaging devices of the active set. Referring to FIG. 4, a view of data captured using one system for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure is shown. Except where otherwise noted, reference numerals preceded by the number “4” shown in FIG. 4 indicate components or features that are similar to components or features having reference numerals preceded by the number “2” shown in FIG. 2 or by the number “1” shown in FIGS. 1A through 1I. As is shown in FIG. 4, a plurality of frames are captured by each of four imaging devices (e.g., cameras) 425-1, 425-2, 425-3, 425-4, including an RGBD camera 425-1, an RGB camera 425-2, an RGBD camera 425-3 and an RGB camera 425-4. The operation of the imaging cameras 425-1, 425-2, 425-3, 425-4 may be synchronized to times or points, e.g., specific frames or times of such frames captured using one or more of the cameras 425-1, 425-2, 425-3, 425-4. For example, each of the available imaging devices (e.g., cameras of an active cameras set) may be synchronized to a given time, such as a seed time associated with an establishment of an initial position of a target object, and subsequently to later times or an earliest time of a next frame captured by the cameras of the active cameras set.
For example, the RGBD camera 425-1 is configured to capture a plurality of visual image frames and a plurality of depth image frames at a rate of approximately forty-five frames per second (45 fps), while the RGB camera 425-2 is configured to capture a plurality of visual image frames at a rate of approximately thirty frames per second (30 fps). The RGBD camera 425-3 is configured to capture a plurality of visual image frames and a plurality of depth image frames at a rate of approximately forty frames per second (40 fps), while the RGB camera 425-4 is configured to capture a plurality of visual image frames at a rate of approximately sixty frames per second (60 fps). As is discussed above, the use of the cameras 425-1, 425-2, 425-3, 425-4 in tracking operations may be synchronized to a seed time, or a time associated with an initial position of a target object, and to later points or times associated with the subsequent capture of frames by active cameras of an active cameras set. For example, as is shown in FIG. 4, the RGBD camera 425-1 included a target object within its fields of view (e.g., its visual field of view and its depth field of view) in frames captured at a seed time t1, and in a next pair of frames, but did not include the target object within its fields of view in the next two pairs of frames captured thereby. The target object returned to the fields of view of the RGBD camera 425-1 in its next two pairs of frames. The RGB camera 425-2, meanwhile, included the target object within its field of view over the first four frames captured thereby. The RGBD camera 425-3 included the target object within its fields of view in frames captured at the seed time t1, but not in a next pair of frames. The target object returned to the fields of view of the RGBD camera 425-3 in its next three pairs of frames. Finally, the RGB camera 425-4 included the target object within its field of view in the first, fourth and eighth of the first eight frames captured thereby, and did not include the target object within its field of view in the second, third, fifth, sixth and seventh of the first eight frames captured thereby. The tracking of the target object by the cameras 425-1, 425-2, 425-3, 425-4 may be synchronized to the seed time t1, and to subsequent times when an earliest image frame is captured by the cameras of the active cameras set. For example, each of the cameras 425-1, 425-2, 425-3, 425-4 is deemed active at the seed time t1, and the tracking of the target object at that time may be synchronized accordingly. A next synchronization point may be selected based on the capture of the visual image frames and the depth image frames by the RGBD camera 425-1 at time t2, which is the earliest of the next image frames to be captured by an active camera, and the capture of the visual image frame by the RGB camera 425-2 slightly later than time t2 may be synchronized to time t2. Thereafter, a next synchronization point may be selected based on the capture of a visual image frame by the RGB camera 425-4 at time t3, and the capture of the visual image frame and the depth image frame by the RGBD camera 425-3 shortly thereafter may be synchronized to time t3. Similarly, a next synchronization point may be selected based on the capture of a visual image frame by the RGB camera 425-2 at time t4, and the capture of the visual image frame and the depth image frame by the RGBD camera 425-3 shortly thereafter may be synchronized to time t4. 
A next synchronization point may be selected based on the capture of the visual image frame and the depth image frame by the RGBD camera 425-1 at time t5, while a next synchronization point may be selected based on the capture of the visual image frame by the RGB camera 425-2 at time t6, and the capture of the visual image frame and the depth image frame by the RGBD camera 425-1, the visual image frame and the depth image frame by the RGBD camera 425-3 and the visual image frame by the RGB camera 425-4 may be synchronized to time t6. As is also discussed above, where a plurality of imaging devices (e.g., RGB cameras and RGBD cameras) are used to track objects in 3D space, an imaging device may be defined as “active” where the imaging device includes a target object clearly, and in an unoccluded manner, within its field of view. As long as the target object appears clearly, and without sufficient occlusion, within a field of view of at least one imaging device, and no other stopping condition is met, the target object may be tracked within the scene using the imaging devices. Referring to FIGS. 5A through 5D, views of aspects of one system 500 for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure are shown. Except where otherwise noted, reference numerals preceded by the number “5” shown in FIGS. 5A through 5D indicate components or features that are similar to components or features having reference numerals preceded by the number “4” shown in FIG. 4, by the number “2” shown in FIG. 2 or by the number “1” shown in FIGS. 1A through 1I. As is shown in FIGS. 5A and 5B, the system 500 includes a materials handling facility 520 having a pair of RGBD cameras 525-1, 525-2 and a shelving unit 570, which includes a pair of sets 572-1, 572-2 of shelves. The RGBD camera 525-1 is configured to capture imaging data (e.g., both visual images and depth images) regarding interactions with the shelving unit 570 from above the materials handling facility 520, while the RGBD camera 525-2 is configured to capture imaging data (e.g., both visual images and depth images) regarding interactions with the shelving unit 570 from a frontal perspective with respect to the sets 572-1, 572-2 of shelves. Whether an item is sufficiently represented within imaging data (e.g., visual image frames and/or depth image frames) captured by an imaging device, such as one of the imaging devices 525-1, 525-2 of FIGS. 5A and 5B, may be determined by calculating a portion or share of a 2D representation of a 3D bounding region having a target object therein that is visible within a field of view of the imaging device, as well as a portion or share of the pixels corresponding to the target object within the 2D representation of the 3D bounding region that are occluded from view by one or more other objects. For example, as is shown in FIG. 5C, a visual image 530-1 captured at time t1 using the imaging device 525-1, e.g., from a top view of the materials handling facility 520, depicts an operator 580 (e.g., a customer) using a hand 583 to interact with an item 585 (e.g., a medium-sized bottle) on one of the shelves 572-2 in the shelving unit 570. A visual image 530-2 captured at time t1 using the imaging device 525-2, e.g., from a front view of the shelving unit 570, also depicts the operator 580 interacting with the item 585 using the hand 583.
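The sufficiency determination described above reduces to two threshold tests: a minimum fraction of the 2D representation of the 3D bounding region visible within the field of view, and a maximum fraction of the target's pixels occluded. A minimal sketch follows, in which the threshold values are illustrative assumptions (the disclosure does not prescribe them); the percentages discussed next for the 2D boxes of FIGS. 5C and 5D are concrete instances.

def target_sufficiently_in_view(visible_box_fraction, occluded_pixel_fraction,
                                min_visible=0.5, max_occluded=0.5):
    # A camera's frame is usable for tracking when enough of the projected
    # bounding region lies within the field of view and the target's
    # pixels within that region are not substantially occluded.
    return (visible_box_fraction >= min_visible
            and occluded_pixel_fraction <= max_occluded)

# Values discussed below for FIG. 5C: 79% of the box visible, 2% occluded.
assert target_sufficiently_in_view(0.79, 0.02)
# Values discussed below for FIG. 5D: box fully visible, but 80% occluded.
assert not target_sufficiently_in_view(1.00, 0.80)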
A 2D box 535-1 corresponding to a representation of a 3D bounding region in the visual image 530-1 is shown centered on the hand 583, while a 2D box 535-2 corresponding to a representation of the 3D bounding region in the visual image 530-2 is also shown centered on the hand 583. As is shown in FIG. 5C, approximately seventy-nine percent (79%) of the 2D box 535-1 is located within the field of view of the imaging device 525-1, e.g., based on the visual image 530-1. Meanwhile, approximately two percent (2%) of the pixels corresponding to the hand 583 are occluded within the 2D box 535-1. Thus, the visual image 530-1 may be processed and evaluated to track the hand 583 within the materials handling facility 520 at time t1. Similarly, approximately one hundred percent (100%) of the 2D box 535-2 is located within the field of view of the imaging device 525-2, e.g., based on the visual image 530-2. Approximately zero percent (0%) of the hand 583 is occluded within the 2D box 535-2. Thus, the visual image 530-2 may also be processed and evaluated to track the hand 583 within the materials handling facility 520 at time t1. The extent of the hand 583 that is occluded within the 2D boxes 535-1, 535-2 may be determined in any manner and according to any technique. For example, in some implementations of the present disclosure, the extent of the occlusion may be determined by comparing z-buffers corresponding to the hand 583 to z-buffers corresponding to other points within the respective fields of view. Likewise, as is shown in FIG. 5D, a visual image 530-3 captured at time t2 using the imaging device 525-1 and a visual image 530-4 captured at time t2 using the imaging device 525-2 also depict the operator 580 interacting with the item 585 using the hand 583. A 2D box 535-3 corresponding to a representation of a bounding region in the visual image 530-3 is shown centered on the hand 583, while a 2D box 535-4 corresponding to a representation of the bounding region in the visual image 530-4 is also shown centered on the hand 583. Approximately one hundred percent (100%) of the 2D box 535-3 is located within the field of view of the imaging device 525-1 at time t2, e.g., based on the visual image 530-3. Approximately eighty percent (80%) of the hand 583 is occluded within the 2D box 535-3, however. Thus, the visual image 530-3 may not be utilized in tracking the hand 583 within the materials handling facility 520 at time t2. Approximately one hundred percent (100%) of the 2D box 535-4 is also located within the field of view of the imaging device 525-2 at time t2, e.g., based on the visual image 530-4. Approximately two percent (2%) of the hand 583 is occluded within the 2D box 535-4. Thus, the visual image 530-4 may be processed and evaluated to track the hand 583 within the materials handling facility 520 at time t2. As is also discussed above, positions of a target object in 3D space that are determined by correlating visual images with point clouds formed from depth images captured by synchronized imaging devices may be returned to imaging devices to train such imaging devices to recognize the actual positions of the target object within their respective fields of view. Referring to FIGS. 6A through 6G, views of aspects of one system for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure are shown. Except where otherwise noted, reference numerals preceded by the number “6” shown in FIGS. 6A through 6G indicate components or features that are similar to components or features having reference numerals preceded by the number “5” shown in FIGS. 5A through 5D, by the number “4” shown in FIG. 4, by the number “2” shown in FIG.
2 or by the number “1” shown in FIGS. 1A through 1I. As is shown in FIG. 6A, the system 600 includes a scene 620 having a plurality of imaging devices 625-1, 625-2, 625-3, 625-4 (e.g., RGB cameras and/or RGBD cameras). The imaging devices 625-1, 625-2, 625-3, 625-4 are aligned to capture imaging data, including still or moving images, from the scene 620, and have fields of view that overlap at least in part over a portion of the scene 620. Additionally, an operator 680 is traveling through the scene 620, and is passing within the respective fields of view of each of the imaging devices 625-1, 625-2, which overlap in the scene 620. As is shown in FIG. 6A, the operator 680 is carrying an article 685 (viz., a football) in a hand 683. As is discussed above, the imaging devices of the present disclosure, including but not limited to the devices 625-1, 625-2, 625-3, 625-4 of FIG. 6A, may be configured to predict a hypothetical position of a target object within their respective fields of view at a given time based on visual images captured thereby, e.g., based on previously captured frames, as well as any colors, textures, outlines or other aspects recognized within the visual images, or any other intrinsic or extrinsic data regarding the operator 680, the hand 683, the article 685, the scene 620 and/or the imaging device 625-3, using one or more tracking algorithms operating thereon. When a position of the target object is determined in 3D space (e.g., a position of one or more points associated with the target object, such as a center or centroid, a set or collection of points, or a representative one or more of such points) as of the given time based on hypothetical positions of the target object within one or more frames that are correlated with depth information (e.g., a point cloud formed from depth images captured by the imaging devices 625-1, 625-2, 625-3, 625-4), information regarding the determined position of the target object may be returned to the imaging devices 625-1, 625-2, 625-3, 625-4 in order to further train the tracking algorithms operating thereon to recognize the target object in subsequently captured frames. As is shown in FIG. 6B, a visual image frame 630-3A is captured at time t1 by the imaging device 625-3 shown in FIG. 6A as the operator 680 travels within the scene 620. The visual image frame 630-3A depicts the operator 680, the hand 683 and the article 685 at time t1. A 2D representation 635-3A of a 3D bounding region is shown on the visual image frame 630-3A, and includes a hypothetical position of the hand 683 at time t1 determined by the imaging device 625-3 according to any type or form of tracking algorithm, system or technique, including but not limited to such algorithms, systems or techniques that operate based at least in part on hypothetical positions of the hand 683 previously observed in visual image frames captured by the imaging device 625-3, as well as any intrinsic or extrinsic data. Such algorithms, systems or techniques may operate by tracking common edges, contours, outlines, colors, textures, silhouettes, shapes or other characteristics of objects. Similarly, as is shown in FIG. 6C, a visual image frame 630-3B is captured at time t1 by the imaging device 625-4 shown in FIG. 6A as the operator 680 travels within the scene 620. 
The visual image frame 630-3B also depicts the operator 680, the hand 683 and the article 685 at time t1, and a 2D representation 635-3B of the 3D bounding region is shown on the visual image frame 630-3B as including a hypothetical position of the hand 683 at time t1 as determined by the imaging device 625-4 according to any type or form of tracking algorithm, system or technique. Although FIGS. 6B and 6C show only single visual image frames 630-3A, 630-3B captured by the imaging devices 625-3, 625-4 at the time t1, those of ordinary skill in the pertinent arts will recognize that each of the other imaging devices 625-1, 625-2 shown in FIG. 6A may also capture visual image frames, and that each of the imaging devices 625-1, 625-2, 625-3, 625-4 shown in FIG. 6A may capture depth image frames of the scene 620 at time t1, or at other times. As is shown in FIG. 6D, content of the visual image frames 630-3A, 630-3B may be correlated with a point cloud 640-1 generated based on depth images captured by one or more of the imaging devices 625-1, 625-2, 625-3, 625-4, such as by projecting (e.g., triangulating) common aspects of the respective portions of the visual image frames 630-3A, 630-3B within the 2D representations 635-3A, 635-3B of the 3D bounding region shown in each of the visual image frames 630-3A, 630-3B onto the point cloud 640-1. Although FIG. 6D shows only two visual image frames 630-3A, 630-3B being projected upon the point cloud 640-1 at a single time t1, those of ordinary skill in the pertinent arts will recognize that visual image frames captured by any of the imaging devices 625-1, 625-2, 625-3, 625-4 may be projected onto the point cloud 640-1 in accordance with the present disclosure. As is shown in FIG. 6E, each of the imaging devices 625-1, 625-2, 625-3, 625-4 may be configured to generate predictions of a hypothetical position of the hand 683 at time t1 by providing one or more visual image frames captured thereby at time t1 to a server 622 or other computer device, e.g., over one or more computer networks. Such predictions may be based at least in part on hypothetical positions of the hand 683 in any previously captured visual image frames, as well as any colors, textures, outlines or other aspects recognized in the visual image frames captured by the imaging devices 625-1, 625-2, 625-3, 625-4 at time t1, and any intrinsic or extrinsic data. Predicted positions of the hand 683 within the visual image frames may be correlated with one another and with depth information, e.g., the point cloud 640-1, generated based on depth image frames (not shown) captured using one or more of the imaging devices 625-1, 625-2, 625-3, 625-4, or any other imaging devices, to determine a position of the hand 683 at time t1. The position of the hand 683 may be defined by coordinates of a center or centroid of the hand 683, coordinates of each of a set or a collection of points corresponding to the hand 683 (e.g., in 3D space), or coordinates of a representative point of the hand 683, which may or may not be a center or centroid of the hand 683. As is shown in FIG.
6F, once the position of the hand 683 at time t1 has been determined, the position of the hand 683 may be returned to the imaging devices 625-1, 625-2, 625-3, 625-4 and used to train the one or more tracking algorithms operating thereon to recognize the hand 683 in visual image frames that are subsequently captured after time t1. Because the position of the hand 683 at time t1 is determined based on visual image frames captured by each of the imaging devices 625-1, 625-2, 625-3, 625-4, mapped to depth information (not shown) captured by one or more of the imaging devices 625-1, 625-2, 625-3, 625-4, such as the point cloud 640-1, and not just on the contents of the visual image frame 630-3B captured at time t1 by the imaging device 625-4, the position of the hand 683 at time t1 may aid one or more tracking algorithms in learning how the hand 683 behaves in 3D space from a global perspective, and not simply from the fields of view defined by the imaging devices 625-3, 625-4. In particular, the imaging devices 625-1, 625-2, 625-3, 625-4 may be trained to recognize not only that an object (e.g., the hand 683) is present within a field of view, but also to determine a shape of the object based on the projection of the visual image frames 630-3A, 630-3B, and/or other visual image frames, onto the point cloud 640-1. As is shown in FIG. 6G, knowledge of a shape or position of a target object, as determined from a point cloud 640-2 generated based on depth images captured by one or more of the imaging devices 625-1, 625-2, 625-3, 625-4 at time t2, may be used to train a tracking algorithm, system or technique to recognize the target object within image frames captured by one or more of the imaging devices 625-1, 625-2, 625-3, 625-4 at time t2. For example, as is shown in FIG. 6G, a tracking algorithm may be trained to recognize a target object (e.g., the hand 683) within portions of 2D representations 635-4A, 635-4B of the 3D bounding region within image frames captured at a time t2 based on shapes or positions of the target object determined based on the point cloud 640-1 and the visual image frames 630-3A, 630-3B. Any type of voting algorithm may be used to determine a position of a target object based on visual image frames captured by imaging devices including the target object within their respective fields of view, or a point cloud generated from depth information captured by such devices. For example, probabilities associated with positions of the target object within visual imaging data or depth imaging data captured at various times may be determined and mapped, and a track of the target object during the various times may be determined based on an analysis of the probabilities as a whole, rather than based on positions of the target object that are individually determined at discrete times based on visual imaging data and depth imaging data captured at such times. Referring to FIG. 7, a flow chart 700 of one process for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure is shown. At box 710, a seed box is defined based on an initial position of a target object. The seed box, e.g., an initial bounding region, may be defined on any basis, such as in response to a triggering event, or based on any information or data, and used to determine a baseline position of a hypothetical 3D box from which tracking of a target object in a scene may be initiated.
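In practice, the seed box of box 710 is also what initializes the per-camera 2D trackers referenced elsewhere in this disclosure (e.g., an OpenCV tracker or a KCF tracker). A minimal OpenCV sketch follows; the video source and the seed box coordinates are illustrative assumptions, and depending on the OpenCV build the factory may be cv2.TrackerKCF_create() or cv2.legacy.TrackerKCF_create():

import cv2

capture = cv2.VideoCapture("camera_feed.mp4")   # hypothetical video source
ok, first_frame = capture.read()
seed_box = (100, 120, 80, 80)                   # (x, y, w, h); illustrative

tracker = cv2.TrackerKCF_create()               # KCF tracker
tracker.init(first_frame, seed_box)             # seeded with the initial box

while True:
    ok, frame = capture.read()
    if not ok:
        break
    found, box = tracker.update(frame)          # per-frame 2D hypothesis
    # 'found' and 'box' would feed the per-camera probability map of
    # box 750, described next.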
At box 715, a tracklet for the target object (or a set of positions of the target object) is set to “empty,” or reset, at a seed time associated with the initial position and the bounding region. At box 720, two or more RGB cameras and two or more RGBD cameras that are aligned to capture imaging data from the scene from multiple perspectives are synchronized at the seed time, and at box 725, an initial point cloud is defined from depth image frames captured from the scene using one or more of the RGBD cameras. At box 730, RGB and/or RGBD cameras having the target object in view within visual image frames captured from the scene are determined, and at box 735, an extent of occlusion of the target object within the views of the RGB cameras is determined. For example, as is discussed above, the extent of the occlusion of a target object within each of the fields of view may be determined in any manner, such as by comparing z-buffers that correspond to the target object to z-buffers that correspond to other points within a field of view, or in any other manner. At box 740, an active cameras set including the RGB cameras and/or the RGBD cameras that include the target object sufficiently in view at an initial synchronization point is defined. At box 745, a value of a step variable i is set at one. At box 750, a map of probabilities for the position of the target object within the fields of view of each of the RGB and/or the RGBD cameras is determined for each of such cameras based on visual image frames captured by such cameras at a synchronization point i. The probability map may correspond to all or portions of such visual image frames, and may represent probabilities that each of the pixels of the visual image frames depicts a portion of the target object. In some implementations, a probability map may include values for each of such pixels. In other implementations, a probability map may be demarcated into one or more regions having common, average or nominal values of probabilities. Any arrangement or configuration of a probability map that depicts probabilities that pixels or regions of pixels within the visual image frames depict the target object may be utilized in accordance with the present disclosure. At box 752, the probability map for the position of the target object is provided to a Kalman filter or another set of mathematical equations for estimating the position of the target object in a manner that minimizes a mean of the squared errors associated with the position. At box 754, the Kalman filter models motion of the target object based on probability maps determined for all known synchronization points, e.g., synchronization points ranging from 1 to i. At box 756, the appearance of the target object is learned based on the visual image frames captured by each of the RGB and/or RGBD cameras of the active cameras set at the synchronization point i. For example, based on the visual image frames, a tracking algorithm within or associated with each camera may independently compute a trained filter response (e.g., a set of pixels) to the observed visual image frame, which may then be projected onto the 3D point cloud.
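That projection step, and the per-point scoring it supports (elaborated in the next paragraph), might look like the following sketch: each point of the cloud is projected into every active view, the trained filter response is sampled where the point is visible, occluded points (detected here with a z-buffer comparison, as at box 735) score zero, the per-view scores are summed, and the sums are weighted by the motion probability from the Kalman filter of boxes 752 and 754. All structure and variable names are illustrative assumptions:

import numpy as np

def project(points, K, R, t):
    # Project Nx3 world-space points into pixel coordinates (pinhole model).
    cam = points @ R.T + t                       # world -> camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]     # pixel coords, depths

def score_point_cloud(points, views, motion_prob):
    # views: list of dicts with keys K, R, t (calibration), response
    # (2D trained filter response) and zbuffer (2D nearest-depth map).
    total = np.zeros(len(points))
    for v in views:
        uv, depth = project(points, v["K"], v["R"], v["t"])
        u = np.round(uv[:, 0]).astype(int)
        w = np.round(uv[:, 1]).astype(int)
        rows, cols = v["response"].shape
        inside = (u >= 0) & (u < cols) & (w >= 0) & (w < rows) & (depth > 0)
        idx = np.where(inside)[0]
        # z-buffer test: points behind the nearest surface are occluded
        # in this view and receive a score of zero.
        visible = idx[depth[idx] <= v["zbuffer"][w[idx], u[idx]] + 1e-2]
        score = np.zeros(len(points))
        score[visible] = v["response"][w[visible], u[visible]]
        total += score                           # sum (a median also works)
    return total * motion_prob                   # weight by motion probability

The index of the maximum of the returned array is the best candidate position; local maxima of the array give the candidate positions described next.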
Where a position of the target object in 3D space (e.g., a position of a center or centroid of the target object, positions of each of a set or a collection of points corresponding to the object, or positions of one or more representative points of the target object) is known based on previously captured visual image frames and/or depth information or data, such as a point cloud formed from depth image frames, a bounding region or other hypothesis defined for the target object within a successive frame may be searched for the target object, with each point of the point cloud found within the bounding region of each imaging device being assigned a score. Points of the point cloud that are occluded in a given field of view, however, will receive a score of zero. An aggregated score for each of the points of the point cloud, e.g., defined based on sums and/or medians of the scores for each of the fields of view, may be calculated. Thereafter, points of the point cloud having sufficiently high values from a sorted set, or local maxima points, may be identified as candidate positions for pixels corresponding to the target object, and a best position of the target object may be selected from one or more of the candidate positions accordingly. An output of the Kalman filter may be used to determine a probability of the motion of points of the target object, and a total score for each of the points of the point cloud may be determined based on a product of the Kalman filter output probability for the motion of a given point and a probability based on the aggregate score determined from each of the visual image frames. A candidate having a highest probability may be chosen. At box 758, the point cloud is updated based on depth image frames captured by each RGBD camera of the active cameras set at the synchronization point i. For example, differences between the depth image frames from which the initial point cloud was defined and the depth image frames captured at the synchronization point i may be identified and used to update the initial point cloud or a point cloud generated prior to the synchronization point i. Alternatively, the initial point cloud or the prior point cloud may be replaced with a point cloud generated based on the depth image frames captured at the synchronization point i. At box 760, a combined target position in 3D space is determined based on the modeled motion and the updated point cloud, and at box 765, the combined target position in 3D space at the synchronization point i is added to the tracklet. The combined target position in 3D space may be represented as coordinates of a center or a centroid of the target object, or as coordinates of each of a set or collection of points (e.g., one or more representative points) corresponding to the target object, or in any other manner. At box 770, the tracklet for the target object over the tracking period is defined based on the probability maps and the point clouds defined from the visual image frames and the depth image frames captured at the prior synchronization points i. For example, a voting algorithm may be used to estimate a joint object position probability distribution in 3D space based on representations of the target object in 2D images captured by the plurality of imaging devices, and recognized therein using a tracking algorithm, such as an OpenCV tracker or a KCF tracker. 
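The motion probability used in the scoring above is produced by the Kalman filter introduced at boxes 752 and 754. A minimal constant-velocity sketch over the target's 3D position follows; the constant-velocity motion model and the noise magnitudes are illustrative assumptions, not prescribed by the disclosure:

import numpy as np

class ConstantVelocityKalman:
    # State is [px, py, pz, vx, vy, vz]; only position is observed.
    def __init__(self, dt=1.0 / 30.0, q=1e-2, r=1e-1):
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)          # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)                   # process noise
        self.R = r * np.eye(3)                   # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                        # predicted 3D position

    def update(self, z):
        # z: observed 3D position, e.g., the best candidate selected from
        # the aggregated point-cloud scores at a synchronization point.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]

A per-point motion probability can then be obtained, for example, by evaluating a Gaussian centered on the predicted position with covariance drawn from P.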
Such 2D representations of the target object may be projected onto the point clouds, and a tracklet of the positions of the target object may be determined accordingly, such as by assigning scores to each of the points in 3D space at various times, aggregating scores for such points, and selecting a best candidate based on the aggregated scores. At box 780, whether a stopping condition, e.g., an event that requires an end to the tracking of the target object, has been realized is determined. If a stopping condition has been realized, then the process ends. If the stopping condition has not been realized, however, then the process advances to box 785, where the learned appearance of the target object is updated based on the modeled motion and the updated point cloud. The appearance and/or position of the target object may be returned to each of the cameras of the active cameras set, which may then update their respective tracking algorithms based on the appearance of the target object. At box 790, the value of the step variable i is incremented by one, e.g., to a value of i+1. At box 795, the active cameras set is updated to include RGB and/or RGBD cameras having the target object sufficiently in view at the synchronization point i based on visual image frames and depth image frames captured at synchronization point i, before returning to box 750, where a map of probabilities for the position of the target object within the fields of view of each of the RGB and/or the RGBD cameras is determined. Referring to FIGS. 8A through 8M, views of aspects of one system 800 for tracking objects in 3D space using calibrated cameras in accordance with implementations of the present disclosure are shown. Except where otherwise noted, reference numerals preceded by the number “8” shown in FIGS. 8A through 8M indicate components or features that are similar to components or features having reference numerals preceded by the number “6” shown in FIGS. 6A through 6G, by the number “5” shown in FIGS. 5A through 5D, by the number “4” shown in FIG. 4, by the number “2” shown in FIG. 2 or by the number “1” shown in FIGS. 1A through 1I. As is shown in FIGS. 8A and 8B, the system 800 includes a materials handling facility 820 having a pair of imaging devices 825-1, 825-2 and a shelving unit 870 with a plurality of shelves 872-1, 872-2, 872-3. The imaging devices 825-1, 825-2 are aligned to include all or portions of the shelving unit 870 and/or the shelves 872-1, 872-2, 872-3 within their respective fields of view. As is also shown in FIGS. 8A and 8B, a plurality of items 885-1, 885-2, 885-3 are deposited on the shelf 872-1. As is shown in FIGS. 8C through 8F, the imaging devices 825-1, 825-2 may capture imaging data as a plurality of operators 880-1, 880-2 execute a number of movements within the materials handling facility 820, including movements involving interactions with the one or more items 885-1, 885-2, 885-3 or other items on the shelves 872-1, 872-2, 872-3 there. For example, as is shown in FIG. 8C, the imaging devices 825-1, 825-2 captured the synchronized visual images 830-1L, 830-1R at a time t1, with each of the visual images 830-1L, 830-1R depicting the operator 880-1 evaluating the item 885-1 and the operator 880-2 interacting with the item 885-3 on the shelf 872-1.
A 2D representation 835-1A of a 3D bounding region is shown as centered on a right hand 883-1 of the operator 880-1 in the visual image 830-1L, and a 2D representation 835-2A of a 3D bounding region is shown as centered on a right hand 883-2 of the operator 880-2 in the visual image 830-1L. Similarly, a 2D representation 835-1B of a 3D bounding region is shown as centered on the right hand 883-1 of the operator 880-1 in the visual image 830-1R, and a 2D representation 835-2B of a 3D bounding region is shown as centered on a right hand 883-2 of the operator 880-2 in the visual image 830-1R. As is shown in FIG. 8D, the imaging devices 825-1, 825-2 captured the synchronized visual images 830-2L, 830-2R at a time t2, with each of the visual images 830-2L, 830-2R depicting the operator 880-1 as he continues to evaluate the item 885-1, while the operator 880-2 retrieves the item 885-3 from the shelf 872-1 and places the item 885-3 into a cart. 2D representations 835-1C, 835-2C of 3D bounding regions are shown as centered on the right hand 883-1 of the operator 880-1 and on the right hand 883-2 of the operator 880-2 in the visual image 830-2L. Similarly, 2D representations 835-1D, 835-2D of 3D bounding regions are shown as centered on the right hand 883-1 of the operator 880-1 and on the right hand 883-2 of the operator 880-2 in the visual image 830-2R. As is shown in FIG. 8E, the imaging devices 825-1, 825-2 captured the synchronized visual images 830-3L, 830-3R at a time t3, with each of the visual images 830-3L, 830-3R depicting the operator 880-1 placing the item 885-1 into a cart, while the operator 880-2 attempts to retrieve the item 885-2 from the shelf 872-1. 2D representations 835-1E, 835-2E of 3D bounding regions are shown as centered on the right hand 883-1 of the operator 880-1 and on the right hand 883-2 of the operator 880-2 in the visual image 830-3L. Similarly, 2D representations 835-1F, 835-2F of bounding regions are shown as centered on the right hand 883-1 of the operator 880-1 and on the right hand 883-2 of the operator 880-2 in the visual image 830-3R. As is shown in FIG. 8F, the imaging devices 825-1, 825-2 captured the synchronized visual images 830-4L, 830-4R at a time t4, with each of the visual images 830-4L, 830-4R depicting the operator 880-1 looking on as the operator 880-2 places the item 885-3 into the cart. 2D representations 835-1G, 835-2G of bounding regions are shown as centered on the right hand 883-1 of the operator 880-1 and on the right hand 883-2 of the operator 880-2 in the visual image 830-4L. Similarly, 2D representations 835-1H, 835-2H of bounding regions are shown as centered on the right hand 883-1 of the operator 880-1 and on the right hand 883-2 of the operator 880-2 in the visual image 830-4R. As is discussed above, the motion of target objects, e.g., one or more of the hands of the operators 880-1, 880-2 or the items 885-1, 885-2, 885-3, or any other objects within the materials handling facility 820, may be tracked by mapping probabilities that such target objects are depicted within visual image frames and defining a joint probability distribution in 3D space from which positions of the target objects may be selected. As is shown in FIGS.
8G and 8H, a plurality of probability maps 850-1A, 850-1B, 850-1C, 850-1D, 850-1E, 850-1F, 850-1G, 850-1H may be generated based on the presence of the right hand 883-1 of the operator 880-1 identified in each of the 2D representations 835-1A, 835-1B, 835-1C, 835-1D, 835-1E, 835-1F, 835-1G, 835-1H of the bounding region identified in the image frames 830-1L, 830-1R, 830-2L, 830-2R, 830-3L, 830-3R, 830-4L, 830-4R captured by the imaging devices 825-1, 825-2 from left and right views, respectively. For example, as is shown in FIG. 8G, the probability maps 850-1A, 850-1B include regions of substantially high probabilities that pixels of the 2D representations 835-1A, 835-1B, which depict unoccluded views of the right hand 883-1 of the operator 880-1, correspond to the right hand 883-1. Conversely, the probability maps 850-1C, 850-1D do not include any regions in which the probabilities that pixels of the 2D representations 835-1C, 835-1D correspond to the right hand 883-1 are significant, since the 2D representations 835-1C, 835-1D depict occluded views of the right hand 883-1. Likewise, as is shown in FIG. 8H, the probability maps 850-1E, 850-1F, 850-1H include regions where pixels of the 2D representations 835-1E, 835-1F, 835-1H, which depict partially occluded views of the right hand 883-1, are strongly likely to correspond to the right hand 883-1, while the probability map 850-1G does not include any such regions. Similarly, FIGS. 8I and 8J show a plurality of probability maps 850-2A, 850-2B, 850-2C, 850-2D, 850-2E, 850-2F, 850-2G, 850-2H that are generated based on the presence of the right hand 883-2 of the operator 880-2 in each of the 2D representations 835-2A, 835-2B, 835-2C, 835-2D, 835-2E, 835-2F, 835-2G, 835-2H of the bounding region identified in the image frames 830-1L, 830-1R, 830-2L, 830-2R, 830-3L, 830-3R, 830-4L, 830-4R. As is shown in FIG. 8K, probability maps 850-1A, 850-1B, 850-1C, 850-1D representing probabilities that the image frames 830-1L, 830-2L, 830-3L, 830-4L captured by the imaging device 825-1 depict the right hand 883-1 of the operator 880-1, and probability maps 850-1E, 850-1F, 850-1G, 850-1H representing probabilities that the image frames 830-1R, 830-2R, 830-3R, 830-4R captured by the imaging device 825-2 depict the right hand 883-1 of the operator 880-1, may be projected upon or otherwise associated with point clouds 840-1A, 840-2A, 840-3A, 840-4A generated from depth images captured at the times t1, t2, t3 and t4 in order to determine a three-dimensional probability distribution of the right hand 883-1 within the materials handling facility 820. Similarly, as is shown in FIG. 8L, probability maps 850-2A, 850-2B, 850-2C, 850-2D representing probabilities that the image frames 830-1L, 830-2L, 830-3L, 830-4L captured by the imaging device 825-1 depict the right hand 883-2 of the operator 880-2, and probability maps 850-2E, 850-2F, 850-2G, 850-2H representing probabilities that the image frames 830-1R, 830-2R, 830-3R, 830-4R captured by the imaging device 825-2 depict the right hand 883-2 of the operator 880-2, may be projected upon or otherwise associated with point clouds 840-1B, 840-2B, 840-3B, 840-4B generated from depth images captured at the times t1, t2, t3 and t4 in order to determine a three-dimensional probability distribution of the right hand 883-2 within the materials handling facility 820. As is shown in FIG.
8M, a model of the motion of the right hand 883-1 of the operator 880-1 and the right hand 883-2 of the operator 880-2 may be generated based on the probability distributions determined based on the probability maps 850-1A, 850-1B, 850-1C, 850-1D, 850-1E, 850-1F, 850-1G, 850-1H and the point clouds 840-1A, 840-2A, 840-3A, 840-4A shown in FIG. 8K, and the probability maps 850-2A, 850-2B, 850-2C, 850-2D, 850-2E, 850-2F, 850-2G, 850-2H and the point clouds 840-1B, 840-2B, 840-3B, 840-4B shown in FIG. 8L. Although some of the implementations disclosed herein reference the tracking of body parts of human actors, the systems and methods of the present disclosure are not so limited. For example, the systems and methods disclosed herein may be used to track any objects, including one or more body parts of non-human animals, as well as portions of any number of machines or robots, and are not limited to detecting and tracking humans. Furthermore, although some of the implementations disclosed herein reference detecting and tracking specific objects in a commercial setting, e.g., within a materials handling facility such as a fulfillment center, the systems and methods of the present disclosure are likewise not so limited. Rather, the systems and methods disclosed herein may be utilized to track objects in 3D space within any type of commercial or non-commercial settings, such as the live filming of performances in a stadium or theater. Additionally, although some of the implementations described herein or shown in the accompanying figures refer to the processing of imaging data that is in color, e.g., according to an RGB color model, the systems and methods disclosed herein are not so limited, and may be used to process any type of information or data that is provided in color according to any color model, or in black-and-white or grayscale. It should be understood that, unless otherwise explicitly or implicitly indicated herein, any of the features, characteristics, alternatives or modifications described regarding a particular implementation herein may also be applied, used, or incorporated with any other implementation described herein, and that the drawings and detailed description of the present disclosure are intended to cover all modifications, equivalents and alternatives to the various implementations as defined by the appended claims. Additionally, it should also be appreciated that the detailed description is set forth with reference to the accompanying figures. In the figures, the use of the same reference numbers in different figures indicates similar or identical items or features. Except where otherwise noted, left-most digit(s) of a reference number identify a figure in which the reference number first appears, while two right-most digits of a reference number in a figure indicate a component or a feature that is similar to components or features having reference numbers with the same two right-most digits in other figures. Moreover, with respect to the one or more methods or processes of the present disclosure shown or described herein, including but not limited to the flow charts shown in FIGS. 3 and 7, orders in which such methods or processes are presented are not intended to be construed as any limitation on the claimed inventions, and any number of the method or process steps or boxes described herein can be combined in any order and/or in parallel to implement the methods or processes described herein. Also, the drawings herein are not drawn to scale.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey in a permissive manner that certain implementations could include, or have the potential to include, but do not mandate or require, certain features, elements and/or steps. In a similar manner, terms such as “include,” “including” and “includes” are generally intended to mean “including, but not limited to.” Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular implementation. The elements of a method, process, or algorithm described in connection with the implementations disclosed herein can be embodied directly in hardware, in a software module stored in one or more memory devices and executed by one or more processors, or in a combination of the two. A software module can reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, a DVD-ROM or any other form of non-transitory computer-readable storage medium, media, or physical computer storage known in the art. An example storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The storage medium can be volatile or nonvolatile. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor and the storage medium can reside as discrete components in a user terminal. Disjunctive language such as the phrase “at least one of X, Y, or Z,” or “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain implementations require at least one of X, at least one of Y, or at least one of Z to each be present. Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. Language of degree used herein, such as the terms “about,” “approximately,” “generally,” “nearly” or “substantially” as used herein, represent a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. 
For example, the terms “about,” “approximately,” “generally,” “nearly” or “substantially” may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and within less than 0.01% of the stated amount. Although the invention has been described and illustrated with respect to illustrative implementations thereof, the foregoing and various other additions and omissions may be made therein and thereto without departing from the spirit and scope of the present disclosure. 16909824 amazon technologies, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Jul 24th, 2019 12:00AM https://www.uspto.gov?id=US11316144-20220426 Lithium-ion batteries with solid electrolyte membranes Systems, methods, and computer-readable media are disclosed for lithium-ion batteries with solid electrolyte membranes. In one embodiment, a battery cell may include a copper current collector, a first layer in contact with the copper current collector, the first layer comprising polyvinylidene fluoride, an anode comprising a first lithiated polymer binder configured to conduct lithium ions, where the first layer is disposed between the copper current collector and the anode, and a lithiated polymer electrolyte membrane in contact with the anode. The battery cell may include a cathode in contact with the lithiated polymer electrolyte membrane and comprising a second lithiated polymer binder configured to conduct lithium ions, a second layer in contact with the cathode, the second layer comprising polyvinylidene fluoride, and an aluminum current collector disposed adjacent to the second layer, wherein the aluminum current collector is a positive current collector. 11316144 1. A battery cell comprising: a copper current collector, wherein the copper current collector is a negative current collector; a first layer in contact with the copper current collector; an anode comprising a plated lithium metal surface and a first lithiated polymer binder configured to conduct lithium ions, wherein the first layer is disposed between the copper current collector and the anode; a cathode comprising a second lithiated polymer binder configured to conduct lithium ions; a lithiated polymer electrolyte membrane disposed between the anode and the cathode; a second layer in contact with the cathode; an aluminum current collector disposed adjacent to the second layer, wherein the aluminum current collector is a positive current collector, and wherein the second layer is disposed between the aluminum current collector and the cathode; and a lithiated layer disposed between the plated lithium metal surface of the anode and the lithiated polymer electrolyte membrane, the lithiated layer comprising a lithiated sulfonated tetrafluoroethylene based fluoropolymer-copolymer; wherein the lithiated layer is in contact with the plated lithium metal surface of the anode. 2. The battery cell of claim 1, wherein the first layer comprises polyvinylidene fluoride, and wherein the second layer comprises polyvinylidene fluoride. 3. The battery cell of claim 2, wherein the first layer further comprises graphite, and wherein the second layer further comprises lithium cobalt oxide. 4. The battery cell of claim 1, wherein the lithiated polymer electrolyte membrane comprises a base membrane having a porosity of between about 50% and about 70%. 5. The battery cell of claim 4, wherein the lithiated polymer electrolyte membrane comprises between about 30% and about 50% of the lithiated sulfonated tetrafluoroethylene based fluoropolymer-copolymer, and between about 50% to about 70% plastic. 6. The battery cell of claim 4, wherein the lithiated polymer electrolyte membrane further comprises lithium-based or Zeolite-based inclusions. 7. 
The battery cell of claim 1, wherein the lithiated layer is a first lithiated layer, wherein the battery cell further comprises a second lithiated layer disposed between the cathode and lithiated polymer electrolyte membrane, and wherein the second lithiated layer comprises the lithiated sulfonated tetrafluoroethylene based fluoropolymer-copolymer. 8. The battery cell of claim 1, wherein the copper current collector is a mesh copper current collector having a mesh count open area between about 3% and about 30%, and wherein the aluminum current collector is a mesh aluminum current collector having a mesh count open area between about 3% and about 30%. 9. The battery cell of claim 1, further comprising: a first plastic edge protection frame disposed between the lithiated polymer electrolyte membrane and the anode; and a second plastic edge protection frame disposed between the lithiated polymer electrolyte membrane and the cathode. 10. The battery cell of claim 1, wherein the anode further comprises graphite that is dispersed in the first lithiated polymer binder, and wherein the first lithiated polymer binder provides first pathways for lithium ions to flow; and wherein the cathode further comprises lithium cobalt oxide that is dispersed in the second lithiated polymer binder, and wherein the second lithiated polymer binder provides second pathways for lithium ions to flow. 11. A device comprising: a flexible battery comprising: a copper current collector, wherein the copper current collector is a negative current collector; a first layer in contact with the copper current collector; an anode comprising a plated lithium metal surface and a first lithiated polymer binder configured to conduct lithium ions, wherein the first layer is disposed between the copper current collector and the anode; a cathode comprising a second lithiated polymer binder configured to conduct lithium ions; a lithiated polymer electrolyte membrane disposed between the anode and the cathode; a second layer in contact with the cathode; an aluminum current collector disposed adjacent to the second layer, wherein the aluminum current collector is a positive current collector, and wherein the second layer is disposed between the aluminum current collector and the cathode; and a lithiated layer disposed between the plated lithium metal surface of the anode and the lithiated polymer electrolyte membrane, the lithiated layer comprising a lithiated sulfonated tetrafluoroethylene based fluoropolymer-copolymer; wherein the lithiated layer is in contact with the plated lithium metal surface of the anode. 12. The device of claim 11, wherein the first layer comprises polyvinylidene fluoride, and wherein the second layer comprises polyvinylidene fluoride. 13. The device of claim 12, wherein the first layer further comprises graphite, and wherein the second layer further comprises lithium cobalt oxide. 14. The device of claim 11, wherein the lithiated polymer electrolyte membrane comprises a base membrane having a porosity of between about 50% and about 70%. 15. The battery cell of claim 14, wherein the lithiated polymer electrolyte membrane comprises between about 30% and about 50% of the lithiated sulfonated tetrafluoroethylene based fluoropolymer-copolymer, and between about 50% to about 70% plastic. 16. The device of claim 14, wherein the lithiated polymer electrolyte membrane further comprises lithium-based or Zeolite-based inclusions. 17. 
The device of claim 11, wherein the lithiated layer is a first lithiated layer, wherein the battery cell further comprises a second lithiated layer disposed between the cathode and lithiated polymer electrolyte membrane, and wherein the second lithiated layer comprises the lithiated sulfonated tetrafluoroethylene based fluoropolymer-copolymer. 18. The device of claim 11, wherein the copper current collector is a mesh copper current collector having a mesh count open area between about 3% and about 30%, and wherein the aluminum current collector is a mesh aluminum current collector having a mesh count open area between about 3% and about 30%. 19. The device of claim 11, further comprising: a first plastic edge protection frame disposed between the lithiated polymer electrolyte membrane and the anode; and a second plastic edge protection frame disposed between the lithiated polymer electrolyte membrane and the cathode. 19 CROSS-REFERENCE TO RELATED APPLICATION This application is a continuation-in-part of U.S. application Ser. No. 16/296,845, filed Mar. 8, 2019, which claims the benefit of U.S. Provisional Application No. 62/779,053, filed Dec. 13, 2018, both of which are incorporated by reference in their entirety. BACKGROUND Electronic devices may include batteries or other portable power sources. Certain batteries may be subject to swelling, expanding, or otherwise changing form over time, and may include flammable elements. For example, a pouch battery may swell in the event that the pouch battery is damaged. Swollen or expanded batteries may cause damage to a device, such as causing a device housing to expand, a device screen to crack, or other damage. In some instances, swollen or otherwise damaged batteries may create a safety hazard for users. Accordingly, preventing swelling or other changes in form of batteries, and reducing safety risks associated with batteries, may be desired. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic illustration of an example use case of a lithium-ion battery with solid electrolyte membrane in accordance with one or more embodiments of the disclosure. FIG. 2 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a solid electrolyte membrane in accordance with one or more embodiments of the disclosure. FIG. 3 is a schematic illustration of a cross-sectional view of a portion of a lithium-ion cell with detail views of lithium ion pathways in accordance with one or more embodiments of the disclosure. FIG. 4 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a solid electrolyte membrane and layers in accordance with one or more embodiments of the disclosure. FIG. 5 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a solid electrolyte membrane and electrolyte in accordance with one or more embodiments of the disclosure. FIG. 6 depicts various methods of manufacturing lithium-ion batteries with solid electrolyte membranes in accordance with one or more embodiments of the disclosure. FIG. 7 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a solid electrolyte membrane and layers in accordance with one or more embodiments of the disclosure. FIG. 8 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a solid electrolyte membrane and current collectors having roughened surfaces in accordance with one or more embodiments of the disclosure. FIG. 
9 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a composite solid electrolyte membrane in accordance with one or more embodiments of the disclosure. FIG. 10 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a solid electrolyte membrane and a plated lithium anode current collector in accordance with one or more embodiments of the disclosure. FIG. 11 is a schematic illustration of a cross-sectional view of a portion of a lithium-ion cell with detail views of lithium ion pathways in accordance with one or more embodiments of the disclosure. FIG. 12 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a solid electrolyte membrane and mesh current collectors in accordance with one or more embodiments of the disclosure. FIG. 13 is a schematic illustration of a cross-sectional view of a lithium-ion cell with a solid electrolyte membrane and an edge protection frame in accordance with one or more embodiments of the disclosure. FIG. 14 schematically illustrates an example architecture of an electronic device in accordance with one or more embodiments of the disclosure. The detailed description is set forth with reference to the accompanying drawings. The drawings are provided for purposes of illustration only and merely depict example embodiments of the disclosure. The drawings are provided to facilitate understanding of the disclosure and shall not be deemed to limit the breadth, scope, or applicability of the disclosure. The use of the same reference numerals indicates similar, but not necessarily the same or identical components. Different reference numerals may be used to identify similar components. Various embodiments may utilize elements or components other than those illustrated in the drawings, and some elements and/or components may not be present in various embodiments. The use of singular terminology to describe a component or element may, depending on the context, encompass a plural number of such components or elements and vice versa. DETAILED DESCRIPTION Overview Electronic devices may be used to consume content. Some electronic devices may be portable and may include portable power sources, such as batteries and the like. The batteries or power sources may be rechargeable. Batteries used in electronic devices may have various form factors. For example, some batteries may be pouch batteries, some batteries may be cylindrical batteries, some batteries may be cells or packs, and so forth. Batteries may be removably positioned in some devices, while in other devices batteries may be positioned in fixed locations or may otherwise be embedded, non-removable and/or non-replaceable in a device. The location of a battery or power source in a device may be dimensioned so as to accommodate a particular battery form factor and/or size. Accordingly, any changes to a form factor and/or size of a battery may result in strains or stresses imparted on other components of the device. For example, if a pouch battery expands or swells from its initial form factor, the expanded or swollen battery may impart a strain on a display of the device. If the pouch battery expands or swells enough, the display and/or other components of the device may be damaged. For example, the display may crack or shatter, the device housing may expand, components of the device may become loose, and the like.
While certain batteries may be designed to accommodate a small amount of swelling, such as 10% swelling in the Z-direction, or thickness, of a pouch battery, such batteries may be unprotected against abnormal swelling or bulge situations, which could be caused, in one example, by gas generation. In addition to deformation, certain batteries may include flammable components, which may create safety risks in the event of leakage or other damage to a battery. One example of a cause of low battery performance and/or battery safety issues includes electrolyte degradation. For example, batteries that include liquid electrolyte may be more flammable than batteries that do not include liquid electrolyte, as the electrolyte itself may be flammable. For example, electrolyte materials may degrade at high battery voltages, such as voltages greater than 4 volts at the cathode, leading to battery performance loss and/or swelling due to gassing. The degradation, combined with the electrolyte flammability, exacerbates the safety risks for aging batteries. A battery may be a single cell, multiple cells, a pack, and the like, and may be referred to herein as a “cell.” Certain batteries or power sources, such as pouch batteries, may expand or swell as a result of damage to one or more components of the battery. For example, electrolyte degradation or breakdown may cause swelling as a result of gas generated by the breakdown. In another example, if one or more layers of a battery pouch are damaged and/or come into contact with electrochemical components of the cell, the layers (e.g., the pouch aluminum layer, etc.) may corrode, which may weaken the pouch structure and result in expansion or swelling of the pouch battery. The expansion or swelling may cause damage to the device and/or create a safety hazard or a perceived safety hazard to a user of the device. Embodiments of the disclosure include batteries with solid electrolyte membranes that may include little or no liquid electrolyte, and may therefore be resistant to damage, such as swelling. Certain embodiments may be resistant to flammability or other catastrophic failure. Some embodiments may be corrosion-resistant. Embodiments may prevent swelling or damage to components of a battery, such as a pouch material, even in the event of damage to one or more components of the battery. For example, damage to a pouch of the battery may have reduced impact, as little or no liquid electrolyte may be released from the battery. Batteries of the disclosure may be any suitable battery that is susceptible to packaging corrosion and/or batteries with failure modes that include swelling, such as a lithium-ion battery, or any other suitable battery type, and may have exterior pouches, casings, or housings comprised of any suitable material. This disclosure relates to, among other things, systems, methods, computer-readable media, techniques, and methodologies for lithium-ion batteries with solid electrolyte membranes. Embodiments may include batteries with solid electrolyte membranes that may include certain materials, such as lithiated Nafion. Such embodiments may maintain battery performance, while preventing premature drainage or exhaustion of the battery. Referring to FIG. 1, an example environment 100 with a device 110 is illustrated in accordance with one or more embodiments of the disclosure.
The device 110 may be any suitable portable device, such as a smartphone, an e-reader, a tablet, an audio or video streaming device, an Internet of Things (IoT) device, a product ordering button or device, a home sensor, an aging-in-place device, an earphone, a speaker device, or another device. In the example of FIG. 1, the device 110 may be a smartphone with a display 120. Two instances are illustrated in FIG. 1. In a first instance, the device 110 may include a first pouch battery 130 positioned within the device 110, such as within a housing of the device 110. The first pouch battery 130 may power the device 110. In a second instance, the device 110 may include a second pouch battery 150 positioned within the device 110, such as within a housing of the device 110. The second pouch battery 150 may power the device 110. As illustrated in FIG. 1, the first pouch battery 130 may be a typical lithium-ion battery that is susceptible to damage and flammability. As a result, in some instances, the first pouch battery 130 may expand and damage the device 110. In contrast, the second pouch battery 150 may be a lithium-ion battery with a solid electrolyte membrane, as described herein, and may therefore be resistant to electrolyte degradation, and as a result, resistant to damage and flammability. Both the first pouch battery 130 and the second pouch battery 150 may include a pouch that forms a housing or outer casing of the respective pouch battery. The pouch may be formed of a polymer, aluminum, or other material. The pouch may be flexible, so as to allow for flexibility in positioning of the respective pouch battery in the device 110. The first pouch battery 130 may include one or more cells, such as lithium-ion cells. The lithium-ion cells may include a number of components. For example, a first lithium-ion cell 160 in the first pouch battery 130 may include a copper anode current collector 162, an anode 164 formed of graphite and polyvinylidene fluoride (PVDF) binder positioned adjacent to the copper anode current collector 162, a porous polyethylene separator 168, a cathode 172 formed of lithium cobalt oxide and PVDF binder, and an aluminum cathode current collector 174 positioned adjacent to the cathode 172. The first lithium-ion cell 160 may include liquid electrolyte 166, 170 that may fill the space between the copper anode current collector 162 and the aluminum cathode current collector 174 (where liquid electrolyte 166 represents liquid electrolyte at a first side of the porous polyethylene separator 168, and liquid electrolyte 170 represents liquid electrolyte at a second side of the porous polyethylene separator 168). The liquid electrolyte 166, 170 may flow through the porous polyethylene separator 168. The electrolyte 166, 170 may include lithium salts, organic solvents, additives, and other components. During charging and/or discharging, lithium ions may move between the anode 164 and the cathode 172 via the liquid electrolyte 166, 170. The cell may be at least partially positioned within the pouch of the pouch battery, and may be a lithium-ion cell, a lithium-polymer cell, or another cell type. In the first pouch battery 130, the electrolyte 166, 170 may include non-aqueous solvents and a lithium-containing salt, and serves as the vehicle for ion movement. The wettability of the separator 168 and the electrodes 164, 172 defines how much of the electrodes is accessible to the electrolyte, which further defines the capacity of the cell.
If there are regions in the separator and/or electrodes that are cut off from the electrolyte, those regions do not participate in the cell reactions and contribute to a lowered capacity. Furthermore, the electrolyte degrades at high cathode voltages (e.g., greater than 4.0 volts, etc.) and temperatures higher than 50 degrees Celsius, leading to battery swelling and severe battery performance degradation. In contrast, the second pouch battery 150 may include one or more cells, such as lithium-ion cells. The lithium-ion cells may include a number of components. For example, a second lithium-ion cell 180 in the second pouch battery 150 may include a copper anode current collector 162 (which may be the same as the first pouch battery 130), an anode 182 formed of graphite (or any other suitable material, such as lithium metals, niobium oxide, etc.) and lithiated Nafion binder positioned adjacent to the copper anode current collector 162, a lithiated solid polymer electrolyte Nafion membrane 184, a cathode 186 formed of lithium cobalt oxide and lithiated Nafion binder, and an aluminum cathode current collector 174 (which may be the same as the first pouch battery 130) positioned adjacent to the cathode 186. The lithiated solid polymer electrolyte Nafion membrane 184 may act as the separator for the second lithium-ion cell 180. The second lithium-ion cell 180 may not include, or may be devoid of, any liquid electrolyte or free/unabsorbed liquid electrolyte. In some embodiments, air that fills the space between the copper anode current collector 162 and the aluminum cathode current collector 174 may be vacuumed out of the pouch. During charging and/or discharging, lithium ions may move between the anode 182 and the cathode 186 via the lithiated Nafion. The cell may be at least partially positioned within the pouch of the pouch battery, and may be a lithium-ion cell, a lithium-polymer cell, or another cell type. Although different amounts of lithium ions are illustrated in the drawings of the first lithium-ion cell 160 and the second lithium-ion cell 180, the same number of lithium ions may be present in either cell. Nafion may be a sulfonated tetrafluoroethylene based fluoropolymer-copolymer and may be classified as a synthetic polymer with ionomers. Nafion may be in powder or liquid form. Lithiated Nafion may be Nafion that has been contacted with lithium, such that lithium is incorporated into its chemical structure. A binder may be a binding agent that holds or draws materials together to form a whole. In some instances, a binder may act as a chemical adhesive to form, for example, an electrode. A lithiated Nafion binder may be a binder that is formed at least partially of lithiated Nafion. Lithiated Nafion binders may be in liquid or solid form, and may be dried or otherwise cured into a solid or gel form. Lithiated Nafion binders may be configured to conduct lithium ions during charging and/or discharging of a cell. A lithiated solid polymer electrolyte Nafion membrane may be a membrane that is a solid polymer electrolyte, where the solid polymer electrolyte is lithiated Nafion. In some embodiments, lithium ions may be dispersed in the Nafion upon lithiation to form the lithiated Nafion, which may then be used as the solid polymer electrolyte. The lithiated solid polymer electrolyte Nafion membrane may be configured to conduct lithium ions during charging and/or discharging of a cell.
In some embodiments, instead of a lithiated solid polymer electrolyte Nafion membrane, a membrane may be formed using a porous composite material, such as a GORETEX® fuel cell membrane or other lithiated material, that can be soaked in lithiated Nafion liquid or powder to absorb the lithiated Nafion. The resultant material may be about 50% to about 70% porous material, and about 30% to about 50% lithiated Nafion. Such composite materials may be less costly than other lithiated solid polymer electrolyte Nafion membranes. Accordingly, certain embodiments, such as the second pouch battery 150, may include a battery having a package, such as a pouch, and at least one cell positioned within the package. The at least one cell may include the anode 182, a lithiated polymer electrolyte membrane, such as the lithiated solid polymer electrolyte Nafion membrane 184 or a lithiated Nafion membrane, and the cathode 186. The lithiated polymer electrolyte membrane may be arranged between the anode and the cathode. The lithiated polymer electrolyte membrane may be configured to conduct lithium ions during charging and discharging. The anode 182 may include a first lithiated polymer binder, such as a lithiated Nafion binder, that is configured to conduct lithium ions during charging and discharging. The cathode 186 may include a second lithiated polymer binder configured to conduct lithium ions during charging and discharging. The first lithiated polymer binder and the second lithiated polymer binder may be formed of the same lithiated polymer material (e.g., “first” and “second” are used to denote the specific binders being referenced, etc.), such as lithiated Nafion. Embodiments of the disclosure include replacing the liquid electrolyte in lithium-ion batteries with a non-flammable lithium-ion conducting solid polymeric electrolyte separator. Embodiments of the disclosure may enhance battery life and reduce safety risk. Some embodiments may provide higher energy density batteries that thrive at high ambient temperatures up to 50 degrees Celsius. Certain embodiments may have similar or lower conductivity and cell resistance relative to conventional cells, and may demonstrate greater than 500 cycles at ambient temperatures of 25 degrees Celsius and still maintain greater than 80% of initial capacity at a 1C charge and 1C discharge rate. Certain embodiments may demonstrate greater than 200 cycles at ambient temperatures of 60 degrees Celsius, and still maintain greater than 80% of initial capacity at a 1C charge and discharge rate. Certain embodiments may demonstrate greater than 200 cycles at greater than 4.5V cell voltage at 25 degrees Celsius. Certain embodiments may have energy density (Wh/L and Wh/kg) similar to or better than that of a conventional cell of the same electrode size. Certain embodiments may cost the same as, or less than, a conventional cell of the same electrode size. Certain embodiments may include a Nafion cell-making process that is a drop-in replacement for a conventional battery-making process. Some embodiments may use a robust cation (Li+) conducting solid polymeric membrane to replace the liquid/gel electrolyte+porous polyethylene separator combination that is used in conventional cells. The second pouch battery 150 may therefore use Nafion as an electrolyte separator and as the binder. Because Nafion is an excellent ion-transport membrane, electrolyte use may be reduced by about 70%, or may be completely eliminated.
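The cycle-life targets stated above (for example, more than 500 cycles at 25 degrees Celsius while retaining more than 80% of initial capacity at a 1C rate) amount to a simple acceptance check. The following is a minimal illustrative sketch in Python; the function name and the example fade data are hypothetical, and only the thresholds come from the text:

    # Acceptance check for the stated cycle-life target: capacity at cycle
    # `min_cycles` must still be at least `retention` of the initial capacity.
    def meets_cycle_life_target(capacities_mah, min_cycles=500, retention=0.80):
        if len(capacities_mah) <= min_cycles:
            return False  # not enough cycles recorded yet
        return capacities_mah[min_cycles] >= retention * capacities_mah[0]

    # Hypothetical cell fading linearly by 0.03% per cycle: 85% of capacity
    # remains at cycle 500, so the target is met.
    capacities = [1000.0 * (1.0 - 0.0003 * n) for n in range(601)]
    print(meets_cycle_life_target(capacities))  # True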
The lithium ions shuttle between the electrodes using the Nafion membrane separator and the Nafion binder as the pathway, instead of liquid electrolyte. To use Nafion as the binder, it is added to the anode and cathode paste during fabrication and then cast into electrodes. This seals in the Nafion distribution and eliminates the wettability-related performance issues seen in conventional batteries. Since the liquid electrolyte is significantly reduced or completely eliminated, electrolyte-degradation-related issues are significantly reduced. Some embodiments may operate at temperatures up to 60 degrees Celsius without the accompanying safety and degradation risks. Because the second lithium-ion cell 180 does not include liquid electrolyte or any free liquid electrolyte, the risk of electrolyte degradation is reduced, thereby reducing the risk of damage, such as swelling, of the second pouch battery 150 due to gas expansion and the like. As a result, the device 110 may not be impacted by issues caused by the second pouch battery 150. Example embodiments of the disclosure provide a number of technical features or technical effects. For example, in accordance with example embodiments of the disclosure, certain embodiments of the disclosure may include batteries with solid electrolyte membranes that may include little or no liquid electrolyte. The batteries described herein may therefore prevent swelling and/or expansion of the battery, improve performance and/or stability of cells, and improve safety. The above examples of technical features and/or technical effects of example embodiments of the disclosure are merely illustrative and not exhaustive. One or more illustrative embodiments of the disclosure have been described above. The above-described embodiments are merely illustrative of the scope of this disclosure and are not intended to be limiting in any way. Accordingly, variations, modifications, and equivalents of the embodiments disclosed herein are also within the scope of this disclosure. The above-described embodiments and additional and/or alternative embodiments of the disclosure will be described in detail hereinafter through reference to the accompanying drawings. ILLUSTRATIVE EMBODIMENTS AND USE CASES Referring to FIG. 2, a schematic drawing of a lithium-ion cell 200 with a solid electrolyte membrane is depicted in various cross-sectional views in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. The lithium-ion cell 200 may be part of a lithium-ion battery. For example, a lithium-ion battery may include one or more lithium-ion cells 200. The lithium-ion cell 200 may be disposed in a pouch, such as in an interior portion of a flexible or rigid battery pouch, or other suitable package. In some embodiments, the lithium-ion cell 200 may be part of a lithium-ion pouch battery. In FIG. 2, the lithium-ion cell 200 may include a first assembly 210, a lithiated Nafion membrane 220, and a second assembly 230. The lithiated Nafion membrane 220 may be a lithiated solid polymer electrolyte Nafion membrane. The first assembly 210 may be coupled to a first side of the lithiated Nafion membrane 220, and the second assembly 230 may be coupled to a second side, or opposite side, of the lithiated Nafion membrane 220. The first assembly 210 may include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 220.
The second assembly 230 may similarly include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 220. In other instances, one or more, or all, components of the lithium-ion cell 200 may be formed or otherwise assembled at the same time. The first assembly 210 may include a copper current collector 212 and an anode 214. The copper current collector 212 may be a negative current collector and/or may form a negative terminal of the battery. The copper current collector 212 may be disposed adjacent to the anode 214. In some instances, the copper current collector 212 may be in contact with the anode 214. The copper current collector 212 may form a first end of the lithium-ion cell 200. The anode 214 may be disposed adjacent to the copper current collector 212. The anode 214 may be formed of graphite and lithiated Nafion. For example, the anode 214 may include graphite that is dispersed in a first lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the graphite. The first lithiated Nafion binder may be configured to provide first lithium ion pathways in the anode 214, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 200. Other embodiments may include a material other than, or in addition to, graphite, such as lithium metals, niobium oxide, etc. The anode 214 may be coupled to the copper current collector 212. In some embodiments, the anode 214 may be laminated to the copper current collector 212, while in other embodiments, the anode 214 may be coupled to the copper current collector 212 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The lithiated Nafion membrane 220 may be configured to conduct lithium ions during charging and discharging of the lithium-ion cell 200. First lithium ion pathways in the anode 214 may extend from the lithiated Nafion membrane 220 to the copper current collector 212. Second lithium ion pathways in the cathode 234 may extend from the lithiated Nafion membrane 220 to the aluminum current collector 232. The second assembly 230 may include an aluminum current collector 232 and a cathode 234. The aluminum current collector 232 may be a positive current collector and/or may form a positive terminal of the battery. The aluminum current collector 232 may be disposed adjacent to the cathode 234. In some instances, the aluminum current collector 232 may be in contact with the cathode 234. The aluminum current collector 232 may form a second end of the lithium-ion cell 200. The cathode 234 may be disposed adjacent to the aluminum current collector 232. The cathode 234 may be disposed between the lithiated Nafion membrane 220 and the aluminum current collector 232. The cathode 234 may be formed of lithium cobalt oxide and lithiated Nafion. For example, the cathode 234 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the lithium cobalt oxide. The second lithiated Nafion binder may be configured to provide second lithium ion pathways in the cathode 234, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 200. 
The second lithium ion pathways may extend from the lithiated solid polymer electrolyte Nafion membrane to the aluminum current collector. The cathode 234 may be coupled to the aluminum current collector 232. In some embodiments, the cathode 234 may be laminated to the aluminum current collector 232, while in other embodiments, the cathode 234 may be coupled to the aluminum current collector 232 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The first assembly 210 may be coupled to a first side of the lithiated Nafion membrane 220, and the second assembly 230 may be coupled to a second side of the lithiated Nafion membrane 220 to form a stack 240. For example, as illustrated in FIG. 2, the stack may include the lithiated Nafion membrane 220 disposed between the anode 214 and the cathode 234. The stack 240 may be formed using a pinch roller or other suitable technique. The stack 240 may be placed into a pouch or other package for use in a battery. Additional methods of manufacturing are discussed with respect to FIG. 6. FIG. 3 is a schematic illustration of a cross-sectional view of a portion 310 of a lithium-ion cell 300, which may be a membrane electrode assembly, with detail views of lithium ion pathways in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. The lithium-ion cell 300 may be the same lithium-ion cell discussed with respect to FIG. 2. In FIG. 3, an anode 314 is depicted adjacent to, and in contact with, a first side of a lithiated Nafion membrane 312, and a cathode 316 is depicted adjacent to, and in contact with, a second side of the lithiated Nafion membrane 312. As discussed with respect to FIG. 2, the anode 314 may include graphite that is dispersed in a first lithiated Nafion binder. In a first detail view 320 of an interface between the anode 314 and the lithiated Nafion membrane 312, graphite particles 316 are illustrated as dispersed in a first lithiated Nafion binder 340. The graphite particles 316 may be dispersed and held together or bound with the first lithiated Nafion binder 340. The anode 314 may have a weight ratio of between about 50% and about 67% graphite, and between about 33% and about 50% lithiated Nafion binder. The lithiated Nafion binder 340 may provide first lithium ion pathways in or through the anode 314 (e.g., between the graphite particles 316, etc.). Lithium ions 330 may therefore pass or flow through the anode 314 during charging and discharging of the lithium-ion cell 300. Because the lithiated Nafion membrane 312 also includes Nafion, lithium ions 330 may pass through the lithium-ion cell 300, and more particularly between the lithiated Nafion membrane 312 and the anode 314, without the use of liquid electrolyte. As discussed with respect to FIG. 2, the cathode 316 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder. In a second detail view 330 of an interface between the cathode 316 and the lithiated Nafion membrane 312, lithium cobalt oxide particles 318 are illustrated as dispersed in a second lithiated Nafion binder 340, which may be the same material as the first lithiated Nafion binder. The lithium cobalt oxide particles 318 may be dispersed and held together or bound with the second lithiated Nafion binder 340. The cathode 316 may have a weight ratio of between about 50% and about 67% lithium cobalt oxide, and between about 33% and about 50% lithiated Nafion binder.
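These dry-electrode weight ratios follow from simple mass arithmetic once the solvent has evaporated. Below is a minimal sketch in Python with an illustrative helper name and gram quantities; the 67/33 and 50/50 endpoints correspond to the stated ranges:

    # Weight fractions of active material and lithiated Nafion binder in the
    # dry electrode, after all solvent has evaporated.
    def dry_fractions(active_g, binder_dry_g):
        total = active_g + binder_dry_g
        return active_g / total, binder_dry_g / total

    # 1 g graphite with 0.5 g dry lithiated Nafion: the ~67%/33% endpoint.
    print(dry_fractions(1.0, 0.5))  # (0.666..., 0.333...)
    # Equal dry masses: the 50%/50% endpoint of the stated range.
    print(dry_fractions(1.0, 1.0))  # (0.5, 0.5)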
The lithiated Nafion binder 340 may provide second lithium ion pathways in or through the cathode 316 (e.g., between the lithium cobalt oxide particles 318, etc.). Lithium ions 330 may therefore pass or flow through the cathode 316 during charging and discharging of the lithium-ion cell 300. Because the lithiated Nafion membrane 312 also includes Nafion that conducts lithium ions during charging and discharging, lithium ions 330 may pass through the lithium-ion cell 300, and more particularly between the lithiated Nafion membrane 312 and the cathode 316, without the use of liquid electrolyte. In some instances, the lithium ions 330 may pass through the cathode 316 using the second lithium ion pathways, through the lithiated Nafion membrane 312, and through the anode 314 using the first lithium ion pathways to complete the circuit across the lithium-ion cell 300. Accordingly, the lithium-ion cell 300 may not rely on pores and/or wettability to operate, unlike typical lithium-ion cells. FIG. 4 is a schematic illustration of a cross-sectional view of a lithium-ion cell 400 with a solid electrolyte membrane and layers (also referred to as sublayers herein) in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. The lithium-ion cell 400 may be the same lithium-ion cell discussed with respect to FIGS. 2 and 3. The lithium-ion cell 400 may be part of a lithium-ion battery. For example, a lithium-ion battery may include one or more lithium-ion cells 400. The lithium-ion cell 400 may be disposed in a pouch, such as in an interior portion of a flexible or rigid battery pouch, or other suitable package. In some embodiments, the lithium-ion cell 400 may be part of a lithium-ion pouch battery. In FIG. 4, the lithium-ion cell 400 may include a first assembly 410, a lithiated Nafion membrane 420, and a second assembly 430. The lithiated Nafion membrane 420 may be a lithiated solid polymer electrolyte Nafion membrane. The first assembly 410 may be coupled to a first side of the lithiated Nafion membrane 420, and the second assembly 430 may be coupled to a second side, or opposite side, of the lithiated Nafion membrane 420. The first assembly 410 may include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 420. The second assembly 430 may similarly include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 420. In other instances, one or more, or all, components of the lithium-ion cell 400 may be formed or otherwise assembled at the same time. The first assembly 410 may include a copper current collector 412, an anode 414, and a first lithiated Nafion sublayer 416. The copper current collector 412 may be a negative current collector and/or may form a negative terminal of the battery. The copper current collector 412 may be disposed adjacent to the anode 414. In some instances, the copper current collector 412 may be in contact with the anode 414. The copper current collector 412 may form a first end of the lithium-ion cell 400. The anode 414 may be disposed adjacent to the copper current collector 412. For example, both the anode 414 and the first lithiated Nafion sublayer 416 may be disposed between the lithiated Nafion membrane 420 and the copper current collector 412. The anode 414 may be formed of graphite and lithiated Nafion.
For example, the anode 414 may include graphite that is dispersed in a first lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the graphite. The first lithiated Nafion binder may be configured to provide first lithium ion pathways in the anode 414, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 400. The anode 414 may be coupled to the copper current collector 412. In some embodiments, the anode 414 may be laminated to the copper current collector 412, while in other embodiments, the anode 414 may be coupled to the copper current collector 412 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The first lithiated Nafion sublayer 416 may be a layer of lithiated Nafion that is coupled to the anode 414. The first lithiated Nafion sublayer 416 may increase conductivity between the anode 414 and the lithiated Nafion membrane 420. In some embodiments, the first lithiated Nafion sublayer 416 may increase the number of lithium ion pathways in the anode 414. The first lithiated Nafion sublayer 416 may be disposed between, and/or optionally in contact with both, the anode 414 and the lithiated Nafion membrane 420. In some embodiments, the first lithiated Nafion sublayer 416 may be laminated to the anode 414, while in other embodiments, the first lithiated Nafion sublayer 416 may be coupled to the anode 414 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The lithiated Nafion membrane 420 may be configured to conduct lithium ions during charging and discharging of the lithium-ion cell 400. First lithium ion pathways in the anode 414 may extend from the lithiated Nafion membrane 420 and/or the first lithiated Nafion sublayer 416 to the copper current collector 412. Second lithium ion pathways in the cathode 434 may extend from the lithiated Nafion membrane 420 and/or a second lithiated Nafion sublayer 436 to the aluminum current collector 432. The second assembly 430 may include an aluminum current collector 432, a cathode 434, and the second lithiated Nafion sublayer 436. The aluminum current collector 432 may be a positive current collector and/or may form a positive terminal of the battery. The aluminum current collector 432 may be disposed adjacent to the cathode 434. In some instances, the aluminum current collector 432 may be in contact with the cathode 434. The aluminum current collector 432 may form a second end of the lithium-ion cell 400. The cathode 434 may be disposed adjacent to the aluminum current collector 432. The cathode 434 may be disposed between the lithiated Nafion membrane 420 and the aluminum current collector 432. For example, both the cathode 434 and the second lithiated Nafion sublayer 436 may be disposed between the lithiated Nafion membrane 420 and the aluminum current collector 432. The cathode 434 may be formed of lithium cobalt oxide and lithiated Nafion. For example, the cathode 434 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the lithium cobalt oxide. The second lithiated Nafion binder may be configured to provide second lithium ion pathways in the cathode 434, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 400.
The second lithium ion pathways may extend from the lithiated solid polymer electrolyte Nafion membrane to the aluminum current collector. The cathode 434 may be coupled to the aluminum current collector 432. In some embodiments, the cathode 434 may be laminated to the aluminum current collector 432, while in other embodiments, the cathode 434 may be coupled to the aluminum current collector 432 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The second lithiated Nafion sublayer 436 may be a layer of lithiated Nafion that is coupled to the cathode 434. The second lithiated Nafion sublayer 436 may increase conductivity between the cathode 434 and the lithiated Nafion membrane 420. In some embodiments, the second lithiated Nafion sublayer 436 may increase the number of lithium ion pathways in the cathode 434. The second lithiated Nafion sublayer 436 may be disposed between, and/or optionally in contact with both, the cathode 434 and the lithiated Nafion membrane 420. In some embodiments, the second lithiated Nafion sublayer 436 may be laminated to the cathode 434, while in other embodiments, the second lithiated Nafion sublayer 436 may be coupled to the cathode 434 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. A first thickness of the first lithiated Nafion sublayer 416 and/or the second lithiated Nafion sublayer 436 may be less than a second thickness of the anode 414 and/or the cathode 434. The first lithiated Nafion sublayer 416 and the second lithiated Nafion sublayer 436 may have equal thicknesses. The first lithiated Nafion sublayer 416 and the second lithiated Nafion sublayer 436 may have a first thickness that is less than or equal to a third thickness of the lithiated Nafion membrane 420. The illustrations in FIG. 4 or any other drawings may not be to scale. The first assembly 410 may be coupled to a first side of the lithiated Nafion membrane 420, and the second assembly 430 may be coupled to a second side of the lithiated Nafion membrane 420 to form a stack 440. For example, as illustrated in FIG. 4, the stack may include the lithiated Nafion membrane 420 disposed between the first lithiated Nafion sublayer 416 and the second lithiated Nafion sublayer 436, as well as between the anode 414 and the cathode 434. The stack 440 may be formed using a pinch roller or other suitable technique. The stack 440 may be placed into a pouch or other package for use in a battery. Additional methods of manufacturing are discussed with respect to FIG. 6. FIG. 5 is a schematic illustration of a cross-sectional view of a lithium-ion cell 500 with a solid electrolyte membrane and electrolyte in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. The lithium-ion cell 500 may be the same lithium-ion cell discussed with respect to FIGS. 2-3. In some embodiments, a liquid electrolyte may be introduced to the lithium-ion cell to improve conductivity. However, rather than free-moving liquid electrolyte in a battery package, the liquid electrolyte introduced into the cell may be absorbed by and/or adsorbed to the lithiated Nafion membrane. As a result, there may not be any liquid electrolyte that is subject to leaking or degradation (and subsequent gas generation, etc.). In FIG.
5, the lithium-ion cell 500 may include a copper current collector 512, an anode 514, a lithiated polymer electrolyte membrane 520, a cathode 534, and an aluminum current collector 532. The lithium-ion cell 500 may be placed in a pouch 510 or other package. In some embodiments, lithiated Nafion sublayers, or other components, may be included in the lithium-ion cell 500. The copper current collector 512 may be a negative current collector and/or may form a negative terminal of the battery. The copper current collector 512 may be disposed adjacent to the anode 514. In some instances, the copper current collector 512 may be in contact with the anode 514. The copper current collector 512 may form a first end of the lithium-ion cell 500. The anode 514 may be disposed adjacent to the copper current collector 512. The anode 514 may be formed of graphite and lithiated Nafion. For example, the anode 514 may include graphite that is dispersed in a first lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the graphite. The first lithiated Nafion binder may be configured to provide first lithium ion pathways in the anode 514, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 500. The lithiated polymer electrolyte membrane 520 may be a lithiated Nafion membrane, and may be configured to conduct lithium ions during charging and discharging of the lithium-ion cell 500. First lithium ion pathways in the anode 514 may extend from the lithiated polymer electrolyte membrane 520 to the copper current collector 512. Second lithium ion pathways in the cathode 534 may extend from the lithiated polymer electrolyte membrane 520 to the aluminum current collector 532. The aluminum current collector 532 may be a positive current collector and/or may form a positive terminal of the battery. The aluminum current collector 532 may be disposed adjacent to the cathode 534. In some instances, the aluminum current collector 532 may be in contact with the cathode 534. The aluminum current collector 532 may form a second end of the lithium-ion cell 500. The cathode 534 may be disposed adjacent to the aluminum current collector 532. The cathode 534 may be disposed between the lithiated polymer electrolyte membrane 520 and the aluminum current collector 532. The cathode 534 may be formed of lithium cobalt oxide and lithiated Nafion. For example, the cathode 534 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the lithium cobalt oxide. The second lithiated Nafion binder may be configured to provide second lithium ion pathways in the cathode 534, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 500. The second lithium ion pathways may extend from the lithiated solid polymer electrolyte Nafion membrane to the aluminum current collector. A liquid electrolyte 530 may be introduced to the lithium-ion cell 500. For example, the liquid electrolyte 530 may be pipetted, poured, or otherwise provided to the pouch 510. At a first point in time 540 after the liquid electrolyte 530 is introduced to the lithium-ion cell 500, the lithiated polymer electrolyte membrane 520 may absorb and/or adsorb the liquid electrolyte. For example, at t=0 in FIG.
5, the liquid electrolyte 530 may be at a bottom of the pouch 510, and the lithiated polymer electrolyte membrane 520 may begin absorbing and/or adsorbing the liquid electrolyte 530. At a second point in time 550, denoted by t=0+n in FIG. 5, the liquid electrolyte 530 may be absorbed by and/or adsorbed to the lithiated polymer electrolyte membrane 520. In some embodiments, a portion of the liquid electrolyte 530 may be absorbed by and/or adsorbed to the first lithiated Nafion binder of the anode 514 and/or the second lithiated Nafion binder of the cathode 534 (as denoted by the black crosshatching in FIG. 5), in addition to the lithiated polymer electrolyte membrane 520. Accordingly, the lithiated Nafion binder may also be solvated by the liquid electrolyte. As a result, the liquid electrolyte 530 may no longer be free to move about the pouch 510 in liquid form. The amount of liquid electrolyte may be significantly less than the amount used in typical lithium-ion cells, such as about 70% less, or more. In some embodiments, the lithiated polymer electrolyte membrane 520 may be saturated with the liquid electrolyte 530. Accordingly, an amount of liquid electrolyte 530 introduced to the lithium-ion cell 500 may be determined using saturation properties of the lithiated polymer electrolyte membrane 520. In other embodiments, there may not be any liquid electrolyte 530 introduced into the lithium-ion cell, and as a result, the battery and/or cell may be devoid of a liquid electrolyte. As illustrated in FIG. 5, at a third point in time 560, the lithium-ion cell 500 may be compressed during bonding. Compression may be in either or both of the lateral and vertical directions. Compression may be a result of a change in dimension of the lithiated polymer electrolyte membrane 520 (e.g., due to swelling) and/or other components of the lithium-ion cell 500. FIG. 6 depicts various methods of manufacturing lithium-ion batteries with solid electrolyte membranes and/or subcomponents in accordance with one or more embodiments of the disclosure. Other embodiments may include additional, fewer, or different operations than those illustrated in the examples of FIG. 6. In some embodiments, a sequence of operations may be different than that illustrated in FIG. 6. A first example process flow 600 for manufacturing a cell for lithium-ion batteries in accordance with one or more embodiments is depicted. The process flow 600 may be used to form a cell with Nafion as a separator and/or electrolyte with Nafion binder in the respective electrodes. At a first block 602, an anode formulation that includes Nafion may be coated on an anode current collector foil, and a cathode formulation that includes Nafion may be coated on a cathode current collector foil. For example, the anode formulation may be coated on a high-purity copper anode current collector foil. The copper foil may have a matted surface in some embodiments, or a smooth surface in other embodiments. Similarly, the cathode formulation may be coated on a high-purity aluminum cathode current collector foil, where the aluminum foil may have a matted surface in some embodiments, or a smooth surface in other embodiments. The anode formulation may include lithiated Nafion in either a dispersion form or a powder form. The anode formulation composition with either the dispersion form or the powder form of lithiated Nafion may be substantially the same, as determined by the percentage of Nafion in the electrode.
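That equivalence can be checked numerically: the wet mixtures differ, but the dry Nafion percentage is the same for both binder forms. Below is a minimal sketch in Python, assuming, as Tables 1 and 2 below suggest, a 10 wt % lithiated Nafion dispersion for the liquid route; the masses and names are illustrative:

    # Nafion weight percent in the dry electrode for the two binder routes.
    def dry_nafion_pct(active_g, nafion_dry_g):
        return 100.0 * nafion_dry_g / (active_g + nafion_dry_g)

    # Liquid route: 5 g of a 10 wt % dispersion delivers 0.5 g dry Nafion.
    liquid_route = dry_nafion_pct(1.0, 5.0 * 0.10)
    # Powder route: 0.5 g of lithiated Nafion powder added directly.
    powder_route = dry_nafion_pct(1.0, 0.5)
    print(liquid_route, powder_route)  # both ~33.3, i.e., the same dry electrode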
To create the anode formulation using a liquid Nafion binder, the constituents illustrated in Table 1 may be mixed in a plastic or ceramic bowl with a lid. After mixing, zirconia ceramic beads or any other suitable high-energy mixing equipment may be added to thoroughly mix the anode formulation. The viscosity of the final anode formulation depends on the electrode coating method. If electrodes are coated by slot-coating, then the viscosity may be in the range of 1000-5000 centipoise. Table 1 illustrates the range of composition of Nafion in the anode electrode from 33% to 50%. The resulting anode formulation may be coated on a high-purity copper anode current collector foil at the first block 602.

TABLE 1

Constituents                                         Quantity (g)   % in wet mixture/% in dry electrode   % in wet mixture/% in dry electrode
Graphite                                             1              14.3/67                               8.3/50
1100 EW Lithium-Ion (10 wt % Nafion in n-Propanol)   0.5            71.4/33                               83.3/50
De-Ionized Water                                     1              14.3/0                                8.3/0

To create the anode formulation using a powder Nafion binder, the powder Nafion binder may be mixed with graphite powder and the isopropanol solvent to create an ink/paste that can be coated by slot-coating or doctor-blading. The solvent content can be adjusted to create a formulation of the appropriate viscosity. Similar to the liquid Nafion binder, appropriate mixing equipment may be employed to produce a desired viscosity. Table 2 provides example Nafion composition ranges for lithium-ion batteries as described herein. The resulting anode formulation may be coated on a high-purity copper anode current collector foil at the first block 602.

TABLE 2

Constituents                                                Quantity (g)   % in wet mixture/% in dry electrode   % in wet mixture/% in dry electrode
Graphite                                                    1              33/50                                 40/67
1100 EW Lithium-Ion powder (10 wt % Nafion in n-Propanol)   1              33/50                                 n/a
1100 EW Lithium-Ion powder (10 wt % Nafion in n-Propanol)   0.5            n/a                                   20/33
Iso-propanol                                                1              33/0                                  40/0

To create the cathode formulation using a liquid Nafion binder, the constituents illustrated in Table 3 may be mixed in a plastic or ceramic bowl with a lid. After mixing, zirconia ceramic beads or any other suitable high-energy mixing equipment may be added to thoroughly mix the cathode formulation. The viscosity of the final cathode formulation depends on the electrode coating method. If electrodes are coated by slot-coating, then the viscosity may be in the range of 1000-5000 centipoise. Table 3 illustrates the range of composition of Nafion in the cathode electrode from 33% to 50%. The resulting cathode formulation may be coated on a high-purity aluminum cathode current collector foil at the first block 602.

TABLE 3

Constituents                                         Quantity (g)   % in wet mixture/% in dry electrode   % in wet mixture/% in dry electrode
LiCoO2                                               1              14.3/67                               8.3/50
1100 EW Lithium-Ion (10 wt % Nafion in n-Propanol)   0.5            71.4/33                               83.3/50
De-Ionized Water                                     1              14.3/0                                8.3/0

To create the cathode formulation using a powder Nafion binder, the powder Nafion binder may be mixed with the cathode active material (e.g., LiCoO2 powder, etc.) and the isopropanol solvent to create an ink/paste that can be coated by slot-coating or doctor-blading. The solvent content can be adjusted to create a formulation of the appropriate viscosity. Similar to the liquid Nafion binder, appropriate mixing equipment may be employed to produce a desired viscosity. Table 4 provides example Nafion composition ranges for lithium-ion batteries as described herein.
The resulting cathode formulation may be coated on a high-purity aluminum cathode current collector foil at the first block 602.

TABLE 4

Constituents                                                Quantity (g)   % in wet mixture/% in dry electrode   % in wet mixture/% in dry electrode
LiCoO2                                                      1              33/50                                 40/67
1100 EW Lithium-Ion powder (10 wt % Nafion in n-Propanol)   1              33/50                                 n/a
1100 EW Lithium-Ion powder (10 wt % Nafion in n-Propanol)   0.5            n/a                                   20/33
Iso-propanol                                                1              33/0                                  40/0

At a second block 604, the anode and cathode electrodes may be dried until the solvents have evaporated. For example, the anode and cathode may each be dried for about 30 minutes at a temperature of about 120 degrees Celsius and/or until all solvents have evaporated. In some embodiments, Nafion sublayers may also be coated and dried on the respective cathode and anode electrodes. At a third block 606, the electrodes may be sprayed or slot-coated with a lithiated Nafion solution at a thickness of between about 5 and about 10 microns. For example, the anode and the cathode may each be sprayed or otherwise coated with a lithiated Nafion solution to create a contiguous layer of recast Nafion that is approximately 5 to 10 microns thick. At a fourth block 608, the electrodes may be dried under a vacuum. For example, the anode and cathode may be dried at a temperature of about 120 degrees Celsius for about 30 minutes under a vacuum of 10⁻³ Torr. The electrodes may be dried until all solvents have evaporated. A second example process flow 610 for manufacturing a dry cell for lithium-ion batteries (e.g., a membrane electrode assembly, etc.) in accordance with one or more embodiments is depicted. The process flow 610 may be used to form a dry cell with Nafion as a separator and/or electrolyte with Nafion binder in the respective electrodes. At a first block 612, the cathode foil and the anode foil may be placed on either side of a lithiated Nafion membrane to form a stack. For example, the cathode foil and the anode foil formed as output of the first process flow 600 may be placed on opposite sides of a lithiated Nafion membrane to form a stack. The lithiated Nafion membrane may be a lithiated solid polymer electrolyte Nafion membrane configured to conduct lithium ions. At a second block 614, pressure-distributing sheets may be placed behind the cathode and anode foils. For example, a pressure-distributing sheet, such as a GYLON® material, may be placed on the outside surfaces of, or otherwise adjacent to, the cathode and anode foils. At a third block 616, the stack may be compressed at an elevated temperature and pressure. For example, the stack, including the pressure-distributing sheets, may be compressed at a pressure of about 200 pounds per square inch and at a temperature of about 120 degrees Celsius. The stack may remain compressed for about 5-6 minutes. The temperature may be maintained to within about 1 degree during compression. At a fourth block 618, the stack may be cooled to form a dry cell. For example, after compression is complete, the stack may be allowed to cool, or actively cooled, to room temperature. A third example process flow 620 for manufacturing a wet cell for lithium-ion batteries in accordance with one or more embodiments is depicted. The process flow 620 may be used to form a wet cell with Nafion as a separator and/or electrolyte with Nafion binder in the respective electrodes. At a first block 622, the dry cell formed as output of the second process flow 610 may be placed into a pouch, as elaborated below following the sketch.
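The dry-cell lamination flow just described (blocks 612 through 618) reduces to a handful of process setpoints. A minimal sketch in Python with hypothetical names; only the numeric values are taken from the text above:

    # Setpoints for the dry-cell compression step (blocks 612-618).
    HOT_PRESS = {
        "pressure_psi": 200,            # compression pressure
        "temperature_c": 120,           # press temperature
        "temperature_tolerance_c": 1,   # hold to within about 1 degree
        "dwell_minutes": (5, 6),        # about 5-6 minutes under compression
        "cool_to": "room temperature",  # passive or active cooling afterward
    }

    # True if the press temperature stayed within the stated tolerance.
    def temperature_ok(measured_c, setpoint_c=120.0, tolerance_c=1.0):
        return abs(measured_c - setpoint_c) <= tolerance_c

    print(temperature_ok(120.6))  # True
    print(temperature_ok(122.0))  # False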
For example, the dry cell may be placed into a flexible pouch, such as an aluminum pouch, after the dry cell cools to room temperature. The pouch may be sealed with a spout, notch, or other opening remaining through which liquid electrolyte can be placed into the pouch. At a second block 624, the electrolyte may be pipetted into the pouch. For example, a volume of liquid electrolyte (e.g., EC+DMC+1M LiPF6, etc.) may be pipetted or otherwise placed into the pouch through the opening. After the electrolyte is placed into the pouch, the pouch may sit for a period of time, such as about 30 minutes. At a third block 626, the cell may be compressed and charged. For example, the pouch and/or cell may be compressed at a pressure of about 2 to about 5 pounds per square inch. The pouch may be compressed for about 30 minutes to provide time for the lithiated Nafion membrane to fully absorb or adsorb the electrolyte. In some instances, using a warm electrolyte, or heating the cell to about 35 to about 40 degrees Celsius during compression, may improve absorption or adsorption of the electrolyte by the lithiated Nafion membrane. While the pouch and/or cell is compressed, and after the liquid electrolyte is placed into the pouch, the cell may be charged at a rate of about 0.5 C at room temperature until the cell voltage is about 4.3 volts to about 4.5 volts. After the cell voltage is about 4.3 volts to about 4.5 volts, the cell may be compressed to about 20 pounds per square inch while the pouch remains at least partially unsealed. At a fourth block 628, the pouch may be vacuumed and sealed. For example, the pouch may be vacuumed to remove gases and/or any excess liquid electrolyte, and may then be sealed to enclose the cell in the pouch. In some embodiments, the pouch may be vacuumed and sealed while compressed at about 20 pounds per square inch. For cell formation, the battery may be charged to about 4.5 volts and discharged to about 2.5 volts while compressed, and the compressed cell may be cycled two to five times from about 4.5 volts to about 2.5 volts. Cell formation should occur within about five to about ten charge and discharge cycles. The coulombic efficiency may be monitored to determine the endpoint of the formation step. The coulombic efficiency may be between about 95% and about 98% within ten charge cycles. FIG. 7 is a schematic illustration of a cross-sectional view of a lithium-ion cell 700 with a solid electrolyte membrane and layers (also referred to as sublayers herein) in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. The lithium-ion cell 700 may have a solid polymer electrolyte, and its additional sublayers may increase battery robustness. The lithium-ion cell 700 may include a set of layers between the electrodes and the current collectors. The sublayers may be formed of PVDF and the respective electrode materials. For example, for the anode, the adjacent sublayer may be formed of graphite and PVDF, and for the cathode, the adjacent sublayer may be formed of lithium cobalt oxide and PVDF. The sublayers may ensure that Nafion in the electrode layers does not interact with the current collector materials. Because Nafion is a superacid, Nafion may corrode the copper and aluminum current collectors. The resulting ionic contaminants can ion-exchange with the Nafion and cause a reduction in conductivity, as well as a reduction in capacity. The addition of sublayers may therefore reduce or eliminate such contact.
As a result, only electron flow occurs between the sublayers and the current collectors, and all ionic interactions occur in the Nafion sublayer and the electrodes. The lithium-ion cell 700 may be part of a lithium-ion battery. For example, a lithium-ion battery may include one or more lithium-ion cells 700. The lithium-ion cell 700 may be disposed in a pouch, such as in an interior portion of a flexible or rigid battery pouch, or other suitable package. In some embodiments, the lithium-ion cell 700 may be part of a lithium-ion pouch battery. In FIG. 7, the lithium-ion cell 700 may include a first assembly 710, a lithiated Nafion membrane 720, and a second assembly 730. The lithiated Nafion membrane 720 may be a lithiated solid polymer electrolyte Nafion membrane. The first assembly 710 may be coupled to a first side of the lithiated Nafion membrane 720, and the second assembly 730 may be coupled to a second side, or opposite side, of the lithiated Nafion membrane 720. The first assembly 710 may include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 720. The second assembly 730 may similarly include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 720. In other instances, one or more, or all, components of the lithium-ion cell 700 may be formed or otherwise assembled at the same time. The first assembly 710 may include a copper current collector 712, a first sublayer 740, an anode 714, and a first lithiated Nafion sublayer 716. The copper current collector 712 may be a negative current collector and/or may form a negative terminal of the battery. The copper current collector 712 may be disposed adjacent to the first sublayer 740. The first sublayer 740 may be positioned between the copper current collector 712 and the anode 714. The copper current collector 712 may be in contact with the first sublayer 740. The copper current collector 712 may form a first end of the lithium-ion cell 700. The anode 714 may be disposed adjacent to the first sublayer 740. The anode 714, the first sublayer 740, and the first lithiated Nafion sublayer 716 may be disposed between the lithiated Nafion membrane 720 and the copper current collector 712. The first sublayer 740 may be in contact with the copper current collector 712 and the anode 714. The first sublayer 740 may be formed of PVDF. In some embodiments, the first sublayer 740 may be formed of graphite and PVDF. For example, the first sublayer 740 may not include Nafion and may therefore form an insulating layer between the copper current collector 712 and the anode 714. In some embodiments, the first sublayer 740 may be laminated to the anode 714, while in other embodiments, the first sublayer 740 may be coupled to the anode 714 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The anode 714 may be formed of graphite and lithiated Nafion. For example, the anode 714 may include graphite that is dispersed in a first lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the graphite. The first lithiated Nafion binder may be configured to provide first lithium ion pathways in the anode 714, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 700.
The first lithiated Nafion sublayer 716 may be a layer of lithiated Nafion that is coupled to the anode 714. The first lithiated Nafion sublayer 716 may increase conductivity between the anode 714 and the lithiated Nafion membrane 720. In some embodiments, the first lithiated Nafion sublayer 716 may increase the number of lithium ion pathways in the anode 714. The first lithiated Nafion sublayer 716 may be disposed between, and/or optionally in contact with both, the anode 714 and the lithiated Nafion membrane 720. In some embodiments, the first lithiated Nafion sublayer 716 may be laminated to the anode 714, while in other embodiments, the first lithiated Nafion sublayer 716 may be coupled to the anode 714 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The lithiated Nafion membrane 720 may be configured to conduct lithium ions during charging and discharging of the lithium-ion cell 700. First lithium ion pathways in the anode 714 may extend from the lithiated Nafion membrane 720 and/or the first lithiated Nafion sublayer 716 to the copper current collector 712. Second lithium ion pathways in the cathode 734 may extend from the lithiated Nafion membrane 720 and/or a second lithiated Nafion sublayer 736 to the aluminum current collector 732. The second assembly 730 may include an aluminum current collector 732, a second sublayer 742, a cathode 734, and a second lithiated Nafion sublayer 736. The aluminum current collector 732 may be a positive current collector and/or may form a positive terminal of the battery. The aluminum current collector 732 may be disposed adjacent to the second sublayer 742. In some instances, the aluminum current collector 732 may be in contact with the second sublayer 742. The aluminum current collector 732 may form a second end of the lithium-ion cell 700. The cathode 734 may be disposed adjacent to the second sublayer 742, such that the second sublayer 742 is between the cathode 734 and the aluminum current collector 732. The cathode 734 may be disposed between the lithiated Nafion membrane 720 and the aluminum current collector 732. For example, the cathode 734, the second sublayer 742, and the second lithiated Nafion sublayer 736 may be disposed between the lithiated Nafion membrane 720 and the aluminum current collector 732. The cathode 734 may be formed of lithium cobalt oxide and lithiated Nafion. For example, the cathode 734 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the lithium cobalt oxide. The second lithiated Nafion binder may be configured to provide second lithium ion pathways in the cathode 734, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 700. The second lithium ion pathways may extend from the lithiated solid polymer electrolyte Nafion membrane to the aluminum current collector. The second sublayer 742 may be in contact with the aluminum current collector 732 and the cathode 734. The second sublayer 742 may be formed of PVDF. In some embodiments, the second sublayer 742 may be formed of lithium cobalt oxide and PVDF. For example, the second sublayer 742 may not include Nafion and may therefore form an insulating layer between the aluminum current collector 732 and the cathode 734.
In some embodiments, the second sublayer 742 may be laminated to the cathode 734, while in other embodiments, the second sublayer 742 may be coupled to the cathode 734 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The cathode 734 may be coupled to the second sublayer 742. In some embodiments, the cathode 734 may be laminated to the second sublayer 742, while in other embodiments, the cathode 734 may be coupled to the second sublayer 742 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The second lithiated Nafion sublayer 736 may be a layer of lithiated Nafion that is coupled to the cathode 734. The second lithiated Nafion sublayer 736 may increase conductivity between the cathode 734 and the lithiated Nafion membrane 720. In some embodiments, the second lithiated Nafion sublayer 736 may increase the number of lithium ion pathways in the cathode 734. The second lithiated Nafion sublayer 736 may be disposed between, and/or optionally in contact with both, the cathode 734 and the lithiated Nafion membrane 720. In some embodiments, the second lithiated Nafion sublayer 736 may be laminated to the cathode 734, while in other embodiments, the second lithiated Nafion sublayer 736 may be coupled to the cathode 734 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. A first thickness of the first lithiated Nafion sublayer 716 and/or the second lithiated Nafion sublayer 736 may be less than a second thickness of the anode 714 and/or the cathode 734. The first lithiated Nafion sublayer 716 and the second lithiated Nafion sublayer 736 may have equal thicknesses. The first lithiated Nafion sublayer 716 and the second lithiated Nafion sublayer 736 may have a first thickness that is less than or equal to a third thickness of the lithiated Nafion membrane 720. The illustrations in FIG. 7 or any other drawings may not be to scale. The first assembly 710 may be coupled to a first side of the lithiated Nafion membrane 720, and the second assembly 730 may be coupled to a second side of the lithiated Nafion membrane 720 to form a stack. For example, as illustrated in FIG. 7, the stack may include the lithiated Nafion membrane 720 disposed between the first lithiated Nafion sublayer 716 and the second lithiated Nafion sublayer 736, as well as between the anode 714 and the cathode 734. The stack may be formed using a pinch roller or other suitable technique. The stack may be placed into a pouch or other package for use in a battery.

FIG. 8 is a schematic illustration of a cross-sectional view of a lithium-ion cell 800 with a solid electrolyte membrane and current collectors having roughened surfaces in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. The lithium-ion cell 800 may include current collectors with roughened surfaces, or current collectors that have surface features resulting in increased surface areas at interfaces with respective sublayers of the lithium-ion cell 800. Roughening may cause the electrodes to stay attached to the current collectors throughout the battery electrode and cell manufacturing process. Roughening may also reduce the electrical resistivity between the electrodes and the current collectors. The lithium-ion cell 800 may be part of a lithium-ion battery. For example, a lithium-ion battery may include one or more lithium-ion cells 800.
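Before continuing with FIG. 8, note that the thickness relationships stated above for the cell 700 (and repeated for the cells that follow) lend themselves to a simple consistency check. The following Python sketch uses assumed, illustrative thickness values; the patent gives no specific numbers for these layers.

def thicknesses_consistent(sublayer1_um: float, sublayer2_um: float,
                           anode_um: float, cathode_um: float,
                           membrane_um: float) -> bool:
    """Check the stated relationships: the two Nafion sublayers are equal
    in thickness, thinner than either electrode, and no thicker than the
    membrane."""
    return (sublayer1_um == sublayer2_um
            and sublayer1_um < min(anode_um, cathode_um)
            and sublayer1_um <= membrane_um)

# Assumed values in micrometers, for illustration only.
assert thicknesses_consistent(2.0, 2.0, 50.0, 45.0, 12.0)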
The lithium-ion cell 800 may be disposed in a pouch, such as in an interior portion of a flexible or rigid battery pouch, or other suitable package. In some embodiments, the lithium-ion cell 800 may be part of a lithium-ion pouch battery. In FIG. 8, the lithium-ion cell 800 may include a first assembly 810, a lithiated Nafion membrane 820, and a second assembly 830. The lithiated Nafion membrane 820 may be a lithiated solid polymer electrolyte Nafion membrane. The first assembly 810 may be coupled to a first side of the lithiated Nafion membrane 820, and the second assembly 830 may be coupled to a second side, or opposite side, of the lithiated Nafion membrane 820. The first assembly 810 may include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 820. The second assembly 830 may similarly include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 820. In other instances, one or more, or all, components of the lithium-ion cell 800 may be formed or otherwise assembled at the same time. The first assembly 810 may include a copper current collector 812, a first sublayer 840, an anode 814, and a first lithiated Nafion sublayer 816. The copper current collector 812 may be a negative current collector and/or may form a negative terminal of the battery. The copper current collector 812 may be disposed adjacent to the first sublayer 840. The first sublayer 840 may be positioned between the copper current collector 812 and the anode 814. The copper current collector 812 may be in contact with the first sublayer 840. The copper current collector 812 may form a first end of the lithium-ion cell 800. The copper current collector 812 may include one or more surface features 850 that cause a first surface of the copper current collector 812 that is in contact with the first sublayer 840 to be a roughened surface. The roughened surface may have an increased surface area relative to other surfaces. Roughened surfaces may be matted and/or corrugated surfaces. The anode 814 may be disposed adjacent to the first sublayer 840. The anode 814, the first sublayer 840, and the first lithiated Nafion sublayer 816 may be disposed between the lithiated Nafion membrane 820 and the copper current collector 812. The first sublayer 840 may be in contact with the copper current collector 812 and the anode 814. The first sublayer 840 may be formed of PVDF. In some embodiments, the first sublayer 840 may be formed of graphite and PVDF. For example, the first sublayer 840 may not include Nafion and may therefore form an insulating layer between the copper current collector 812 and the anode 814. In some embodiments, the first sublayer 840 may be laminated to the anode 814, while in other embodiments, the first sublayer 840 may be coupled to the anode 814 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The anode 814 may be formed of graphite and lithiated Nafion. For example, the anode 814 may include graphite that is dispersed in a first lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the graphite. The first lithiated Nafion binder may be configured to provide first lithium ion pathways in the anode 814, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 800.
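The surface-area gain from such roughening can be made concrete with simple geometry. The sketch below is illustrative only; the patent does not specify feature dimensions. For a triangular (corrugated) profile of peak-to-peak height h repeating with period p, each half-period of horizontal run p/2 becomes a slant of length sqrt((p/2)^2 + h^2), so the interface area grows by that ratio relative to a flat surface.

import math

def corrugation_area_factor(height_um: float, period_um: float) -> float:
    """Interface area of a triangular corrugated surface relative to a
    flat surface: slant length over horizontal run for each half-period."""
    half_period = period_um / 2.0
    return math.hypot(half_period, height_um) / half_period

# Assumed, illustrative dimensions: 2 um peak-to-peak features repeating
# every 8 um increase the contact area by roughly 12%.
print(corrugation_area_factor(2.0, 8.0))  # ~1.118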
The first lithiated Nafion sublayer 816 may be a layer of lithiated Nafion that is coupled to the anode 814. The first lithiated Nafion sublayer 816 may increase conductivity between the anode 814 and the lithiated Nafion membrane 820. In some embodiments, the first lithiated Nafion sublayer 816 may increase the number of lithium ion pathways in the anode 814. The first lithiated Nafion sublayer 816 may be disposed between, and/or optionally in contact with both, the anode 814 and the lithiated Nafion membrane 820. In some embodiments, the first lithiated Nafion sublayer 816 may be laminated to the anode 814, while in other embodiments, the first lithiated Nafion sublayer 816 may be coupled to the anode 814 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The lithiated Nafion membrane 820 may be configured to conduct lithium ions during charging and discharging of the lithium-ion cell 800. First lithium ion pathways in the anode 814 may extend from the lithiated Nafion membrane 820 and/or the first lithiated Nafion sublayer 816 to the copper current collector 812. Second lithium ion pathways in the cathode 834 may extend from the lithiated Nafion membrane 820 and/or a second lithiated Nafion sublayer 836 to the aluminum current collector 832. The second assembly 830 may include an aluminum current collector 832, a second sublayer 842, a cathode 834, and a second lithiated Nafion sublayer 836. The aluminum current collector 832 may be a positive current collector and/or may form a positive terminal of the battery. The aluminum current collector 832 may be disposed adjacent to the second sublayer 842. In some instances, the aluminum current collector 832 may be in contact with the second sublayer 842. The aluminum current collector 832 may form a second end of the lithium-ion cell 800. The aluminum current collector 832 may include one or more surface features 860 that cause a first surface of the aluminum current collector 832 that is in contact with the second sublayer 842 to be a roughened surface. The roughened surface may have an increased surface area relative to other surfaces. Roughened surfaces may be matted and/or corrugated surfaces. The cathode 834 may be disposed adjacent to the second sublayer 842, such that the second sublayer 842 is between the cathode 834 and the aluminum current collector 832. The cathode 834 may be disposed between the lithiated Nafion membrane 820 and the aluminum current collector 832. For example, the cathode 834, the second sublayer 842, and the second lithiated Nafion sublayer 836 may be disposed between the lithiated Nafion membrane 820 and the aluminum current collector 832. The cathode 834 may be formed of lithium cobalt oxide and lithiated Nafion. For example, the cathode 834 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the lithium cobalt oxide. The second lithiated Nafion binder may be configured to provide second lithium ion pathways in the cathode 834, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 800. The second lithium ion pathways may extend from the lithiated solid polymer electrolyte Nafion membrane to the aluminum current collector. The second sublayer 842 may be in contact with the aluminum current collector 832 and the cathode 834. 
The second sublayer 842 may be formed of PVDF. In some embodiments, the second sublayer 842 may be formed of lithium cobalt oxide and PVDF. For example, the second sublayer 842 may not include Nafion and may therefore form an insulating layer between the aluminum current collector 832 and the cathode 834. In some embodiments, the second sublayer 842 may be laminated to the cathode 834, while in other embodiments, the second sublayer 842 may be coupled to the cathode 834 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The cathode 834 may be coupled to the second sublayer 842. In some embodiments, the cathode 834 may be laminated to the second sublayer 842, while in other embodiments, the cathode 834 may be coupled to the second sublayer 842 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The second lithiated Nafion sublayer 836 may be a layer of lithiated Nafion that is coupled to the cathode 834. The second lithiated Nafion sublayer 836 may increase conductivity between the cathode 834 and the lithiated Nafion membrane 820. In some embodiments, the second lithiated Nafion sublayer 836 may increase the number of lithium ion pathways in the cathode 834. The second lithiated Nafion sublayer 836 may be disposed between, and/or optionally in contact with both, the cathode 834 and the lithiated Nafion membrane 820. In some embodiments, the second lithiated Nafion sublayer 836 may be laminated to the cathode 834, while in other embodiments, the second lithiated Nafion sublayer 836 may be coupled to the cathode 834 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. A first thickness of the first lithiated Nafion sublayer 816 and/or the second lithiated Nafion sublayer 836 may be less than a second thickness of the anode 814 and/or the cathode 834. The first lithiated Nafion sublayer 816 and the second lithiated Nafion sublayer 836 may have equal thicknesses. The first lithiated Nafion sublayer 816 and the second lithiated Nafion sublayer 836 may have a first thickness that is less than or equal to a third thickness of the lithiated Nafion membrane 820. The illustrations in FIG. 8 or any other drawings may not be to scale. The first assembly 810 may be coupled to a first side of the lithiated Nafion membrane 820, and the second assembly 830 may be coupled to a second side of the lithiated Nafion membrane 820 to form a stack. For example, as illustrated in FIG. 8, the stack may include the lithiated Nafion membrane 820 disposed between the first lithiated Nafion sublayer 816 and the second lithiated Nafion sublayer 836, as well as between the anode 814 and the cathode 834. The stack may be formed using a pinch roller or other suitable technique. The stack may be placed into a pouch or other package for use in a battery.

FIG. 9 is a schematic illustration of a cross-sectional view of a first lithium-ion cell 900 and a second lithium-ion cell 980, each having a composite solid electrolyte membrane in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. Both the first lithium-ion cell 900 and the second lithium-ion cell 980 may have a solid polymer electrode separator that is composite.
For example, the composite solid polymer electrode separator may be formed of cheaper porous materials, such as plastic (e.g., polyethylene, PTFE, etc.), and pores in the plastic may be filled with Nafion electrolyte material. The porosity of the base membrane, which may be plastic, may be between about 50% and about 70%, and a thickness of the base membrane may be between about 6 micrometers and about 12 micrometers. Composite membranes may reduce cost, improve the mechanical strength of the separator assembly, and improve the puncture resistance of the separator. The second lithium-ion cell 980 includes a composite solid polymer electrode separator with inclusions that enhance membrane conductivity and increase resistance to dendritic puncture. The first lithium-ion cell 900 and the second lithium-ion cell 980 will be described in conjunction with each other. The first lithium-ion cell 900 and/or the second lithium-ion cell 980 may be part of a lithium-ion battery. For example, a lithium-ion battery may include one or more lithium-ion cells 900, 980. The first lithium-ion cell 900 and/or the second lithium-ion cell 980 may be disposed in a pouch, such as in an interior portion of a flexible or rigid battery pouch, or other suitable package. In some embodiments, the first lithium-ion cell 900 and/or the second lithium-ion cell 980 may be part of a lithium-ion pouch battery. In FIG. 9, the first lithium-ion cell 900 and/or the second lithium-ion cell 980 may include a first assembly 910, a composite lithiated Nafion membrane 970, and a second assembly 930. In both the first lithium-ion cell 900 and/or the second lithium-ion cell 980, the composite lithiated Nafion membrane 970 may be a composite lithiated solid polymer electrolyte Nafion membrane. The composite lithiated Nafion membrane 970 may be formed of a base membrane having a porosity of between about 50% and about 70%. The base membrane may be a plastic base membrane. The composite lithiated polymer electrolyte membrane 970 may include between about 30% and about 50% lithiated Nafion, and/or between about 50% and about 70% plastic. The first lithium-ion cell 900 may have the composite lithiated polymer electrolyte membrane 970. The second lithium-ion cell 980 may have a composite lithiated polymer electrolyte membrane 970 that also includes one or more lithium-based or zeolite-based inclusions 990. The inclusions 990 may be formed with any suitable high surface area material, such as zeolite, and/or suitable lithium-containing materials. The first assembly 910 may be coupled to a first side of the composite lithiated Nafion membrane 970, and the second assembly 930 may be coupled to a second side, or opposite side, of the composite lithiated Nafion membrane 970. The first assembly 910 may include more than one component and may be formed or assembled prior to coupling with the composite lithiated Nafion membrane 970. The second assembly 930 may similarly include more than one component and may be formed or assembled prior to coupling with the composite lithiated Nafion membrane 970. In other instances, one or more, or all, components of the lithium-ion cell 900 may be formed or otherwise assembled at the same time. The first assembly 910 may include a copper current collector 912, an optional first sublayer 940, an anode 914, and an optional first lithiated Nafion sublayer 916. The copper current collector 912 may be a negative current collector and/or may form a negative terminal of the battery.
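The composition ranges above can be related to the base-membrane porosity with a simple volume balance. The sketch below is an illustrative assumption, not a formula from the patent: it treats the separator as a plastic skeleton whose pore volume is filled, possibly only partially, with lithiated Nafion.

def composite_fractions(porosity: float, pore_fill: float) -> dict:
    """Volume fractions of a composite separator: a porous plastic base
    whose pores are filled to the given fraction with lithiated Nafion.
    Both arguments are fractions in [0, 1]."""
    return {
        "plastic": 1.0 - porosity,
        "nafion": porosity * pore_fill,
        "void": porosity * (1.0 - pore_fill),
    }

# Assumed values: a 50%-porous base filled to about 80% yields roughly
# 40% lithiated Nafion and 50% plastic by volume, consistent with the
# ranges described above.
print(composite_fractions(0.50, 0.80))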
The copper current collector 912 may be disposed adjacent to the first sublayer 940. The first sublayer 940 may be positioned between the copper current collector 912 and the anode 914. The copper current collector 912 may be in contact with the first sublayer 940. The copper current collector 912 may form a first end of the lithium-ion cell 900. The copper current collector 912 may optionally include one or more surface features 950 that cause a first surface of the copper current collector 912 that is in contact with the first sublayer 940 to be a roughened surface. The roughened surface may have an increased surface area relative to other surfaces. Roughened surfaces may be matted and/or corrugated surfaces. The anode 914 may be disposed adjacent to the first sublayer 940. The anode 914, the first sublayer 940, and the first lithiated Nafion sublayer 916 may be disposed between the composite lithiated Nafion membrane 970 and the copper current collector 912. The first sublayer 940 may be in contact with the copper current collector 912 and the anode 914. The first sublayer 940 may be formed of PVDF. In some embodiments, the first sublayer 940 may be formed of graphite and PVDF. For example, the first sublayer 940 may not include Nafion and may therefore form an insulating layer between the copper current collector 912 and the anode 914. In some embodiments, the first sublayer 940 may be laminated to the anode 914, while in other embodiments, the first sublayer 940 may be coupled to the anode 914 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The anode 914 may be formed of graphite and lithiated Nafion. For example, the anode 914 may include graphite that is dispersed in a first lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the graphite. The first lithiated Nafion binder may be configured to provide first lithium ion pathways in the anode 914, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 900. The first lithiated Nafion sublayer 916 may be a layer of lithiated Nafion that is coupled to the anode 914. The first lithiated Nafion sublayer 916 may increase conductivity between the anode 914 and the composite lithiated Nafion membrane 970. In some embodiments, the first lithiated Nafion sublayer 916 may increase the number of lithium ion pathways in the anode 914. The first lithiated Nafion sublayer 916 may be disposed between, and/or optionally in contact with both, the anode 914 and the composite lithiated Nafion membrane 970. In some embodiments, the first lithiated Nafion sublayer 916 may be laminated to the anode 914, while in other embodiments, the first lithiated Nafion sublayer 916 may be coupled to the anode 914 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The composite lithiated Nafion membrane 970 may be configured to conduct lithium ions during charging and discharging of the lithium-ion cell 900. First lithium ion pathways in the anode 914 may extend from the composite lithiated Nafion membrane 970 and/or the first lithiated Nafion sublayer 916 to the copper current collector 912. Second lithium ion pathways in the cathode 934 may extend from the composite lithiated Nafion membrane 970 and/or a second lithiated Nafion sublayer 936 to the aluminum current collector 932.
The second assembly 930 may include an aluminum current collector 932, an optional second sublayer 942, a cathode 934, and an optional second lithiated Nafion sublayer 936. The aluminum current collector 932 may be a positive current collector and/or may form a positive terminal of the battery. The aluminum current collector 932 may be disposed adjacent to the second sublayer 942. In some instances, the aluminum current collector 932 may be in contact with the second sublayer 942. The aluminum current collector 932 may form a second end of the lithium-ion cell 900. The aluminum current collector 932 may optionally include one or more surface features 960 that cause a first surface of the aluminum current collector 932 that is in contact with the second sublayer 942 to be a roughened surface. The roughened surface may have an increased surface area relative to other surfaces. Roughened surfaces may be matted and/or corrugated surfaces. The cathode 934 may be disposed adjacent to the second sublayer 942, such that the second sublayer 942 is between the cathode 934 and the aluminum current collector 932. The cathode 934 may be disposed between the composite lithiated Nafion membrane 970 and the aluminum current collector 932. For example, the cathode 934, the second sublayer 942, and the second lithiated Nafion sublayer 936 may be disposed between the composite lithiated Nafion membrane 970 and the aluminum current collector 932. The cathode 934 may be formed of lithium cobalt oxide and lithiated Nafion. For example, the cathode 934 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the lithium cobalt oxide. The second lithiated Nafion binder may be configured to provide second lithium ion pathways in the cathode 934, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 900. The second lithium ion pathways may extend from the lithiated solid polymer electrolyte Nafion membrane to the aluminum current collector. The second sublayer 942 may be in contact with the aluminum current collector 932 and the cathode 934. The second sublayer 942 may be formed of PVDF. In some embodiments, the second sublayer 942 may be formed of lithium cobalt oxide and PVDF. For example, the second sublayer 942 may not include Nafion and may therefore form an insulating layer between the aluminum current collector 932 and the cathode 934. In some embodiments, the second sublayer 942 may be laminated to the cathode 934, while in other embodiments, the second sublayer 942 may be coupled to the cathode 934 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The cathode 934 may be coupled to the second sublayer 942. In some embodiments, the cathode 934 may be laminated to the second sublayer 942, while in other embodiments, the cathode 934 may be coupled to the second sublayer 942 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The second lithiated Nafion sublayer 936 may be a layer of lithiated Nafion that is coupled to the cathode 934. The second lithiated Nafion sublayer 936 may increase conductivity between the cathode 934 and the composite lithiated Nafion membrane 970. In some embodiments, the second lithiated Nafion sublayer 936 may increase the number of lithium ion pathways in the cathode 934.
The second lithiated Nafion sublayer 936 may be disposed between, and/or optionally in contact with both, the cathode 934 and the composite lithiated Nafion membrane 970. In some embodiments, the second lithiated Nafion sublayer 936 may be laminated to the cathode 934, while in other embodiments, the second lithiated Nafion sublayer 936 may be coupled to the cathode 934 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. A first thickness of the first lithiated Nafion sublayer 916 and/or the second lithiated Nafion sublayer 936 may be less than a second thickness of the anode 914 and/or the cathode 934. The first lithiated Nafion sublayer 916 and the second lithiated Nafion sublayer 936 may have equal thicknesses. The first lithiated Nafion sublayer 916 and the second lithiated Nafion sublayer 936 may have a first thickness that is less than or equal to a third thickness of the composite lithiated Nafion membrane 970. The illustrations in FIG. 9 or any other drawings may not be to scale. The first assembly 910 may be coupled to a first side of the composite lithiated Nafion membrane 970, and the second assembly 930 may be coupled to a second side of the composite lithiated Nafion membrane 970 to form a stack. For example, as illustrated in FIG. 9, the stack may include the composite lithiated Nafion membrane 970 disposed between the first lithiated Nafion sublayer 916 and the second lithiated Nafion sublayer 936, as well as between the anode 914 and the cathode 934. The stack may be formed using a pinch roller or other suitable technique. The stack may be placed into a pouch or other package for use in a battery.

FIG. 10 is a schematic illustration of a cross-sectional view of a lithium-ion cell 1000 with a solid electrolyte membrane and a plated lithium anode current collector in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. In FIG. 10, the lithium-ion cell 1000 may include a lithium plated anode current collector that is coupled to a Nafion sublayer, which may then be coupled to a solid polymer electrode separator. The lithium-ion cell 1000 may have a solid polymer electrode separator that may optionally be a composite separator. For example, the composite solid polymer electrode separator may be formed of cheaper porous materials, such as plastic (e.g., polyethylene, PTFE, etc.), and pores in the plastic may be filled with Nafion electrolyte material. The porosity of the base membrane, which may be plastic, may be between about 50% and about 70%, and a thickness of the base membrane may be between about 6 micrometers and about 12 micrometers. Composite membranes may reduce cost, improve the mechanical strength of the separator assembly, and improve the puncture resistance of the separator. The composite solid polymer electrode separator may optionally include inclusions that enhance membrane conductivity and increase resistance to dendritic puncture. Composite membranes may be thinner than non-composite membranes, while maintaining or improving mechanical strength. The lithium-ion cell 1000 may be part of a lithium-ion battery. For example, a lithium-ion battery may include one or more lithium-ion cells. The lithium-ion cell 1000 may be disposed in a pouch, such as in an interior portion of a flexible or rigid battery pouch, or other suitable package.
In some embodiments, the lithium-ion cell 1000 may be part of a lithium-ion pouch battery. In FIG. 10, the lithium-ion cell 1000 may include a first assembly 1010, a lithiated Nafion membrane 1070, which may be composite or non-composite, and a second assembly 1030. The composite lithiated Nafion membrane 1070 may be a composite lithiated solid polymer electrolyte Nafion membrane. The composite lithiated Nafion membrane 1070 may be formed of a base membrane having a porosity of between about 50% and about 70%. The base membrane may be a plastic base membrane. The composite lithiated polymer electrolyte membrane 1070 may include between about 30% and about 50% lithiated Nafion, and/or between about 50% and about 70% plastic. The lithium-ion cell 1000 may have a composite lithiated polymer electrolyte membrane 1070 that also optionally includes one or more lithium-based or zeolite-based inclusions. The inclusions may be formed with any suitable high surface area material, such as zeolite, and/or suitable lithium-containing materials. The first assembly 1010 may be coupled to a first side of the composite lithiated Nafion membrane 1070, and the second assembly 1030 may be coupled to a second side, or opposite side, of the composite lithiated Nafion membrane 1070. The first assembly 1010 may include more than one component and may be formed or assembled prior to coupling with the composite lithiated Nafion membrane 1070. The second assembly 1030 may similarly include more than one component and may be formed or assembled prior to coupling with the composite lithiated Nafion membrane 1070. In other instances, one or more, or all, components of the lithium-ion cell 1000 may be formed or otherwise assembled at the same time. The first assembly 1010 may include a copper current collector 1012, an optional first sublayer 1040, an anode 1080, and an optional first lithiated Nafion sublayer 1016. The copper current collector 1012 may be a negative current collector and/or may form a negative terminal of the battery. The copper current collector 1012 may be disposed adjacent to the first sublayer 1040. The first sublayer 1040 may be positioned between the copper current collector 1012 and the anode 1080. The copper current collector 1012 may be in contact with the first sublayer 1040. The copper current collector 1012 may form a first end of the lithium-ion cell 1000. The copper current collector 1012 may optionally include one or more surface features 1050 that cause a first surface of the copper current collector 1012 that is in contact with the first sublayer 1040 to be a roughened surface. The roughened surface may have an increased surface area relative to other surfaces. Roughened surfaces may be matted and/or corrugated surfaces. The anode 1080 may be disposed adjacent to the first sublayer 1040. The anode 1080, the first sublayer 1040, and the first lithiated Nafion sublayer 1016 may be disposed between the composite lithiated Nafion membrane 1070 and the copper current collector 1012. The anode 1080 may include a plated lithium metal surface 1081 in contact with the first lithiated Nafion sublayer 1016. In some embodiments, the anode 1080 may include a plated lithium metal surface that is in contact with the first sublayer 1040. The anode 1080 and the first lithiated Nafion sublayer 1016 may be coupled prior to coupling the assembly to the composite lithiated Nafion membrane 1070.
The plated lithium anode 1080 may increase conductivity by about 500-600%. The first sublayer 1040 may be in contact with the copper current collector 1012 and the anode 1080. The first sublayer 1040 may be formed of PVDF. In some embodiments, the first sublayer 1040 may be formed of graphite and PVDF. For example, the first sublayer 1040 may not include Nafion and may therefore form an insulating layer between the copper current collector 1012 and the anode 1080. In some embodiments, the first sublayer 1040 may be laminated to the anode 1080, while in other embodiments, the first sublayer 1040 may be coupled to the anode 1080 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The anode 1080 may be formed of graphite and lithiated Nafion. For example, the anode 1080 may include graphite that is dispersed in a first lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the graphite. The first lithiated Nafion binder may be configured to provide first lithium ion pathways in the anode 1080, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 1000. The first lithiated Nafion sublayer 1016 may be a layer of lithiated Nafion that is coupled to the anode 1080. The first lithiated Nafion sublayer 1016 may increase conductivity between the anode 1080 and the composite lithiated Nafion membrane 1070. In some embodiments, the first lithiated Nafion sublayer 1016 may increase the number of lithium ion pathways in the anode 1080. The first lithiated Nafion sublayer 1016 may be disposed between, and/or optionally in contact with both, the anode 1080 and the composite lithiated Nafion membrane 1070. In some embodiments, the first lithiated Nafion sublayer 1016 may be laminated to the anode 1080, while in other embodiments, the first lithiated Nafion sublayer 1016 may be coupled to the anode 1080 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The composite lithiated Nafion membrane 1070 may be configured to conduct lithium ions during charging and discharging of the lithium-ion cell 1000. First lithium ion pathways in the anode 1080 may extend from the composite lithiated Nafion membrane 1070 and/or the first lithiated Nafion sublayer 1016 to the copper current collector 1012. Second lithium ion pathways in the cathode 1034 may extend from the composite lithiated Nafion membrane 1070 and/or a second lithiated Nafion sublayer 1036 to the aluminum current collector 1032. The second assembly 1030 may include an aluminum current collector 1032, an optional second sublayer 1042, a cathode 1034, and an optional second lithiated Nafion sublayer 1036. The aluminum current collector 1032 may be a positive current collector and/or may form a positive terminal of the battery. The aluminum current collector 1032 may be disposed adjacent to the second sublayer 1042. In some instances, the aluminum current collector 1032 may be in contact with the second sublayer 1042. The aluminum current collector 1032 may form a second end of the lithium-ion cell 1000. The aluminum current collector 1032 may optionally include one or more surface features 1060 that cause a first surface of the aluminum current collector 1032 that is in contact with the second sublayer 1042 to be a roughened surface. The roughened surface may have an increased surface area relative to other surfaces. 
Roughened surfaces may be matted and/or corrugated surfaces. The cathode 1034 may be disposed adjacent to the second sublayer 1042, such that the second sublayer 1042 is between the cathode 1034 and the aluminum current collector 1032. The cathode 1034 may be disposed between the composite lithiated Nafion membrane 1070 and the aluminum current collector 1032. For example, the cathode 1034, the second sublayer 1042, and the second lithiated Nafion sublayer 1036 may be disposed between the composite lithiated Nafion membrane 1070 and the aluminum current collector 1032. The cathode 1034 may be formed of lithium cobalt oxide and lithiated Nafion. For example, the cathode 1034 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the lithium cobalt oxide. The second lithiated Nafion binder may be configured to provide second lithium ion pathways in the cathode 1034, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 1000. The second lithium ion pathways may extend from the lithiated solid polymer electrolyte Nafion membrane to the aluminum current collector. The second sublayer 1042 may be in contact with the aluminum current collector 1032 and the cathode 1034. The second sublayer 1042 may be formed of PVDF. In some embodiments, the second sublayer 1042 may be formed of lithium cobalt oxide and PVDF. For example, the second sublayer 1042 may not include Nafion and may therefore form an insulating layer between the aluminum current collector 1032 and the cathode 1034. In some embodiments, the second sublayer 1042 may be laminated to the cathode 1034, while in other embodiments, the second sublayer 1042 may be coupled to the cathode 1034 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The cathode 1034 may be coupled to the second sublayer 1042. In some embodiments, the cathode 1034 may be laminated to the second sublayer 1042, while in other embodiments, the cathode 1034 may be coupled to the second sublayer 1042 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The second lithiated Nafion sublayer 1036 may be a layer of lithiated Nafion that is coupled to the cathode 1034. The second lithiated Nafion sublayer 1036 may increase conductivity between the cathode 1034 and the composite lithiated Nafion membrane 1070. In some embodiments, the second lithiated Nafion sublayer 1036 may increase the number of lithium ion pathways in the cathode 1034. The second lithiated Nafion sublayer 1036 may be disposed between, and/or optionally in contact with both, the cathode 1034 and the composite lithiated Nafion membrane 1070. In some embodiments, the second lithiated Nafion sublayer 1036 may be laminated to the cathode 1034, while in other embodiments, the second lithiated Nafion sublayer 1036 may be coupled to the cathode 1034 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. A first thickness of the first lithiated Nafion sublayer 1016 and/or the second lithiated Nafion sublayer 1036 may be less than a second thickness of the anode 1080 and/or the cathode 1034. The first lithiated Nafion sublayer 1016 and the second lithiated Nafion sublayer 1036 may have equal thicknesses.
The first lithiated Nafion sublayer 1016 and the second lithiated Nafion sublayer 1036 may have a first thickness that is less than or equal to a third thickness of the composite lithiated Nafion membrane 1070. The illustrations in FIG. 10 or any other drawings may not be to scale. The first assembly 1010 may be coupled to a first side of the composite lithiated Nafion membrane 1070, and the second assembly 1030 may be coupled to a second side of the composite lithiated Nafion membrane 1070 to form a stack. For example, as illustrated in FIG. 10, the stack may include the composite lithiated Nafion membrane 1070 disposed between the first lithiated Nafion sublayer 1016 and the second lithiated Nafion sublayer 1036, as well as between the anode 1080 and the cathode 1034. The stack may be formed using a pinch roller or other suitable technique. The stack may be placed into a pouch or other package for use in a battery.

FIG. 11 is a schematic illustration of a cross-sectional view of a portion of a lithium-ion cell 1100 with detail views of lithium ion pathways in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. The lithium-ion cell 1100 may include electrode structures with enhanced high power and high charge/discharge rate performance. Electrodes may be split into two layers. The layer closest to the current collector may have no Nafion, and may include electrode material (e.g., the anode may have graphite, and the cathode may have lithium cobalt oxide) with about 5% to about 10% PVDF as binder. The Nafion-containing second layer may have a thickness of between about 1 micrometer and about 2 micrometers, and may have a composition of about 50% to about 90% Nafion by weight. Such an arrangement may assist with moving the ion transfer (Li+ to Li) location closer to the membrane and help with high charge transfer capability. In FIG. 11, an anode 1114 is depicted adjacent to, and in contact with, a first side of a lithiated Nafion membrane 1112. The anode 1114 may include graphite that is dispersed in a first lithiated Nafion binder. In a detail view of an interface between the anode 1114 and the lithiated Nafion membrane 1112, graphite particles 1116 are illustrated as dispersed in a first lithiated Nafion binder 1140. The graphite particles 1116 may be dispersed and held together or bound with the first lithiated Nafion binder 1140. The anode 1114 may have a weight ratio of between about 50% and about 67% graphite, and between about 33% and about 50% lithiated Nafion binder. The lithiated Nafion binder 1140 may provide first lithium ion pathways in or through the anode 1114 (e.g., between the graphite particles 1116, etc.). However, the first lithium ion pathways may be filled with PVDF material 1130, thereby blocking the flow of lithium ions to the anode current collector during charging and discharging of the lithium-ion cell 1100. Because the lithiated Nafion membrane 1112 also includes Nafion, lithium ions may pass through the lithium-ion cell 1100, and more particularly between the lithiated Nafion membrane 1112 and the anode 1114, without the use of liquid electrolyte. The cathode may have a similar configuration with an ultrathin cathode (high Nafion and no porosity).

FIG. 12 is a schematic illustration of a cross-sectional view of a lithium-ion cell 1200 with a solid electrolyte membrane and mesh current collectors in accordance with one or more embodiments of the disclosure.
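Before turning to FIG. 12, the two-layer electrode composition described for FIG. 11 can be made concrete. The Python sketch below computes the binder mass needed for a target weight fraction; the batch sizes are assumptions chosen for illustration and do not come from the patent.

def binder_mass(active_mass_g: float, binder_weight_fraction: float) -> float:
    """Mass of binder needed so that the binder makes up the given weight
    fraction of the finished layer (active material plus binder)."""
    return active_mass_g * binder_weight_fraction / (1.0 - binder_weight_fraction)

# Collector-side layer: graphite with ~7.5% PVDF binder (middle of the
# 5% to 10% range above) needs ~8.1 g of PVDF per 100 g of graphite.
print(binder_mass(100.0, 0.075))

# Membrane-side layer: at ~70% Nafion (middle of the 50% to 90% range),
# ~233 g of lithiated Nafion per 100 g of graphite.
print(binder_mass(100.0, 0.70))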
Other embodiments may include additional or fewer components. The lithium-ion cell 1200 may include anode and/or cathode electrodes that are cast onto mesh current collectors. For example, the anode may be cast onto a copper mesh, and the cathode may be cast onto an aluminum mesh. Either the copper mesh or the aluminum mesh may range from about 30% open area (e.g., a 100×100 mesh count, etc.) to about 3% open area (e.g., a 500×500 mesh count, etc.). Other mesh counts may be used, such as 200×200, 400×400, and so forth. After the electrodes are cast onto the mesh, the anode and cathode (e.g., the respective electrodes and mesh) may be bonded to a lithiated Nafion membrane 1220. The resulting structure may be referred to as a dry membrane electrode assembly. The dry membrane electrode assembly can then be inserted into a coin cell assembly and filled with a predetermined volume of electrolyte (e.g., ethylene carbonate and dimethyl carbonate with or without lithium-containing salt) to create a wet membrane electrode assembly. The cell package may be crimped to create a coin cell with a wet membrane electrode assembly. The mesh may allow for the Nafion to be made wet with a solvent. The lithium-ion cell 1200 may have a solid polymer electrolyte that increases battery robustness. The lithium-ion cell 1200 may optionally include a set of sublayers between the electrodes and the current collectors. The sublayers may be formed of PVDF and the respective electrode materials. For example, for the anode, the adjacent sublayer may be formed of graphite and PVDF, and for the cathode, the adjacent sublayer may be formed of lithium cobalt oxide and PVDF. The sublayers may ensure that Nafion in the electrode layers does not interact with the current collector materials. Because Nafion is a superacid, it may corrode the copper and aluminum current collectors. The resulting ionic contaminants can ion-exchange with the Nafion and cause a reduction in conductivity, as well as a reduction in capacity. The addition of sublayers may therefore reduce or eliminate such contact between the Nafion and the current collectors. As a result, only electron flow occurs between the sublayers and the current collectors, and all ionic interactions occur in the Nafion sublayer and the electrodes. The lithium-ion cell 1200 may be part of a lithium-ion battery. For example, a lithium-ion battery may include one or more lithium-ion cells 1200. The lithium-ion cell 1200 may be disposed in a pouch, such as in an interior portion of a flexible or rigid battery pouch, or other suitable package. In some embodiments, the lithium-ion cell 1200 may be part of a lithium-ion pouch battery. In FIG. 12, the lithium-ion cell 1200 may include a first assembly 1210, a lithiated Nafion membrane 1220, and a second assembly 1230. The lithiated Nafion membrane 1220 may be a lithiated solid polymer electrolyte Nafion membrane. The first assembly 1210 may be coupled to a first side of the lithiated Nafion membrane 1220, and the second assembly 1230 may be coupled to a second side, or opposite side, of the lithiated Nafion membrane 1220. The first assembly 1210 may include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 1220. The second assembly 1230 may similarly include more than one component and may be formed or assembled prior to coupling with the lithiated Nafion membrane 1220. In other instances, one or more, or all, components of the lithium-ion cell 1200 may be formed or otherwise assembled at the same time.
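The open-area percentages quoted above for woven mesh follow from the mesh count and the wire diameter. For a square-weave mesh with n wires per inch and wire diameter d inches, the aperture is (1/n - d) on a pitch of 1/n, so the open fraction is the square of their ratio. The wire diameters in the sketch below are illustrative assumptions chosen to land near the quoted open areas; the patent does not specify them.

def open_area_fraction(mesh_count_per_inch: int, wire_diameter_in: float) -> float:
    """Open-area fraction of a square-weave mesh: aperture width over
    pitch, squared."""
    pitch = 1.0 / mesh_count_per_inch
    aperture = pitch - wire_diameter_in
    return 0.0 if aperture <= 0 else (aperture / pitch) ** 2

# Assumed wire diameters (inches):
print(open_area_fraction(100, 0.0045))   # ~0.30 open area for a 100x100 mesh
print(open_area_fraction(500, 0.00165))  # ~0.03 open area for a 500x500 mesh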
The first assembly 1210 may include a copper current collector 1212, a first sublayer 1240, an anode 1214, and a first lithiated Nafion sublayer 1216. The copper current collector 1212 may be a negative current collector and/or may form a negative terminal of the battery. The copper current collector 1212 may be disposed adjacent to the first sublayer 1240. The first sublayer 1240 may be positioned between the copper current collector 1212 and the anode 1214. The copper current collector 1212 may be in contact with the first sublayer 1240. The copper current collector 1212 may form a first end of the lithium-ion cell 1200. The copper current collector 1212 may be an ultrafine mesh current collector, and may have an open area of between about 3% and about 30%. The electrodes may be cast onto the mesh current collector 1212 to bond to the Nafion membrane to form the dry membrane electrode assembly. The anode 1214 may be disposed adjacent to the first sublayer 1240. The anode 1214, the first sublayer 1240, and the first lithiated Nafion sublayer 1216 may be disposed between the lithiated Nafion membrane 1220 and the copper current collector 1212. The first sublayer 1240 may be in contact with the copper current collector 1212 and the anode 1214. The first sublayer 1240 may be formed of PVDF. In some embodiments, the first sublayer 1240 may be formed of graphite and PVDF. For example, the first sublayer 1240 may not include Nafion and may therefore form an insulating layer between the copper current collector 1212 and the anode 1214. In some embodiments, the first sublayer 1240 may be laminated to the anode 1214, while in other embodiments, the first sublayer 1240 may be coupled to the anode 1214 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The anode 1214 may be formed of graphite and lithiated Nafion. For example, the anode 1214 may include graphite that is dispersed in a first lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the graphite. The first lithiated Nafion binder may be configured to provide first lithium ion pathways in the anode 1214, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 1200. The first lithiated Nafion sublayer 1216 may be a layer of lithiated Nafion that is coupled to the anode 1214. The first lithiated Nafion sublayer 1216 may increase conductivity between the anode 1214 and the lithiated Nafion membrane 1220. In some embodiments, the first lithiated Nafion sublayer 1216 may increase the number of lithium ion pathways in the anode 1214. The first lithiated Nafion sublayer 1216 may be disposed between, and/or optionally in contact with both, the anode 1214 and the lithiated Nafion membrane 1220. In some embodiments, the first lithiated Nafion sublayer 1216 may be laminated to the anode 1214, while in other embodiments, the first lithiated Nafion sublayer 1216 may be coupled to the anode 1214 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The lithiated Nafion membrane 1220 may be configured to conduct lithium ions during charging and discharging of the lithium-ion cell 1200. First lithium ion pathways in the anode 1214 may extend from the lithiated Nafion membrane 1220 and/or the first lithiated Nafion sublayer 1216 to the copper current collector 1212.
Second lithium ion pathways in the cathode 1234 may extend from the lithiated Nafion membrane 1220 and/or a second lithiated Nafion sublayer 1236 to the aluminum current collector 1232. The second assembly 1230 may include an aluminum current collector 1232, a second sublayer 1242, a cathode 1234, and a second lithiated Nafion sublayer 1236. The aluminum current collector 1232 may be a positive current collector and/or may form a positive terminal of the battery. The aluminum current collector 1232 may be disposed adjacent to the second sublayer 1242. In some instances, the aluminum current collector 1232 may be in contact with the second sublayer 1242. The aluminum current collector 1232 may form a second end of the lithium-ion cell 1200. The aluminum current collector 1232 may be an ultrafine mesh current collector, and may have an open area of between about 3% and about 30%. The electrodes may be cast onto the mesh current collector 1232 to bond to the Nafion membrane to form the dry membrane electrode assembly. The cathode 1234 may be disposed adjacent to the second sublayer 1242, such that the second sublayer 1242 is between the cathode 1234 and the aluminum current collector 1232. The cathode 1234 may be disposed between the lithiated Nafion membrane 1220 and the aluminum current collector 1232. For example, the cathode 1234, the second sublayer 1242, and the second lithiated Nafion sublayer 1236 may be disposed between the lithiated Nafion membrane 1220 and the aluminum current collector 1232. The cathode 1234 may be formed of lithium cobalt oxide and lithiated Nafion. For example, the cathode 1234 may include lithium cobalt oxide that is dispersed in a second lithiated Nafion binder, as illustrated in FIG. 3. The lithiated Nafion binder may act as an adhesive or may otherwise be used to form a solid or gel electrode with the lithium cobalt oxide. The second lithiated Nafion binder may be configured to provide second lithium ion pathways in the cathode 1234, as illustrated in FIG. 3, during charging and discharging of the lithium-ion cell 1200. The second lithium ion pathways may extend from the lithiated solid polymer electrolyte Nafion membrane to the aluminum current collector. The second sublayer 1242 may be in contact with the aluminum current collector 1232 and the cathode 1234. The second sublayer 1242 may be formed of PVDF. In some embodiments, the second sublayer 1242 may be formed of lithium cobalt oxide and PVDF. For example, the second sublayer 1242 may not include Nafion and may therefore form an insulating layer between the aluminum current collector 1232 and the cathode 1234. In some embodiments, the second sublayer 1242 may be laminated to the cathode 1234, while in other embodiments, the second sublayer 1242 may be coupled to the cathode 1234 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The cathode 1234 may be coupled to the second sublayer 1242. In some embodiments, the cathode 1234 may be laminated to the second sublayer 1242, while in other embodiments, the cathode 1234 may be coupled to the second sublayer 1242 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. The second lithiated Nafion sublayer 1236 may be a layer of lithiated Nafion that is coupled to the cathode 1234. The second lithiated Nafion sublayer 1236 may increase conductivity between the cathode 1234 and the lithiated Nafion membrane 1220.
In some embodiments, the second lithiated Nafion sublayer 1236 may increase the number of lithium ion pathways in the cathode 1234. The second lithiated Nafion sublayer 1236 may be disposed between, and/or optionally in contact with both, the cathode 1234 and the lithiated Nafion membrane 1220. In some embodiments, the second lithiated Nafion sublayer 1236 may be laminated to the cathode 1234, while in other embodiments, the second lithiated Nafion sublayer 1236 may be coupled to the cathode 1234 using a heat and compress technique, a pressure-distribution sheet technique, or a different technique. A first thickness of the first lithiated Nafion sublayer 1216 and/or the second lithiated Nafion sublayer 1236 may be less than a second thickness of the anode 1214 and/or the cathode 1234. The first lithiated Nafion sublayer 1216 and the second lithiated Nafion sublayer 1236 may have equal thicknesses. The first lithiated Nafion sublayer 1216 and the second lithiated Nafion sublayer 1236 may have a first thickness that is less than or equal to a third thickness of the lithiated Nafion membrane 1220. The illustrations in FIG. 12 or any other drawings may not be to scale. The first assembly 1210 may be coupled to a first side of the lithiated Nafion membrane 1220, and the second assembly 1230 may be coupled to a second side of the lithiated Nafion membrane 1220 to form a stack. For example, as illustrated in FIG. 12, the stack may include the lithiated Nafion membrane 1220 disposed between the first lithiated Nafion sublayer 1216 and the second lithiated Nafion sublayer 1236, as well as between the anode 1214 and the cathode 1234. The stack may be formed using a pinch roller or other suitable technique. The stack may be placed into a pouch or other package for use in a battery. The cell 1200 may be placed in a stainless steel coin crimp case. Electrolyte may be imbibed into the electrodes and membrane, and the cell may therefore form a wet membrane electrode assembly coin cell.

FIG. 13 is a schematic illustration of a cross-sectional view of a lithium-ion cell 1300 with a solid electrolyte membrane and an edge protection frame in accordance with one or more embodiments of the disclosure. Other embodiments may include additional or fewer components. The lithium-ion cell 1300 may be the same cell as the lithium-ion cell 1200 of FIG. 12. However, the lithium-ion cell 1300 may additionally include a first edge protection frame 1370 and a second edge protection frame 1380. The first edge protection frame 1370 may be a first plastic edge protection frame and may be disposed between a lithiated polymer electrolyte membrane 1220 and an anode 1214. The second edge protection frame 1380 may be a second plastic edge protection frame and may be disposed between the lithiated polymer electrolyte membrane 1220 and the cathode 1234. The first edge protection frame 1370 and the second edge protection frame 1380 may protect the membrane from rupture by the sharp die-cut edges or burrs of the copper current collector 1212 or the aluminum current collector 1232. The first edge protection frame 1370 and the second edge protection frame 1380 may be used with a mesh version of the current collector or with flat foil. The first edge protection frame 1370 and the second edge protection frame 1380 may each be between about 5 micrometers and about 10 micrometers thick, and may be formed of a plastic material, such as PET, PC, expanded Teflon, or another material.
The first edge protection frame 1370 and the second edge protection frame 1380 may be introduced at the dry membrane electrode assembly formation step of the forming process, where the respective frames are placed between the separator membrane 1220 and the electrodes as illustrated. Once the lamination is complete, the assembly may be placed into the coin cell cases, wetted with electrolyte, and crimped to form another embodiment of a wet membrane electrode assembly based coin cell. One or more operations of the methods, process flows, or use cases of FIGS. 1-13 may have been described above as being performed by a user device, or more specifically, by one or more program module(s), applications, or the like executing on a device. It should be appreciated, however, that any of the operations of the methods, process flows, or use cases of FIGS. 1-13 may be performed, at least in part, in a distributed manner by one or more other devices, or more specifically, by one or more program module(s), applications, or the like executing on such devices. In addition, it should be appreciated that processing performed in response to the execution of computer-executable instructions provided as part of an application, program module, or the like may be interchangeably described herein as being performed by the application or the program module itself or by a device on which the application, program module, or the like is executing. While the operations of the methods, process flows, or use cases of FIGS. 1-13 may be described in the context of the illustrative devices, it should be appreciated that such operations may be implemented in connection with numerous other device configurations. The operations described and depicted in the illustrative methods, process flows, or use cases of FIGS. 1-13 may be carried out or performed in any suitable order, such as the depicted orders, as desired in various example embodiments of the disclosure. Additionally, in certain example embodiments, at least a portion of the operations may be carried out in parallel. Furthermore, in certain example embodiments, fewer, more, or different operations than those depicted in FIGS. 1-13 may be performed. Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. Certain aspects of the disclosure are described above with reference to block and flow diagrams of systems, methods, apparatuses, and/or computer program products according to example embodiments. It will be understood that one or more blocks of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and the flow diagrams, respectively, may be implemented by the execution of computer-executable program instructions.
Likewise, some blocks of the block diagrams and flow diagrams may not necessarily need to be performed in the order presented, or may not necessarily need to be performed at all, according to some embodiments. Further, additional components and/or operations beyond those depicted in blocks of the block and/or flow diagrams may be present in certain embodiments. Accordingly, blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, may be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions. Example Device Architecture FIG. 14 is a schematic block diagram of one or more illustrative electronic device(s) 1400 in accordance with one or more example embodiments of the disclosure. The electronic device(s) 1400 may include any suitable battery-operated computing device including, but not limited to, a server system, a voice interaction device, a mobile device such as a smartphone, a tablet, an e-reader, a wearable device, or the like; a desktop computer; a laptop computer; a content streaming device; or the like. The electronic device(s) 1400 may correspond to an illustrative device configuration for the device(s) of FIGS. 1-13. The electronic device(s) 1400 may be configured to communicate with one or more servers, user devices, or the like. The electronic device(s) 1400 may be configured to receive one or more batteries, such as a lithium-ion battery. The electronic device(s) 1400 may be configured to communicate via one or more networks. Such network(s) may include, but are not limited to, any one or more different types of communications networks such as, for example, cable networks, public networks (e.g., the Internet), private networks (e.g., frame-relay networks), wireless networks, cellular networks, telephone networks (e.g., a public switched telephone network), or any other suitable private or public packet-switched or circuit-switched networks. Further, such network(s) may have any suitable communication range associated therewith and may include, for example, global networks (e.g., the Internet), metropolitan area networks (MANs), wide area networks (WANs), local area networks (LANs), or personal area networks (PANs). In addition, such network(s) may include communication links and associated networking devices (e.g., link-layer switches, routers, etc.) for transmitting network traffic over any suitable type of medium including, but not limited to, coaxial cable, twisted-pair wire (e.g., twisted-pair copper wire), optical fiber, a hybrid fiber-coaxial (HFC) medium, a microwave medium, a radio frequency communication medium, a satellite communication medium, or any combination thereof. 
In an illustrative configuration, the electronic device(s) 1400 may include one or more processors (processor(s)) 1402, one or more memory devices 1404 (also referred to herein as memory 1404), one or more input/output (I/O) interface(s) 1406, one or more network interface(s) 1408, one or more sensor(s) or sensor interface(s) 1410, one or more transceiver(s) 1412, one or more battery(ies) 1414, one or more optional microphone(s) 1416, and data storage 1420. The electronic device(s) 1400 may further include one or more bus(es) 1418 that functionally couple various components of the electronic device(s) 1400. The electronic device(s) 1400 may further include one or more antenna(e) 1430 that may include, without limitation, a cellular antenna for transmitting or receiving signals to/from a cellular network infrastructure, an antenna for transmitting or receiving Wi-Fi signals to/from an access point (AP), a Global Navigation Satellite System (GNSS) antenna for receiving GNSS signals from a GNSS satellite, a Bluetooth antenna for transmitting or receiving Bluetooth signals, a Near Field Communication (NFC) antenna for transmitting or receiving NFC signals, and so forth. These various components will be described in more detail hereinafter. The bus(es) 1418 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit the exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the electronic device(s) 1400. The bus(es) 1418 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The bus(es) 1418 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnect (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth. The memory 1404 of the electronic device(s) 1400 may include volatile memory (memory that maintains its state when supplied with power) such as random access memory (RAM) and/or non-volatile memory (memory that maintains its state even when not supplied with power) such as read-only memory (ROM), flash memory, ferroelectric RAM (FRAM), and so forth. Persistent data storage, as that term is used herein, may include non-volatile memory. In certain example embodiments, volatile memory may enable faster read/write access than non-volatile memory. However, in certain other example embodiments, certain types of non-volatile memory (e.g., FRAM) may enable faster read/write access than certain types of volatile memory. In various implementations, the memory 1404 may include multiple different types of memory such as various types of static random access memory (SRAM), various types of dynamic random access memory (DRAM), various types of unalterable ROM, and/or writeable variants of ROM such as electrically erasable programmable read-only memory (EEPROM), flash memory, and so forth. The memory 1404 may include main memory as well as various forms of cache memory such as instruction cache(s), data cache(s), translation lookaside buffer(s) (TLBs), and so forth. 
Further, cache memory such as a data cache may be a multi-level cache organized as a hierarchy of one or more cache levels (L1, L2, etc.). The data storage 1420 may include removable storage and/or non-removable storage including, but not limited to, magnetic storage, optical disk storage, and/or tape storage. The data storage 1420 may provide non-volatile storage of computer-executable instructions and other data. The memory 1404 and the data storage 1420, removable and/or non-removable, are examples of computer-readable storage media (CRSM) as that term is used herein. The data storage 1420 may store computer-executable code, instructions, or the like that may be loadable into the memory 1404 and executable by the processor(s) 1402 to cause the processor(s) 1402 to perform or initiate various operations. The data storage 1420 may additionally store data that may be copied to the memory 1404 for use by the processor(s) 1402 during the execution of the computer-executable instructions. Moreover, output data generated as a result of execution of the computer-executable instructions by the processor(s) 1402 may be stored initially in the memory 1404, and may ultimately be copied to the data storage 1420 for non-volatile storage. More specifically, the data storage 1420 may store one or more operating systems (O/S) 1422; one or more database management systems (DBMS) 1424; and one or more program module(s), applications, engines, computer-executable code, scripts, or the like. Some or all of these module(s) may be sub-module(s). Any of the components depicted as being stored in the data storage 1420 may include any combination of software, firmware, and/or hardware. The software and/or firmware may include computer-executable code, instructions, or the like that may be loaded into the memory 1404 for execution by one or more of the processor(s) 1402. Any of the components depicted as being stored in the data storage 1420 may support functionality described in reference to corresponding components named earlier in this disclosure. The data storage 1420 may further store various types of data utilized by the components of the electronic device(s) 1400. Any data stored in the data storage 1420 may be loaded into the memory 1404 for use by the processor(s) 1402 in executing computer-executable code. In addition, any data depicted as being stored in the data storage 1420 may potentially be stored in one or more datastore(s) and may be accessed via the DBMS 1424 and loaded in the memory 1404 for use by the processor(s) 1402 in executing computer-executable code. The datastore(s) may include, but are not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed datastores in which data is stored on more than one node of a computer network, peer-to-peer network datastores, or the like. The processor(s) 1402 may be configured to access the memory 1404 and execute the computer-executable instructions loaded therein. For example, the processor(s) 1402 may be configured to execute the computer-executable instructions of the various program module(s), applications, engines, or the like of the electronic device(s) 1400 to cause or facilitate various operations to be performed in accordance with one or more embodiments of the disclosure. The processor(s) 1402 may include any suitable processing unit capable of accepting data as input, processing the input data in accordance with stored computer-executable instructions, and generating output data. 
The processor(s) 1402 may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s) 1402 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The microarchitecture design of the processor(s) 1402 may be capable of supporting any of a variety of instruction sets. Referring now to other illustrative components depicted as being stored in the data storage 1420, the O/S 1422 may be loaded from the data storage 1420 into the memory 1404 and may provide an interface between other application software executing on the electronic device(s) 1400 and the hardware resources of the electronic device(s) 1400. More specifically, the O/S 1422 may include a set of computer-executable instructions for managing the hardware resources of the electronic device(s) 1400 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the O/S 1422 may control execution of the other program module(s). The O/S 1422 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system. The DBMS 1424 may be loaded into the memory 1404 and may support functionality for accessing, retrieving, storing, and/or manipulating data stored in the memory 1404 and/or data stored in the data storage 1420. The DBMS 1424 may use any of a variety of database models (e.g., relational model, object model, etc.) and may support any of a variety of query languages. The DBMS 1424 may access data represented in one or more data schemas and stored in any suitable data repository including, but not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed datastores in which data is stored on more than one node of a computer network, peer-to-peer network datastores, or the like. In those example embodiments in which the electronic device(s) 1400 is a mobile device, the DBMS 1424 may be any suitable lightweight DBMS optimized for performance on a mobile device. Referring now to other illustrative components of the electronic device(s) 1400, the input/output (I/O) interface(s) 1406 may facilitate the receipt of input information by the electronic device(s) 1400 from one or more I/O devices as well as the output of information from the electronic device(s) 1400 to the one or more I/O devices. The I/O devices may include any of a variety of components such as a display or display screen having a touch surface or touchscreen; an audio output device for producing sound, such as a speaker; an audio capture device, such as a microphone; an image and/or video capture device, such as a camera; a haptic unit; and so forth. Any of these components may be integrated into the electronic device(s) 1400 or may be separate. 
The I/O devices may further include, for example, any number of peripheral devices such as data storage devices, printing devices, and so forth. The I/O interface(s) 1406 may also include an interface for an external peripheral device connection such as universal serial bus (USB), FireWire, Thunderbolt, Ethernet port or other connection protocol that may connect to one or more networks. The I/O interface(s) 1406 may also include a connection to one or more of the antenna(e) 1430 to connect to one or more networks via a wireless local area network (WLAN) (such as Wi-Fi) radio, Bluetooth, ZigBee, and/or a wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, a ZigBee network, etc. The electronic device(s) 1400 may further include one or more network interface(s) 1408 via which the electronic device(s) 1400 may communicate with any of a variety of other systems, platforms, networks, devices, and so forth. The network interface(s) 1408 may enable communication, for example, with one or more wireless routers, one or more host servers, one or more web servers, and the like via one or more networks. The antenna(e) 1430 may include any suitable type of antenna depending, for example, on the communications protocols used to transmit or receive signals via the antenna(e) 1430. Non-limiting examples of suitable antennae may include directional antennae, non-directional antennae, dipole antennae, folded dipole antennae, patch antennae, multiple-input multiple-output (MIMO) antennae, or the like. The antenna(e) 1430 may be communicatively coupled to one or more transceivers 1412 or radio components to which or from which signals may be transmitted or received. As previously described, the antenna(e) 1430 may include a cellular antenna configured to transmit or receive signals in accordance with established standards and protocols, such as Global System for Mobile Communications (GSM), 3G standards (e.g., Universal Mobile Telecommunications System (UMTS), Wideband Code Division Multiple Access (W-CDMA), CDMA2000, etc.), 4G standards (e.g., Long-Term Evolution (LTE), WiMax, etc.), direct satellite communications, or the like. The antenna(e) 1430 may additionally, or alternatively, include a Wi-Fi antenna configured to transmit or receive signals in accordance with established standards and protocols, such as the IEEE 802.11 family of standards, including via 2.4 GHz channels (e.g., 802.11b, 802.11g, 802.11n), 5 GHz channels (e.g., 802.11n, 802.11ac), or 60 GHz channels (e.g., 802.11ad). In alternative example embodiments, the antenna(e) 1430 may be configured to transmit or receive radio frequency signals within any suitable frequency range forming part of the unlicensed portion of the radio spectrum. The antenna(e) 1430 may additionally, or alternatively, include a GNSS antenna configured to receive GNSS signals from three or more GNSS satellites carrying time-position information to triangulate a position therefrom. Such a GNSS antenna may be configured to receive GNSS signals from any current or planned GNSS such as, for example, the Global Positioning System (GPS), the GLONASS System, the Compass Navigation System, the Galileo System, or the Indian Regional Navigational System. 
The transceiver(s) 1412 may include any suitable radio component(s) for—in cooperation with the antenna(e) 1430—transmitting or receiving radio frequency (RF) signals in the bandwidth and/or channels corresponding to the communications protocols utilized by the electronic device(s) 1400 to communicate with other devices. The transceiver(s) 1412 may include hardware, software, and/or firmware for modulating, transmitting, or receiving—potentially in cooperation with any of antenna(e) 1430—communications signals according to any of the communications protocols discussed above including, but not limited to, one or more Wi-Fi and/or Wi-Fi direct protocols, as standardized by the IEEE 802.11 standards, one or more non-Wi-Fi protocols, or one or more cellular communications protocols or standards. The transceiver(s) 1412 may further include hardware, firmware, or software for receiving GNSS signals. The transceiver(s) 1412 may include any known receiver and baseband suitable for communicating via the communications protocols utilized by the electronic device(s) 1400. The transceiver(s) 1412 may further include a low noise amplifier (LNA), additional signal amplifiers, an analog-to-digital (A/D) converter, one or more buffers, a digital baseband, or the like. The sensor(s)/sensor interface(s) 1410 may include or may be capable of interfacing with any suitable type of sensing device such as, for example, inertial sensors, force sensors, thermal sensors, photocells, and so forth. Example types of inertial sensors may include accelerometers (e.g., MEMS-based accelerometers), gyroscopes, and so forth. The battery(ies) 1414 may be flexible, and may be any suitable battery type, such as a lithium-ion battery type. The battery(ies) 1414 may be a lithium-ion battery with a solid electrolyte membrane, such as those described in conjunction with FIGS. 1-13. The optional speaker(s)/microphone(s) 1416 may be any device configured to receive analog sound input or voice data. It should be appreciated that the program module(s), applications, computer-executable instructions, code, or the like depicted in FIG. 14 as being stored in the data storage 1420 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple module(s) or performed by a different module. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the electronic device(s) 1400, and/or hosted on other computing device(s) accessible via one or more networks, may be provided to support functionality provided by the program module(s), applications, or computer-executable code depicted in FIG. 14 and/or additional or alternate functionality. Further, functionality may be modularized differently such that processing described as being supported collectively by the collection of program module(s) depicted in FIG. 14 may be performed by a fewer or greater number of module(s), or functionality described as being supported by any particular module may be supported, at least in part, by another module. In addition, program module(s) that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. 
In addition, any of the functionality described as being supported by any of the program module(s) depicted in FIG. 14 may be implemented, at least partially, in hardware and/or firmware across any number of devices. It should further be appreciated that the electronic device(s) 1400 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the electronic device(s) 1400 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program module(s) have been depicted and described as software module(s) stored in the data storage 1420, it should be appreciated that functionality described as being supported by the program module(s) may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned module(s) may, in various embodiments, represent a logical partitioning of supported functionality. This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other module(s). Further, one or more depicted module(s) may not be present in certain embodiments, while in other embodiments, additional module(s) not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain module(s) may be depicted and described as sub-module(s) of another module, in certain embodiments, such module(s) may be provided as independent module(s) or as sub-module(s) of other module(s). One or more operations of the methods, process flows, and use cases of FIGS. 1-13 may be performed by a device having the illustrative configuration depicted in FIG. 14, or more specifically, by one or more engines, program module(s), applications, or the like executable on such a device. It should be appreciated, however, that such operations may be implemented in connection with numerous other device configurations. The operations described and depicted in the illustrative methods and process flows of any of FIGS. 1-13 may be carried out or performed in any suitable order as desired in various example embodiments of the disclosure. Additionally, in certain example embodiments, at least a portion of the operations may be carried out in parallel. Furthermore, in certain example embodiments, less, more, or different operations than those depicted in FIGS. 1-13 may be performed. Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. 
Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. Certain aspects of the disclosure are described above with reference to block and flow diagrams of systems, methods, apparatuses, and/or computer program products according to example embodiments. It will be understood that one or more blocks of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and the flow diagrams, respectively, may be implemented by execution of computer-executable program instructions. Likewise, some blocks of the block diagrams and flow diagrams may not necessarily need to be performed in the order presented, or may not necessarily need to be performed at all, according to some embodiments. Further, additional components and/or operations beyond those depicted in blocks of the block and/or flow diagrams may be present in certain embodiments. Accordingly, blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, may be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions. Program module(s), applications, or the like disclosed herein may include one or more software components including, for example, software objects, methods, data structures, or the like. Each such software component may include computer-executable instructions that, responsive to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed. A software component may be coded in any of a variety of programming languages. An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform. A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple architectures. A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, or a report writing language. In one or more example embodiments, a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. 
A software component may be stored as a file or other data storage construct. Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution). Software components may invoke or be invoked by other software components through any of a wide variety of mechanisms. Invoked or invoking software components may comprise other custom-developed application software, operating system functionality (e.g., device drivers, data storage (e.g., file management) routines, other common routines and services, etc.), or third-party software components (e.g., middleware, encryption, or other security software, database management software, file transfer or other network communication software, mathematical or statistical software, image processing software, and format translation software). Software components associated with a particular solution or system may reside and be executed on a single platform or may be distributed across multiple platforms. The multiple platforms may be associated with more than one hardware vendor, underlying chip technology, or operating system. Furthermore, software components associated with a particular solution or system may be initially written in one or more programming languages, but may invoke software components written in another programming language. Computer-executable program instructions may be loaded onto a special-purpose computer or other particular machine, a processor, or other programmable data processing apparatus to produce a particular machine, such that execution of the instructions on the computer, processor, or other programmable data processing apparatus causes one or more functions or operations specified in the flow diagrams to be performed. These computer program instructions may also be stored in a computer-readable storage medium (CRSM) that upon execution may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means that implement one or more functions or operations specified in the flow diagrams. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process. Additional types of CRSM that may be present in any of the devices described herein may include, but are not limited to, programmable random access memory (PRAM), SRAM, DRAM, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the information and which can be accessed. Combinations of any of the above are also included within the scope of CRSM. Alternatively, computer-readable communication media (CRCM) may include computer-readable instructions, program module(s), or other data transmitted within a data signal, such as a carrier wave, or other transmission. 
However, as used herein, CRSM does not include CRCM. Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. 16521277 amazon technologies, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Nov 27th, 2019 12:00AM https://www.uspto.gov?id=US11314819-20220426 Systems, apparatuses, and method for document ingestion Techniques for intaking one or more documents are described. An exemplary method includes receiving an ingestion request to ingest a document; extracting text from the document; pre-processing the extracted text to generate pre-processed text that is predictable and analyzable; generating an index entry for the extracted text, the index entry to map the extracted text to a reserved field of a plurality of reserved fields; and storing the extracted text, index entry, and pre-processed text in at least one data storage location. 11314819 1. A computer-implemented method comprising: receiving a first request to ingest a first document from a first data source into an enterprise search service in a provider network; acquiring the first document from the first data source based on crawling for documents in the first data source, and fetching an access control list for the first document; extracting a first text from the acquired first document; generating a first index entry for the extracted first text, the first index entry mapping the extracted first text to a reserved label of a plurality of reserved labels reserved by the enterprise search service for use by user accounts of the enterprise search service for labeling text in documents ingested into the enterprise search service in a common manner; receiving a second request to ingest a second document from a second data source into the enterprise search service; acquiring the second document from the second data source based on crawling for documents in the second data source, and fetching an access control list for the second document; extracting a second text from the acquired second document; generating a second index entry for the extracted second text, the second index entry mapping the extracted second text to the reserved label; training a machine learning model based on a corpus of training data including using the reserved label as a common label for the extracted first text and the extracted second text; using the generated first index entry to identify the extracted first text as a candidate answer to a given query; using the trained machine learning model to generate a confidence score for the extracted first text and the given query; determining that the confidence score exceeds a threshold; and indicating in a displayed graphical user interface that the extracted first text is an answer to the given query. 2. The computer-implemented method of claim 1, further comprising: causing display of a graphical user interface providing graphical user interface controls for selecting whether the enterprise search service is to use the reserved label for labeled text extracted from documents ingested into the enterprise search service from the first data source. 3. The computer-implemented method of claim 1, wherein the plurality of reserved labels comprise one or more of “data source”, “document type”, “body”, “title”, “view count”, “source URI”, “category”, “modification date”, “creation date”, “description”, “authors”, and “tags”. 4. 
A computer-implemented method comprising: receiving a first request to ingest a first document from a first data source into an enterprise search service in a provider network; extracting a first text from the first document; generating a first index entry for the extracted first text, the first index entry mapping the extracted first text to a reserved label of a plurality of reserved labels reserved by the enterprise search service for use by user accounts of the enterprise search service for labeling text in documents ingested into the enterprise search service in a common manner; receiving a second request to ingest a second document from a second data source into the enterprise search service; extracting second text from the second document; generating a second index entry for the extracted second text, the second index entry mapping the extracted second text to the reserved label; training a machine learning model based on a corpus of training data including using the reserved label as a common label for the extracted first text and the extracted second text; and causing display of a graphical user interface that indicates that the extracted first text is an answer to a given query. 5. The computer-implemented method of claim 4, further comprising: causing display of a graphical user interface providing graphical user interface controls for selecting whether the enterprise search service is to use the reserved label for labeled text extracted from documents ingested into the enterprise search service from the first data source. 6. The computer-implemented method of claim 4, wherein the plurality of reserved labels comprise one or more of “data source”, “document type”, “body”, “title”, “view count”, “source URI”, “category”, “modification date”, “creation date”, “description”, “authors”, and “tags”. 7. The computer-implemented method of claim 4, further comprising: causing display of a graphical user interface providing graphical user interface controls for mapping a label associated with the first data source to the reserved label. 8. The computer-implemented method of claim 4, wherein the given query is in a form of a natural language question; and wherein the first text is indicated in the graphical user interface as an answer to the natural language question. 9. The computer-implemented method of claim 4, wherein the first text is labeled with the reserved label in the first document and the second text is labeled with the reserved label in the second document. 10. The computer-implemented method of claim 4, wherein the first data source is associated with a first user account authorized to access the enterprise search service and the second data source is associated with a second user account authorized to access the enterprise search service. 11. The computer-implemented method of claim 4, further comprising: mapping a first label labeling the first text in the first document to the reserved label; and mapping a second label labeling the second text in the second document to the reserved label. 12. The computer-implemented method of claim 4, wherein the first data source is external to the provider network. 13. The computer-implemented method of claim 4, wherein the first data source is in the provider network. 14. 
The computer-implemented method of claim 4, further comprising: including a pointer to an access control list controlling access to the first document in the first index entry; and using the access control list to verify that a user account is authorized to access the first document before causing display of the graphical user interface that indicates that the extracted first text is an answer to the given query. 15. A system comprising: a first data storage to store one or more documents for a first user account authorized to access an enterprise search service in a provider network; a second data storage to store one or more documents for a second user account authorized to access the enterprise search service; one or more electronic devices to implement the enterprise search service, the enterprise search service including instructions that upon execution cause the enterprise search service to: receive a first request to ingest a first document from the first data storage; extract a first text from the first document; generate a first index entry for the extracted first text, the first index entry to map the extracted first text to a reserved label of a plurality of reserved labels reserved by the enterprise search service for use by user accounts of the enterprise search service for labeling text in documents ingested into the enterprise search service in a common manner; receive a second request to ingest a second document from the second data storage; extract a second text from the second document; generate a second index entry for the extracted second text, the second index entry to map the extracted second text to the reserved label; train a machine learning model based on a corpus of training data including using the reserved label as a common label for the extracted first text and the extracted second text; and indicate in a displayed graphical user interface that the extracted first text is an answer to a given query. 16. The system of claim 15, wherein the first data storage and the second data storage are in the provider network. 17. The system of claim 15, wherein the first text is labeled with the reserved label in the first document and the second text is labeled with the reserved label in the second document. 18. The system of claim 15, wherein the given query is in a form of a natural language question; and wherein the first text is indicated in the graphical user interface as an answer to the natural language question. 19. The system of claim 15, wherein the threshold is a first threshold; and wherein the first text is highlighted in the displayed graphical user interface with an emphasis determined based on the confidence score exceeding a second threshold in addition to exceeding the first threshold. 20. The system of claim 15, wherein the enterprise search service further includes instructions that upon execution cause the enterprise search service to: include a pointer to an access control list controlling access to the first document in the first index entry; and use the access control list to verify that the first user account is authorized to access the first document before causing display of the graphical user interface that indicates that the extracted first text is an answer to the given query. 20 BACKGROUND Enterprises are generating more data than ever before. Trying to find what data is relevant from that generated data is a non-trivial task. 
Traditional search solutions rely on keyword-based document analysis to find specific terms in the data, a general-purpose approach inherently limited by its inability to “understand” the content at a more granular level. BRIEF DESCRIPTION OF DRAWINGS Various embodiments in accordance with the present disclosure will be described with reference to the drawings, in which: FIG. 1 illustrates embodiments of an enterprise search service. FIG. 2 illustrates embodiments of the enterprise search service used for providing inference functionality. FIG. 3 illustrates embodiments of the enterprise search service used for providing inference functionality. FIG. 4 illustrates embodiments of a method for performing an inference (search on documents). FIG. 5 illustrates embodiments of an improved display of a result of an inference query. FIG. 6 illustrates embodiments of a method for performing an improved display of a result of an inference query. FIG. 7 illustrates embodiments of the enterprise search service 102 used for providing ingestion functionality. FIG. 8 illustrates embodiments of a method for performing ingestion of one or more documents. FIG. 9 illustrates embodiments of exemplary reserved fields for use in ingestion. FIG. 10 illustrates embodiments of a graphical user interface to be used in updating/adding/removing reserved fields for use in ingestion. FIG. 11 illustrates embodiments of a model building system. FIG. 12 illustrates embodiments of a method for model management. FIG. 13 illustrates embodiments of a graphical user interface to be used in active learning of question and answer(s) for training a machine learning model. FIG. 14 illustrates embodiments of a graphical user interface to be used in active learning of document ranking for training a machine learning model. FIG. 15 illustrates embodiments of a method for active learning for training a machine learning model. FIG. 16 illustrates embodiments of a method for training and use of a question generation model. FIG. 17 illustrates a first set of example candidate questions generated by a question generation model trained on known question and answer pairs. FIG. 18 illustrates a second set of example candidate questions generated by a question generation model trained on known question and answer pairs. FIG. 19 illustrates embodiments of a method for training a question generation model. FIG. 20 illustrates an example provider network environment according to some embodiments. FIG. 21 is a block diagram of an example provider network that provides a storage service and a hardware virtualization service to customers according to some embodiments. FIG. 22 is a block diagram illustrating an example computer system that may be used in some embodiments. DETAILED DESCRIPTION The present disclosure relates to methods, apparatus, systems, and non-transitory computer-readable storage media for indexing and searching text-based documents using machine learning. Documents are acquired, and text from the documents is extracted, indexed, etc., to make them searchable using term-based or question-based queries. These text-based documents, including frequently asked questions (FAQs), are to be searched according to a user query for one or more top ranked (most relevant) documents, one or more top ranked passages (where a passage is a limited number of contiguous lines that have been extracted from a given document), and/or one or more top ranked FAQs. 
Detailed herein are embodiments of an enterprise search service that enables users to intuitively search unstructured data using natural language. It returns specific and personalized answers to questions, giving end users an experience that comes closer to interacting with a human expert. In keyword-based document analysis approaches, it is hard to determine any sort of context for the content. Embodiments detailed herein allow a document corpus, hosted internally or externally, to be accessed and indexed. The indexing helps to provide a context for a document and provides a semblance of “structure” to an unstructured document. In some instances, a set of reserved fields for the indexes gives a more uniform context to labels in a document. As such, embodiments of the enterprise search service described below allow for the answering of factoid and non-factoid (e.g., how, what, why) questions by extracting relevant information from a document corpus. Such questions (e.g., “What is the latest version of software X”) are usually answerable in a few words. In some embodiments, the enterprise search service allows for the answering of short questions that can be answered in a few lines, such as those found in a frequently asked questions document (e.g., “What is the difference between the IP default-gateway, IP default-network, and IP route 0.0.0.0/0 commands?”). In some embodiments, the enterprise search service allows for the answering of descriptive questions through the identification of an entire relevant document, where the answer is the entire document. For example, “What is the CLI for Brazil?” Another deficiency of some search systems is in showing the user what is relevant about a search result. While some search results bold a particular word or phrase in the result, that is the extent of the help a user receives in identifying the “correct” answer to the search. Detailed herein are embodiments of further emphasizing the “correct” answer based on the confidence of the machine learning model(s) that found it. Answers that are not as “correct” are either not emphasized or emphasized in a different manner. Many enterprises use log analytics or have use cases, such as customer service and searching business reports and FAQs, that could potentially benefit from embodiments detailed herein. The embodiments detailed herein enable these enterprises to build smarter enterprise search applications that cover a wider range of sources securely, and provide strong natural language understanding capabilities, at a fraction of the time and complexity needed to implement their own search solution. FIG. 1 illustrates embodiments of an enterprise search service. The enterprise search service 102 allows for the querying or searching of documents and/or proper subsets thereof of an enterprise using one or more machine learning models. Details of various aspects of this enterprise search service 102 are discussed below. The documents and/or proper subsets thereof are ingested prior to any such querying. In some embodiments, the enterprise search service 102 provides the capability to ingest documents from data sources 105 internal to a provider network 100 and data sources 106 external to the provider network 100 (e.g., stored at a third-party location, stored locally, etc.). An ingestion service 130 allows for the ingestion of documents into the enterprise search service 102. 
Documents may be pulled from data sources (e.g., in response to a request) and/or pushed from data sources (e.g., a synchronization when a document is added or altered). The ingestion service 130 may also fetch access control lists (ACLs) associated with the documents. The ACLs can be used to determine if a search result is allowed to be served. To get documents from the data sources 105 or 106, the ingestion service couples to a connector service 180, which offers a plurality of connectors to connect to different data sources and receive data (as a push or a pull) from those sources according to the appropriate protocol for a particular data source. Note that different data sources may use different transmission protocols, storage protocols, encryption protocols, etc. The data connectors of the connector service 180 are configured using a control plane 170. This control plane 170 contains workflows for resource management of the enterprise search service 102. The control plane 170 may also be used to configure a model building pipeline 160, which builds specific models, vocabularies, and embeddings to be hosted in the model hosting service 110 and used in answering a query. Note that in some embodiments, a model management service 150 may be used to refresh a given model. The ingestion service 130 also extracts text from documents, pre-processes the extracted text (e.g., tokenizes, normalizes, and/or removes noise), calls an indexing service to generate index entries for the text, and causes the documents (or a subset thereof) to be stored. The indexing service 140 indexes documents that have been acquired by the ingestion service 130 into one or more indexes 107. An index is a data structure of organized data that maps the data to a plurality of fields. Each document or subset of a document (e.g., a passage) is identified with a unique identifier. In some embodiments, the index is composed of a plurality of JSON documents. In some embodiments, the index is an inverted index that lists every unique word that appears in any document and identifies all of the documents each word occurs in. An index can be thought of as an optimized collection of documents, and each document is a collection of fields, which are the key-value pairs that contain data. Each indexed field has a dedicated, optimized data structure. For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees. The indexing service 140 may be schema-less, which means that documents can be indexed without explicitly specifying how to handle each of the different fields that might occur in a document. When dynamic mapping is enabled, the indexing service 140 automatically detects and adds new fields to the index. However, as noted below, a schema of reserved fields may be used to map detected data into datatypes. The reserved fields allow for distinguishing between full-text string fields and exact-value string fields, performing language-specific text analysis, optimizing fields for partial matching, and/or using datatypes that are not automatically detected. 
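As an illustration of the inverted-index structure just described, the following is a minimal sketch in Python; the tokenizer, field names, and example documents are hypothetical stand-ins, not details from the disclosure:

from collections import defaultdict

# Minimal sketch of the inverted index described above: every unique word
# maps to the identifiers of the documents in which it occurs. The tokenizer
# stands in for the pre-processing (tokenize/normalize/remove noise) step.

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def build_inverted_index(docs: dict[str, dict[str, str]]) -> dict[str, set[str]]:
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, fields in docs.items():
        # Each document is a collection of key-value fields; reserved fields
        # such as "title" and "body" label text in a common manner.
        for value in fields.values():
            for token in tokenize(value):
                index[token].add(doc_id)
    return index

# Hypothetical example documents:
docs = {
    "doc-1": {"title": "Default gateway commands", "body": "IP route 0.0.0.0/0 usage"},
    "doc-2": {"title": "CLI reference", "body": "Latest version of software X"},
}
print(sorted(build_inverted_index(docs)["ip"]))  # -> ['doc-1']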
Once a set of documents has been indexed, a query against that set of documents may be made via an inference service 120. The inference service 120 handles search queries from end users by performing query understanding (query classification and enrichment), invoking the indexing service 140 to get a relevant set of documents for the query, retrieving the relevant set of documents, and invoking one or more models of the model hosting service 110 to deduce a search result for a given query. Examples of models utilized by the inference service 120 that are run in the model hosting service 110 include, but are not limited to, a question/answer (e.g., reading comprehension) model which extracts answers from passages, a document/passage ranking model which sorts documents in an order of relevance with respect to the query, and a FAQ matching model which attempts to identify the correct answer for a given question from a given FAQ document. A frontend 104 of the enterprise search service 102 couples to one or more search service components 103 to provide a way for external communications (e.g., from edge device 108, etc.) with the enterprise search service 102. For example, through the frontend 104 a user may communicate with the ingestion service 130 to configure and start an ingestion of one or more documents, provide a query to be served by the inference service 120, etc. As shown, in some embodiments the enterprise search service 102 is a service provided by a provider network 100. The provider network 100 (or, “cloud” provider network) provides users with the ability to utilize one or more of a variety of types of computing-related resources such as compute resources (e.g., executing virtual machine (VM) instances and/or containers, executing batch jobs, executing code without provisioning servers), data/storage resources (e.g., object storage, block-level storage, data archival storage, databases and database tables, etc.), network-related resources (e.g., configuring virtual networks including groups of compute resources, content delivery networks (CDNs), Domain Name Service (DNS)), application resources (e.g., databases, application build/deployment services), access policies or roles, identity policies or roles, machine images, routers and other data processing resources, etc. These and other computing resources may be provided as services, such as a hardware virtualization service that can execute compute instances, a storage service that can store data objects, etc. The users (or “customers”) of provider networks 100 may utilize one or more user accounts that are associated with a customer account, though these terms may be used somewhat interchangeably depending upon the context of use. Users may interact with a provider network 100 across one or more intermediate networks 101 (e.g., the internet) via one or more interface(s), such as through use of application programming interface (API) calls, via a console implemented as a website or application, etc. The interface(s) may be part of, or serve as a front-end to, a control plane (e.g., control plane 170) of the provider network 100 that includes “backend” services supporting and enabling the services that may be more directly offered to customers. For example, a cloud provider network (or just “cloud”) typically refers to a large pool of accessible virtualized computing resources (such as compute, storage, and networking resources, applications, and services). A cloud can provide convenient, on-demand network access to a shared pool of configurable computing resources that can be programmatically provisioned and released in response to customer commands. 
These resources can be dynamically provisioned and reconfigured to adjust to variable load. Cloud computing can thus be considered as both the applications delivered as services over a publicly accessible network (e.g., the Internet, a cellular communication network) and the hardware and software in cloud provider data centers that provide those services. A cloud provider network can be formed as a number of regions, where a region may be a geographical area in which the cloud provider clusters data centers. Each region can include multiple (e.g., two or more) availability zones (AZs) connected to one another via a private high-speed network, for example a fiber communication connection. An AZ may provide an isolated failure domain including one or more data center facilities with separate power, separate networking, and separate cooling from those in another AZ. Preferably, AZs within a region are positioned far enough away from one another that a single natural disaster (or other failure-inducing event) should not take more than one AZ offline at the same time. Customers can connect to an AZ of the cloud provider network via a publicly accessible network (e.g., the Internet, a cellular communication network). To provide these and other computing resource services, provider networks 100 often rely upon virtualization techniques. For example, virtualization technologies may be used to provide users the ability to control or utilize compute instances (e.g., a VM using a guest operating system (O/S) that operates using a hypervisor that may or may not further operate on top of an underlying host O/S, a container that may or may not operate in a VM, an instance that can execute on “bare metal” hardware without an underlying hypervisor), where one or multiple compute instances can be implemented using a single electronic device. Thus, a user may directly utilize a compute instance (e.g., provided by a hardware virtualization service) hosted by the provider network to perform a variety of computing tasks. Additionally, or alternatively, a user may indirectly utilize a compute instance by submitting code to be executed by the provider network (e.g., via an on-demand code execution service), which in turn utilizes a compute instance to execute the code, typically without the user having any control of or knowledge of the underlying compute instance(s) involved. Circles with numbers inside of them represent exemplary actions that may be taken for performing an inference (query). At circle 1, an inference request is sent by an edge device 108 to the enterprise search service 102. The frontend 104 calls the inference service 120, which begins processing the request at circle 2. The processing of the request includes accessing one or more indexes 107 via the indexing service 140 at circle 3 to get identifiers of sets of documents to analyze, accessing the identified sets of documents (or text thereof) from document storage 109, and providing the documents (or text thereof) and the query to one or more machine learning models in the model hosting service 110 at circle 5 to determine one or more of top documents, a top passage, and/or a top FAQ. The result of the determination by the one or more machine learning models is provided to the requestor at circle 6 (subject to any restrictions). The provision of the result may also include using an enhanced display. 
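The circle 1 through circle 6 flow just described can be summarized in a short orchestration sketch; the service objects and method names below are hypothetical stand-ins for the indexing service 140, document storage 109, and model hosting service 110, not an actual API:

# Minimal sketch of the inference flow described above (circles 1-6).
# The indexing_service, document_storage, and models interfaces are
# hypothetical stand-ins, not the service's actual APIs.

def handle_inference_request(query: str, indexing_service, document_storage, models) -> dict:
    # Circles 2-3: ask the index(es) for identifiers of candidate documents.
    doc_ids = indexing_service.lookup(query)
    # Circle 4: fetch the identified documents (or their text) from storage.
    documents = [document_storage.get(doc_id) for doc_id in doc_ids]
    # Circle 5: hosted models determine top documents, a top passage, and a top FAQ.
    result = {
        "top_documents": models["document_passage_ranking"].rank(query, documents),
        "top_passage": models["question_answer"].extract(query, documents),
        "top_faq": models["faq_matching"].match(query, documents),
    }
    # Circle 6: the result is returned to the requestor, subject to any
    # restrictions (e.g., ACL checks) applied before display.
    return result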
FIG. 2 illustrates embodiments of the enterprise search service 102 used for providing inference functionality. In particular, the aspects shown may be used to respond to a search query on a set of documents. The frontend 104 takes in a search request (or query) and provides that request to an inference orchestrator 220 of the inference service 120. In some embodiments, the query is submitted as an application programming interface (API) call. In some embodiments, a default response to such a query includes a relevant passage, a matching FAQ, and a relevant document. The query may contain one or more fields indicating how the search is to be performed and/or what is to be returned. These one or more fields include, for example, one or more of: an attribute filter field which enables filtered searches based on document attributes; an exclude document attributes field indicating what attributes to exclude from a response; a facets field defining what document attributes to count; an include document attributes field indicating the document attributes to include in a response; an index identifier field indicating the index(es) to search; a page number field indicating the number of pages of results to return; a page size field indicating the size of pages of results to return; a query result type configuration field which sets the type of query (e.g., FAQ, passage, document); a query text field which includes a string of text to search for; and a user context field which identifies the end user making the query so it can be determined if the query result should be filtered based on the user (e.g., an access control list indicates that the user is not allowed to see the content, such as a regular employee searching for health records of another employee).
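As a concrete illustration of such a call, a query carrying a few of the fields enumerated above might look like the following; the field names and payload shape are hypothetical, chosen only to mirror the description.

```python
# Hypothetical query API payload; field names are illustrative only.
query_request = {
    "IndexId": "hr-documents",                        # index identifier field
    "QueryText": "how many vacation days do I get?",  # query text field
    "AttributeFilter": {"Department": "HR"},          # attribute filter field
    "Facets": ["Author", "SourceType"],               # facets field
    "PageNumber": 1,                                  # page number field
    "PageSize": 10,                                   # page size field
    "QueryResultTypeConfig": "PASSAGE",               # FAQ, passage, or document
    "UserContext": {"UserId": "employee-123"},        # for ACL-based filtering
}
```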
The inference orchestrator 220 coordinates various services to perform an inference using the query. In some embodiments, the inference orchestrator 220 includes a state machine or algorithm defining the actions to take. In some embodiments, the inference orchestrator 220 performs query classification and enrichment (or couples to a component that does). For example, in some embodiments, key phrases, entities, syntax, topics, and/or classifications are extracted. In some embodiments, a classifier machine learning model determines what type of question is being presented. Factoid questions and non-factoid questions may get different treatment with respect to what models are used to determine top results and how results are shown. The inference orchestrator 220 couples to the indexing service 140 and utilizes the indexing service 140 to access one or more indexes 107 to get matching document identifiers for the query. The indexes 107 include a FAQ index 107A, a question/answer index 107B, and a document/passage index 107C. In some instances, the inference orchestrator 220 provides an indication of what index(es) to use. In some embodiments, the metadata 210 provides a physical location of the indexes 107 for the indexing service 140 to use. The results (e.g., document identifiers) of various index queries are received by the inference orchestrator 220 and used to retrieve one or more documents for use by one or more machine learning models (e.g., FAQ model 212C, question/answer model 212B, and document/passage ranking model(s) 212A) hosted by the model hosting service 110. The inference orchestrator 220 retrieves the identified documents (e.g., an entire document, passage, or FAQ) from text/document storage 109 using document storage service 208.

The retrieved documents are then supplied, along with aspects of the query, to one or more of the models 212A-C of the model hosting service 110 to identify one or more of: one or more top ranked documents, one or more top ranked passages, and/or one or more top ranked FAQs. Note that the models 212A-C provide confidence scores for their outputs. Note too that the document storage service 208 stores document artifacts that will be used at the time of inference to extract the answer for a given query.

FIG. 3 illustrates embodiments of the enterprise search service 102 used for providing inference functionality. A query 300 is received by the inference orchestrator 220. This is shown at circle 1. The inference orchestrator 220 fires the query against one or more indexes 107. In some embodiments, the query is fired against the document index 107A and the passage index 107B (shown at circle 2). An identification of a set of "top" documents (e.g., top 1,000 documents) and "top" passages (e.g., 5,000 passages) is provided from the indexing service 140 back to the inference orchestrator 220. The associated documents and passages are retrieved (shown at circle 3) and then sent to the document/passage ranking model(s) 212A. The document/passage ranking model(s) 212A analyzes and re-ranks the top documents based on relevance scores and, for a top subset (e.g., 100) of the ranked documents, determines a set number (e.g., 3) of passages for each of the top subset of ranked documents (shown at circle 4). In some embodiments, a feature-based deep cross network (DCN) analyzes and re-ranks the top documents. Further, in some embodiments, a bidirectional encoder representations from transformers (BERT) model takes the top-ranked subset of documents, finds the passages, and outputs relevance scores. The relevance scores of the DCN and BERT are combined to obtain the final reranking of top documents. In some embodiments, when the data is purely textual documents with no metadata fields, the DCN can be bypassed and only the BERT model used for reranking the top 100 documents directly. Note that an output of the document/passage ranking model(s) 212A is a set of top ranking document(s) 304 and/or top ranking passages. In some embodiments, the top ranking passages are found using a union of the top ranking documents and the indexed passages.
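The description says only that the DCN and BERT relevance scores are "combined"; a minimal sketch, assuming a simple weighted sum as the combination rule, is:

```python
# Sketch of combining DCN and BERT relevance scores into a final document
# reranking; the weighted-sum rule and alpha value are assumptions.
def combine_and_rerank(dcn_scores, bert_scores, alpha=0.5):
    # bert_scores covers only the top subset (e.g., 100) of DCN-ranked
    # documents, so the combination is computed over that subset.
    combined = {
        doc_id: alpha * dcn_scores[doc_id] + (1 - alpha) * bert_score
        for doc_id, bert_score in bert_scores.items()
    }
    return sorted(combined, key=combined.get, reverse=True)
```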
The question and answer model 212B is used to determine a set of one or more top passages for the query. The query is fired against the passage index 107B at circle 5 to find a top number (e.g., 100) of passages which are retrieved and sent to the document/passage ranking model(s) 212A for analysis and reranking. In particular, in some embodiments, the BERT model receives the top passages, re-ranks the passages, and sends the top few (e.g., 5) to the question and answer model 212B at circle 6. In some embodiments, the question and answer model 212B is also BERT-based. The question and answer model 212B analyzes these few passages and outputs a top passage 306 (that is, at times, highlighted) with multiple answer candidates. In some embodiments, when the top passage's confidence score exceeds a first threshold it is displayed. In some embodiments, when aspects of the top passage's confidence score exceed a second, more stringent threshold, those aspects of the top passage are highlighted as the best answer while less confident aspects are otherwise enhanced (e.g., bolded).

The FAQ model 212C is used to determine a set of one or more top FAQs for the query. The query is fired against the FAQ questions index 107C at circle 7 and the top set of matching questions are sent to the FAQ model 212C from the text/document storage 109. The FAQ model 212C re-ranks the top set of questions and returns the most relevant questions along with their answers 308. In some embodiments, the FAQ model 212C is a BERT-based model.

FIG. 4 illustrates embodiments of a method for performing an inference (search on documents). Some or all of the operations (or other processes described herein, or variations, and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors. The computer-readable storage medium is non-transitory. In some embodiments, one or more (or all) of the operations are performed by components of the other figures, such as under the control of the inference orchestrator 220 which calls the indexing service 140 for document identifiers, fetches the identified documents using the document storage service 208, and calls one or more ML models in the model hosting service 110 to analyze the fetched documents.

At 401, a search query is received at a frontend. The search query includes a question to be answered. In some embodiments, the search query includes an indication of what type of answer is expected (e.g., a listing of documents, a passage, and/or a FAQ). For example, a prefix (or postfix) such as passage:QUESTIONTEXT may be used. Or, a selection from a list of potential results may be used. Examples of a search query API call have been detailed above. The search query is performed to generate one or more results at 402. Documents that match the search query are identified by querying one or more indexes at 403. For example, in some embodiments, an index for documents is queried for a set of "top" matching documents at 405, an index for passages is queried for a set of "top" matching passages at 406, and/or an index for FAQs is queried for a set of "top" FAQs at 407. Note that these indexes may be independent of each other or combined in any manner. As noted, an inference orchestrator may cause these one or more queries to occur. In some embodiments, the query is formed such that it requests a "match" for the words of the question. A match query returns documents that match provided text, numbers, dates, or Boolean values. The match query may limit the number of results, the number of words of the question to use, etc. At 409, the identified documents, passages, and/or FAQs are fetched according to the matched data. As discussed, an inference orchestrator may cause these one or more fetches to occur. The documents, passages, and/or FAQs may be stored in separate locations or together. Additionally, the documents, passages, and/or FAQs may be pre-processed to make subsequent analysis easier. In some embodiments, the fetching is of whole documents. In some embodiments, the fetching is of extracted text from the documents.
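A minimal sketch of steps 403 through 409 follows, assuming hypothetical index and storage interfaces; the "top" limits for documents and passages mirror the examples given earlier, while the FAQ limit is an assumption.

```python
# Sketch of identifying (403-407) and fetching (409) matching items;
# index and storage interfaces are hypothetical.
def identify_and_fetch(query, doc_index, passage_index, faq_index, storage):
    matches = {
        "documents": doc_index.match(query, limit=1000),     # 405
        "passages": passage_index.match(query, limit=5000),  # 406
        "faqs": faq_index.match(query, limit=100),           # 407 (assumed limit)
    }
    # 409: fetch whole items, or their extracted text, by identifier.
    return {kind: storage.fetch(ids) for kind, ids in matches.items()}
```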
One or more of a top ranked passage, top ranked documents, and a top ranked FAQ are determined from the fetched documents, passages, and/or FAQs based upon one or more invocations of one or more machine learning models for the search query at 411. Several operations may occur in this action. Note that the models produce a confidence score for their results. In some embodiments, a proper subset of the identified (fetched) set of documents is determined using a first machine learning model at 413. For example, in some embodiments, the fetched documents are reranked using a first model (e.g., a DCN model) according to relevance scores, and then a second model (e.g., BERT-based) looks at some top number of those reranked documents (e.g., 100) and uses top passages from the retrieved passages for those top documents to determine a relevance score per document. The relevance scores from the first and second models are combined to generate a set of top ranked documents. In other embodiments, only the reranking using the first model is performed. In some embodiments, a proper subset of the identified (and fetched) set of passages is identified using a second machine learning model based upon the query and fetched passages at 417. This proper subset is a reranking of the passages. This may be the same model as the BERT-based model detailed as being used at 413. This reranked subset is provided to a third model (along with aspects of the query) which determines a top passage from the reranked subset at 419. The third model is a BERT-based model in some embodiments. In some embodiments, a proper subset of the identified (and fetched) set of FAQs is determined using a fourth machine learning model on the fetched FAQs and the query at 421. This proper subset includes the top ranked FAQ. In some embodiments, the fourth machine learning model is BERT-based.

One or more of the top ranked passage, the top ranked documents, and the top ranked FAQ are returned at 423. The return may include displaying the result. In some embodiments, what is returned is subject to an access control list and/or a confidence score threshold. For example, if the top ranked document, etc., is not allowed to be shared with the searching user based on the access control list, then either nothing is returned, or a lower ranked document, etc., is returned instead. In some embodiments, an improved display of the result is utilized. Note that in some embodiments, the returned result is sent back through the frontend and/or inference orchestrator. At 425, in some embodiments, feedback is received on the returned one or more of the top ranked passage, the top ranked documents, and the top ranked FAQ. This feedback may be used to tune the models.
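A minimal sketch of the return step at 423, assuming results arrive sorted by descending confidence and that the ACL check and confidence threshold behave as described (all helper names are hypothetical):

```python
# Sketch of returning a result subject to an ACL and a confidence floor;
# falls back to the next-ranked allowed result, or returns nothing.
def select_returnable(ranked_results, user, acl, min_confidence=0.5):
    for result in ranked_results:  # assumed sorted by descending confidence
        if result["confidence"] < min_confidence:
            break                  # everything after this is ranked lower still
        if acl.allows(user, result["document_id"]):
            return result          # highest-ranked result the user may see
    return None                    # nothing shareable met the threshold
```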
FIG. 5 illustrates embodiments of an improved display of a result of an inference query. As shown, a graphical user interface (GUI) 500 allows a user to input a query using a query input mechanism 504 (e.g., an input box). In some embodiments, the user may further define the dataset using a dataset indicator/selector 502. For example, the user may define that HR documents are to be queried, or that HR FAQs are to be queried, etc. In some embodiments, by default documents, passages, and FAQs are all queried. The GUI 500 provides an indication of a number of results 506 that are returned along with the results 508, 518 themselves. In some embodiments, an answer 505 to the question being asked is extracted and shown proximately.

Note that there may be a result shown per index type (e.g., document, passage, and FAQ). In this example, the first result 508 shows text 510 which includes highlighting for a particularly relevant aspect of the result. In particular, the text "RESULT" has been highlighted from a document's text. This highlighted text is the top ranked text (or at least the top ranked text that the user is allowed to see) where the result exceeds one or more confidence score thresholds. In some embodiments, the highlighting is shown using a font change, and in some embodiments the text is color highlighted (as in using a yellow background for the section of text). The first result 508 also includes the location of the result 512 (e.g., the document location) and a means to provide feedback 514, e.g., as feedback input 1106 in FIG. 11. The second result 518 shows text 520 that emphasizes a relevant aspect of the result less strongly. In particular, the text "RESULT" has been emphasized from a document's text, but in a less conspicuous way than the highlighted text. This emphasized text is the top ranked text (or at least the top ranked text that the user is allowed to see) where the result exceeds one or more confidence score thresholds (but not as much as highlighted text would). The emphasis may be bolding, italicizing, underlining, changing the font size, etc. The second result 518 also includes the location of the result 522 (e.g., the document location) and a means to provide feedback 524, e.g., as feedback input 1106 in FIG. 11. Feedback input may be in the form of an API request that includes one or more parameters such as one or more of: click feedback items (an alert that the search result was taken), an identifier of the index that was queried, an identifier of the query itself, or relevance feedback such as a thumbs-up or thumbs-down.

FIG. 6 illustrates embodiments of a method for performing an improved display of a result of an inference query. Some or all of the operations (or other processes described herein, or variations, and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors. The computer-readable storage medium is non-transitory. In some embodiments, one or more (or all) of the operations are performed by components of the other figures.

A search query is received at 601. For example, a search query is received at the frontend 104 and passed to the inference orchestrator 220. Examples of search queries have been detailed above. The search query is performed to generate one or more results at 603. Embodiments of such performance have been detailed above (e.g., with respect to at least FIG. 4). The results may include, but are not limited to: text from top ranking documents, a top ranked passage, a top ranked FAQ, etc. In some embodiments, the performance of the search query includes using one or more ML models. The one or more results are displayed at 605. Note that in some embodiments, the returned result is sent back through the frontend 104 and/or inference orchestrator 220 for display. It is typically in one of these components that a determination of what can be shown, and/or an emphasis of certain aspects of a result, is made, for example, through the application of an access control list, etc. In this example, it is assumed that the result is allowed to be displayed, but how the result is to be displayed differs depending upon how confident the underlying models were in their analysis.

At 607, a determination is made as to whether an aspect of the result exceeds a first confidence threshold. For example, does the confidence score from the one or more ML models indicate that the result is deemed to be fairly correct? When the confidence score is low, this indicates that the result may not be particularly good. When the first threshold is not met, then the result is either not shown, or there is no emphasis in the display of the result at 609. When the first threshold is met, then the result is shown with emphasis. The type of emphasis may differ depending upon a determination, made at 611, as to whether an aspect of the result exceeds a second confidence threshold that is greater than the first threshold. When the second threshold is not met, then a first type of emphasis is used to emphasize the aspect at 613. Examples of a first type of emphasis include, but are not limited to: bolding, underlining, changing a font, changing a font size, and italicizing. When the second threshold is met, then a second type of emphasis is used to emphasize the aspect at 615. This second type of emphasis is meant to stand out more than the first type of emphasis and may include highlighting, bolding, underlining, changing a font, changing a font size, italicizing, or a combination thereof. Note that the first and second types of emphasis are different.
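The two-threshold determination at 607 through 615 reduces to a small branch; the sketch below uses illustrative threshold values, which the description does not specify.

```python
# Sketch of the FIG. 6 display logic: no emphasis below the first
# threshold, a first (lighter) emphasis between the thresholds, and a
# second (stronger) emphasis above the second. Values are illustrative.
def choose_emphasis(confidence, first_threshold=0.5, second_threshold=0.8):
    if confidence < first_threshold:
        return None          # 609: no emphasis (or result not shown)
    if confidence < second_threshold:
        return "bold"        # 613: first type of emphasis
    return "highlight"       # 615: second, stronger type of emphasis
```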
FIG. 7 illustrates embodiments of the enterprise search service 102 used for providing ingestion functionality. The frontend 104 takes in intake requests, index creation requests, etc., and passes those requests to the ingestion service 130. The ingestion service 130 performs document validation on documents retrieved from data sources 105/106. In some embodiments, the ingestion service 130 coordinates various services to perform index creation, index updating, and other ingestion tasks detailed below. In other embodiments, the ingestion service places documents to be processed in a queue 735 which includes an extraction and pre-process pipeline. Intake requests ask that a set of documents be taken in such that the documents are acquired, indexed, pre-processed, stored, etc. As shown, the ingestion service 130 is coupled to a plurality of services. The connector service 180 receives (either as a push or a pull) documents from data sources 105/106, where the physical location may be provided by metadata 210. The indexing service 124 pulls documents and/or text from the queue 735 (which may be pre-processed) to create or update indexes 107 associated with documents (including passages and FAQs). The metadata 210 may provide the physical location of those indexes 107. The document storage service 208 also pulls documents from the queue 735 to store documents and chunks thereof in text/document storage 109. Prior to updating an index, it needs to be created. In some embodiments, a create index API call is received by the frontend 104, which calls the indexing service 124 to generate an index of indexes 107.
The create index request includes one or more fields to inform the behavior of the indexing service 124, such as a field for a description of the index, a field for an index name, a field for the role that gives permission for logs and metrics, a field identifying an encryption key, etc. When an index has been created, it can be updated. This updating may be in the form of a single update or a batch update which causes an ingestion of text and unstructured text into an index, the addition of custom attributes to the documents (should it be desired), an attachment of an access control list to the documents added to the index, a storage of the text, a pre-processing of the text (and storage thereof), etc. In some embodiments, an update request includes one or more fields to inform the behavior of the indexing service 124, the document storage service 208, the queue 735 including the extraction and pre-processing pipeline, and the connector service 180, and includes one or more fields such as a field for a location of one or more documents, a field for the documents themselves, a field for the index's name, a field for the role that gives permission for logs and metrics, etc. The extraction and pre-processing pipeline extracts text from documents and pre-processes it (e.g., tokenizing, etc.). In some embodiments, the extracted text (e.g., tokens) is broken down into overlapping passages using sliding windows by the extraction and pre-process pipeline. The overlapping passages are then indexed and/or stored.
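A minimal sketch of the sliding-window passage splitting just described; the window and stride sizes are assumptions, as the description does not give concrete values.

```python
# Sketch of breaking extracted tokens into overlapping passages with a
# sliding window, so answers spanning a passage boundary are not lost.
def split_into_passages(tokens, window=200, stride=100):
    if len(tokens) <= window:
        return [tokens]
    return [tokens[i:i + window]
            for i in range(0, len(tokens) - stride, stride)]
```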
FIG. 8 illustrates embodiments of a method for performing ingestion of one or more documents. Some or all of the operations (or other processes described herein, or variations, and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors. The computer-readable storage medium is non-transitory. In some embodiments, one or more (or all) of the operations are performed by components of the other figures.

At 801, an intake request is received. For example, the frontend 104 receives the intake request. One or more documents are acquired from one or more data sources according to the request at 803. For example, when the request indicates a particular storage bucket, documents are gathered from that bucket. Note that acquiring could simply be pulling one or more documents from the intake request. In some embodiments, the documents are acquired by crawling one or more data sources at 805. This crawling of documents is performed by the connector service 180 in some embodiments and may include gathering from internal and/or external sources. In some embodiments, the access control list (ACL) for the acquired one or more documents is fetched at 807. As noted above, ACLs may be used to determine what results a user can see for a performed inference. The ACL may be stored with the document or pointed to by the document's owner. Text is extracted from the acquired one or more documents and pre-processed at 809. Metadata may also be extracted. For example, text may be extracted from a document that includes non-text such as images. The pre-processing of the extracted text includes one or more of tokenizing, normalizing, and/or removing noise. This is performed by the extraction and pre-processing pipeline 735 per document acquired. Note that extracted text may include passages. The extracted text and pre-processed text are stored at 811. This may be performed by the document storage service 208, which puts this extracted text and pre-processed text in text/document storage 109. The text and/or pre-processed text are used during inference. At 813, index entries for the extracted text are generated. In some embodiments, the index entry includes a pointer to an ACL. The generation of the index includes mapping labels of the document into fields for the index entry. In some embodiments, this mapping utilizes reserved fields. Reserved fields are "default" fields that allow for standardization across multiple different accounts. This standardization may help in the training of the models used in an inference, as developing a corpus of training data should be easier when common labels are used (as opposed to training using different labels per user account). For example, a reserved field of "TITLE" allows user account 1 and user account 2 to use the same label in their documents. In some embodiments, existing labels are mapped to the reserved fields. That mapping may be automatic or according to a mapping provided by an administrator. In some embodiments, the underlying acquired documents are stored at 815.

FIG. 9 illustrates embodiments of exemplary reserved fields for use in ingestion. Documents can use labels to indicate what the text is, for example, a label indicating that the text is the title. As discussed above, index entries include fields for the text content and these fields correspond to labels. In the enterprise search service 102 described herein, a set of "reserved" labels may be utilized in index entry fields. These reserved labels allow for text to be labeled in a common manner between documents, users, etc. Examples of names of "reserved" fields 901 and their corresponding datatypes 903 are shown. In some embodiments, body and title are defaults. While these fields are "reserved," in some embodiments these fields are updatable. For example, if the name "modification date" is not what anyone is using, it can be changed to reflect usage. Additionally, new "reserved" fields may be added as needed or desired. Note that the use of "reserved" fields may be overridden in some embodiments.

FIG. 10 illustrates embodiments of a graphical user interface to be used in updating/adding/removing reserved fields for use in ingestion. As shown, a GUI 1000 allows a user to adjust reserved fields, including adding a field, removing a field, and updating a field. The GUI 1000 includes a reserved field search mechanism 1004 (e.g., an input box). In some embodiments, the user may further define the dataset using a dataset indicator/selector 1002. For example, the user may define that HR documents have a certain set of reserved fields, whereas finance documents use a different set. The display includes, per reserved field, a reserved field name 1006, an explicit mapping to the name field 1008, a data type field 1012, and an indication of whether the reserved field is used. The reserved field name 1006 is the label used by the indexing service 140. The explicit mapping to the name field 1008 allows a user to provide to the indexing service 140 a mapping of labels in existing documents to the reserved field.
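For example, an administrator-provided mapping of the kind entered through field 1008 might be expressed as follows; both the document labels and the reserved-field names here are hypothetical.

```python
# Hypothetical mapping from labels in a customer's existing documents to
# reserved fields, and its application when building an index entry.
label_to_reserved_field = {
    "doc_title": "TITLE",
    "body_text": "BODY",
    "last_changed": "MODIFICATION_DATE",
}

def to_index_entry(document_labels, mapping):
    # Map each document label onto its reserved field; unmapped labels
    # keep their original names.
    return {mapping.get(label, label): value
            for label, value in document_labels.items()}
```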
The fields 1006, 1008, and 1012 are editable, and the application of the update field functionality 1016 commits changes. When the add field functionality 1018 is used, a new reserved field entry is added, allowing a user to add a reserved field name, an explicit mapping to the name field, a data type field, and an indication of whether the reserved field is used. This may be performed using one or more GUIs (not shown). A field can be removed by using the use field 1014 and then applying the remove field functionality 1014.

FIG. 11 illustrates embodiments of a model building system. Model building system may be utilized to build the model and refresh it. Depicted model building system includes frontend 104 coupled to an ingestion service 130, a model building pipeline 160 coupled to the ingestion service 130, a metric aggregator 1102, a control plane 170, and a model storage service 1104. Depicted model building system includes metric aggregator 1102 coupled to control plane 170, and model storage service 1104 coupled to model management service 150. Depicted model management service is coupled to model hosting service 110, which may host one or any combination of: document/passage ranking model(s) 212A, question/answer model 212B, and FAQ model 212C. In one embodiment, ingestion service 130 receives a document or documents that are to be ingested and sends a report of the ingestion metrics (e.g., metrics that indicate the number of documents, the index size of the corpus of documents, index failures, etc.) to metric aggregator 1102; metric aggregator 1102 polls whether the document corpus has changed enough (e.g., exceeds a threshold) to trigger model building; when the model build is triggered, an indication is sent to control plane 170 such that the control plane causes the model building pipeline 160 to build the model (e.g., machine learning model); and the built model is then saved by model storage service 1104. Model building system may further include a training data generation service 1108, e.g., to create training data 1110 from the user's data. Training data 1110 may be utilized by the model building pipeline 160 in the creation of a model and/or used by model management service 150 in the refresh of a model. A model (e.g., at or after initial use) may have improved functionality with further training. The training may be based at least in part on feedback input 1106, e.g., feedback provided by a user. In certain embodiments, a model management service 150 is to pull the model (e.g., from model storage service 1104) and refresh it (e.g., based on feedback input 1106). Refreshing the model may include utilizing the feedback (e.g., from feedback input 1106 or other feedback) in a next training iteration of the model. A next version of the model formed from the next training iteration may then be used by saving the model to model hosting service 110, e.g., where the updated model is one or any combination of: document/passage ranking model(s) 212A, question/answer model 212B, and FAQ model 212C. A model refresh (or displaying of a proper subset of the data for labeling by the user in active learning) may be triggered when a confidence value (e.g., score) of a proper subset of the data (e.g., answers and/or documents) the model returns for a search query falls below a confidence threshold.
Additionally or alternatively, a model refresh (or displaying of a proper subset of the data for labeling by the user in active learning) may be triggered in response to exceeding a confidence difference threshold for a difference between a first confidence score for a first section (e.g., a first, highest scored candidate answer or candidate document) of the proper subset of the data with respect to its relevance to the search query and a second confidence score for a second section (e.g., a second, next highest scored candidate answer or candidate document) of the proper subset of the data with respect to its relevance to the search query. A proper subset of the data (e.g., answers and/or documents) for presentation to the user (e.g., to be used for labeling by the user in active learning) may be selected based on a confidence value of the proper subset of the data the model returns for a search query. The feedback input 1106 may include click-through data (e.g., how many times a provided link is selected by a user) and/or customer annotated data (e.g., as discussed below in reference to FIGS. 13 and 14). A model may include an input of a search query for a search of ingested data (e.g., the user's documents) and an output of a best answer from a plurality of answers from the data and/or an output of a best document from a plurality of documents from the data. Active learning may be utilized to train the model, where the user is requested to indicate the desired output (e.g., answer(s) or document(s)) for an input (e.g., search query). However, instead of requiring a user to indicate which answers and/or documents are the most important for a search query (e.g., question) from the entirety (or substantially the entirety) of the data, certain embodiments herein present a proper subset of the data to the user for their indication of which answers and/or documents are the most important for a search query. Thus, these embodiments allow a user to perform the labeling (e.g., as being important enough to use for a next iteration of training) of the data without overwhelming them with uninformative document(s). In one embodiment, based on the current performance of the model, active learning is applied to suggest a specific subset of user query (or queries) and document candidates and/or answer candidates to a user (e.g., a human) for labeling. Thus, the suggested subset of document candidates and/or answer candidates provides more value towards improving the machine learning model compared to a randomly sampled set. There may be different approaches for active learning. One example is to check the difference between the confidence scores from the top candidates, e.g., and if the difference is less than a threshold, the model then needs refinement on such queries. Another example is diversity sampling in order to have larger coverage of the data. Next, a user (e.g., customer) can label the relevance of the document and answer candidates (e.g., by selecting an interface element of a graphical user interface). In certain embodiments, when a threshold amount of annotated data (e.g., 1,000 samples) is received, the machine learning model is retrained using the annotated data (e.g., and the accuracy of the retrained model is evaluated on a held-out set of data from the user's data (e.g., ingested data)). In one embodiment, if the improvement exceeds a certain threshold, the previous version of the model is replaced with the retrained model. The above can be repeated based on a pre-determined schedule.
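A minimal sketch of the two triggers described above (an absolute confidence floor and a top-two confidence margin); the threshold values are illustrative assumptions.

```python
# Sketch of deciding whether a query's results should trigger a model
# refresh or be surfaced for labeling in active learning.
def needs_active_learning(confidence_scores, floor=0.5, margin=0.1):
    if not confidence_scores:
        return True   # no candidates at all; model needs attention
    top = sorted(confidence_scores, reverse=True)
    if top[0] < floor:
        return True   # best candidate is below the confidence threshold
    if len(top) > 1 and (top[0] - top[1]) < margin:
        return True   # top two candidates are too close to separate
    return False
```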
FIG. 12 illustrates embodiments of a method for model management, for example, implemented by model management service 150. Depicted model management includes (optionally) receiving a search query 1201, (optionally) performing a search of data of a user, using a machine learning model, for the search query to generate a result 1203, and (optionally) providing a result of the search to the user 1205. Active learning 1207 includes generating a confidence score (e.g., by the machine learning model) based on the result of the search 1209, selecting a proper subset of the data based at least in part on a confidence score of the proper subset of the data 1211, displaying the proper subset of the data to the user 1213, receiving an indication from the user of one or more sections of the proper subset of the data for use in a next training iteration of the machine learning model for the search query 1215, performing the next training iteration of the machine learning model with the one or more sections of the proper subset of the data 1217, and (optionally) replacing a previous version of the machine learning model used for the search query with a next version generated from the next training iteration when an accuracy score for the next version exceeds an accuracy score for the previous version 1219. After performing active learning 1207, (optionally) another search query may be received 1221, the another search query may then be performed on data of the user using the machine learning model trained with the one or more sections of the proper subset of the data 1223, and a result of the another search may be provided to the user 1225.

FIG. 13 illustrates embodiments of a graphical user interface 1300 to be used in active learning of question and answer(s) for training a machine learning model. Depicted graphical user interface 1300 includes a field 1302 that is customizable with text to indicate that the user is to take an action (e.g., "Please select the following answer(s) that are relevant to the indicated query") and a field 1304 to be populated with the query that the user is to label the answers for relevancy. Optionally, the number of candidate answers 1306 may be indicated. Graphical user interface 1300 includes a plurality of entries 1308A-B and each entry includes a feedback input 1310A-B, respectively. Although two entries are shown, any plurality of entries may be utilized (e.g., where "X" is any positive integer). In the depicted embodiment, a query 1304 that is to have active learning performed on it is provided, as well as a plurality of candidate answers, e.g., as discussed herein. For example, the candidate answers include a passage 1312 with the answer highlighted (e.g., bold, underlined, marked as a different color, etc.) and the passage of surrounding text also included (e.g., to provide context to the user for their reading comprehension of the answer and its possible relevancy to the query). A link 1314 may also be included to the source document. As depicted, the example query 1304 is "how much does package pick-up cost?". Candidate answer 1 1308A includes a passage 1312A stating "There is no charge for this option. We will pick up your return at the address of your choice." A user may deem candidate answer 1 to be relevant (e.g., a most important answer(s)) and mark feedback input 1310A (shown as a checkbox as an example).
Candidate answer 2 1308B includes a passage 1312B stating "If you choose the Pickup option and the return is not a result of our error, you will be charged a convenience fee of $XX.XX for the Pickup." (where XX.XX is an actual number value). A user may deem candidate answer 2 to be relevant (e.g., independent of candidate answer 1 being relevant) and mark feedback input 1310B (shown as a checkbox as an example). The highlighting may be added to the result provided by the model, and the surrounding words (e.g., the sentence before and/or after the result) also provided. Feedback input 1310 may be another interface element, such as, but not limited to, a thumbs-up (or thumbs-down), checkbox, button, dropdown menu, etc. A user may click the submit interface element 1316 to cause the feedback input(s) to be sent, e.g., as feedback input 1106 in FIG. 11. Feedback input may be aggregated, e.g., to trigger a retrain of a model as discussed herein.

FIG. 14 illustrates embodiments of a graphical user interface to be used in active learning of document ranking for training a machine learning model. Depicted graphical user interface 1400 includes a field 1402 that is customizable with text to indicate that the user is to take an action (e.g., "Please select the following document(s) that are relevant to the indicated query") and a field 1404 to be populated with the query that the user is to label the documents for relevancy. Optionally, the number of candidate documents 1406 may be indicated. Graphical user interface 1400 includes a plurality of entries 1408A-B and each entry includes a feedback input 1410A-B, respectively. Although two entries are shown, any plurality of entries may be utilized (e.g., where "X" is any positive integer). In the depicted embodiment, a query 1404 that is to have active learning performed on it is provided, as well as a plurality of candidate documents, e.g., as discussed herein. For example, the candidate documents include a link 1412 to the document (e.g., hosted in storage 109). As depicted, the example query 1404 is "operating manual for widget Y?". Candidate document 1 1408A includes a link 1412A to a first document. A user may deem candidate document 1 to be relevant (e.g., a most important document(s)) and mark feedback input 1410A (shown as a checkbox as an example). Candidate document 2 1408B includes a link 1412B to a second document. A user may deem candidate document 2 to be relevant (e.g., independent of candidate document 1 being relevant) and mark feedback input 1410B (shown as a checkbox as an example). Feedback input 1410 may be another interface element, such as, but not limited to, a thumbs-up (or thumbs-down), checkbox, button, dropdown menu, etc. A user may click the submit interface element 1416 to cause the feedback input(s) to be sent, e.g., as feedback input 1106 in FIG. 11. Feedback input may be aggregated, e.g., to trigger a retrain of a model as discussed herein.

FIG. 15 illustrates embodiments of a method for active learning for training a machine learning model.
Depicted method includes performing a search of data of a user, using a machine learning model, for a search query to generate a result 1501, generating a confidence score for the result of the search 1503, selecting a proper subset of the data to be provided to the user based on the confidence score 1505, displaying the proper subset of the data to the user 1507, receiving an indication from the user of one or more sections of the proper subset of the data for use in a next training iteration of the machine learning model 1509, and performing the next training iteration of the machine learning model with the one or more sections of the proper subset of the data 1511.

In certain embodiments, the models discussed herein (e.g., document/passage ranking model(s) 212A, question/answer model 212B, and FAQ model 212C) are trained with a set of training data. Training data may include a question and a corresponding answer from a user's data (e.g., in contrast to public data or data from other enterprises). However, the generation of such pairs of questions and answers may require annotation by a human, and thus be costly in terms of time and expense and/or prone to human errors. Training data may be used by a model building system (e.g., the model building system depicted in FIG. 11). Model building system may include a training data generation service (e.g., training data generation service 1108 in FIG. 11), for example, to create training data 1110 from a user's data. Training data (e.g., training data 1110 in FIG. 11) may be utilized by a model building pipeline (e.g., model building pipeline 160 in FIGS. 1 and 11) in the creation of a model and/or used by a model management service (e.g., model management service 150 in FIGS. 1 and 11) in a refresh of a model. Certain embodiments herein remove a human from generating training data, for example, by removing the human from identifying a question for an answer and/or identifying an answer from a question. These embodiments may include training a language machine learning model to identify (e.g., generate) a set of question and answer pairs from a user's data (e.g., from their unstructured text data) without requiring human annotation or other human involvement. In one embodiment, a request to build a model (e.g., with model building pipeline 160 in FIGS. 1 and 11) causes a service (e.g., training data generation service 1108 in FIG. 11) to generate training data from a user's data. Training data may include questions and their corresponding answers, e.g., candidate questions generated by the service for probable answers in a user's data. Completed training data may then be provided to a model building pipeline for use in building a model specifically for a user's data.

FIG. 16 illustrates embodiments of a method for training and use of a question generation model. In certain embodiments, the training data to be generated is a set of one or more candidate questions from a user's data (e.g., the user's documents, passages, etc.) containing answers. Depicted method in FIG. 16 includes training of a question generation model 1601. In one embodiment, the training of question generation model 1601 includes training a (e.g., language) machine learning model with known question and answer pairs to predict a question from an answer 1603. The known question and answer pairs may be data that does not include the user's data (e.g., data not from the user), e.g., this data may be public data.
Examples of "known question and answer pairs" are within a MAchine Reading COmprehension (MARCO) dataset. The known question and answer pairs may be public data, e.g., in contrast to the user's private data (e.g., that is hosted in storage 109 in FIG. 1). One example of a language machine learning (ML) model is a transformer-based language model that predicts the next word of a string of text based on the previous words within the string of text. Certain embodiments herein modify a language ML model used to predict the next word for each successive word of a string of text to instead predict each next (e.g., successive) word of a question for a given answer, e.g., to predict each successive word of a known question (e.g., a multiple word question) from its known answer (e.g., a multiple word answer). One example language ML model is a transformer model (e.g., a GPT-2 model) that is first trained on (e.g., a very large amount of) data in an unsupervised manner using language modeling as a training signal, and is second fine-tuned on much smaller supervised datasets (e.g., known questions and their corresponding known answers) to help it solve specific tasks.

Referring to FIG. 16, training 1601 includes training a (e.g., language) machine learning model with known question and answer pairs to predict a question from an answer 1603, receiving one or more documents from a user 1605, generating a set of question and answer pairs from the one or more documents from the user using the trained machine learning model 1607, and (optionally) storing the set of question and answer pairs (e.g., in training data 1110 storage in FIG. 11) generated from the one or more documents from the user 1609. In certain embodiments, the training data (e.g., which has been generated by a machine and not by a human or using a human's annotations) is then used to train another machine learning model. For example, training a second machine learning model (e.g., document/passage ranking model(s) 212A, question/answer model 212B, and FAQ model 212C) with the set of question and answer pairs (e.g., from training data 1110 storage in FIG. 11) generated from the one or more documents of the user's, to determine one or more top ranked answers from data of the user for a search query from the user 1611. After training the second machine learning model, it may then be used, for example, after receiving a search query from the user 1613, for performing the search query on data (e.g., documents) of the user using the second machine learning model 1615, and providing a result of the search query to the user 1617, e.g., a result of one or more of a top ranked document, top ranked passage, or top ranked question. In certain embodiments, the language ML model is also trained to detect an end of question (EOQ) token. In certain embodiments, the input to a language model to generate training data (e.g., generate questions for known answers) includes a passage having the answer and the question, for example, in the format of a beginning of sequence indicator (e.g., <bos>), followed by a beginning of question indicator (e.g., <boq>), followed by the question, and then followed by an end of question indicator (e.g., <eoq>).
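A minimal sketch of that input format; the exact placement of the passage relative to the delimiters is an assumption, since the description enumerates only the indicator tokens.

```python
# Sketch of formatting a fine-tuning example for the question generation
# model: the passage containing the answer, then the delimited question.
def format_training_example(passage, question):
    return f"<bos> {passage} <boq> {question} <eoq>"

# At generation time, the model would be prompted with everything up to
# <boq> and asked to continue until it emits the <eoq> token.
def format_generation_prompt(passage):
    return f"<bos> {passage} <boq>"
```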
FIG. 17 illustrates a first set of example candidate questions generated by a question generation model trained on known question and answer pairs. In FIG. 17, example synthetic question generation 1700 includes a known question 1702 (e.g., with an end of question token), a passage with a known answer 1704 to the known question 1702, and illustrates the candidate questions (e.g., and their end of question token <eoq>) generated by the model 1706 trained to generate synthetic questions from those two inputs 1702 and 1704. The model trained to do so may thus be used on a user's data to generate specific training data based on the user's (e.g., customer's) documents, e.g., instead of only being trained on others' documents.

FIG. 18 illustrates a second set of example candidate questions generated by a question generation model trained on known question and answer pairs. In FIG. 18, example synthetic question generation 1800 includes a known question 1802 (e.g., with an end of question token), a passage with a known answer 1804 to the known question 1802, and illustrates the candidate questions (e.g., and their end of question token <eoq>) generated by the model 1806 trained to generate synthetic questions from those two inputs 1802 and 1804. The model trained to do so may thus be used on a user's data to generate specific training data based on the user's (e.g., customer's) documents, e.g., instead of only being trained on others' documents. In one embodiment, a same model is used to generate questions 1806 in FIG. 18 and questions 1706 in FIG. 17. The trained model may then be used to generate training data that is subsequently used to train a second machine learning model (e.g., document/passage ranking model(s) 212A, question/answer model 212B, and FAQ model 212C), e.g., and the trained second machine learning model used as discussed herein.

FIG. 19 illustrates embodiments of a method for training a question generation model. Depicted method includes receiving one or more documents from a user 1901, generating a set of question and answer pairs from the one or more documents from the user using a machine learning model trained to predict a question from an answer 1903, and storing the set of question and answer pairs generated from the one or more documents from the user 1905.

FIG. 20 illustrates an example provider network (or "service provider system") environment according to some embodiments. A provider network 2000 may provide resource virtualization to customers via one or more virtualization services 2010 that allow customers to purchase, rent, or otherwise obtain instances 2012 of virtualized resources, including but not limited to computation and storage resources, implemented on devices within the provider network or networks in one or more data centers. Local Internet Protocol (IP) addresses 2016 may be associated with the resource instances 2012; the local IP addresses are the internal network addresses of the resource instances 2012 on the provider network 2000. In some embodiments, the provider network 2000 may also provide public IP addresses 2014 and/or public IP address ranges (e.g., Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) addresses) that customers may obtain from the provider network 2000. Conventionally, the provider network 2000, via the virtualization services 2010, may allow a customer of the service provider (e.g., a customer that operates one or more client networks 2050A-2050C including one or more customer device(s) 2052) to dynamically associate at least some public IP addresses 2014 assigned or allocated to the customer with particular resource instances 2012 assigned to the customer.
The provider network 2000 may also allow the customer to remap a public IP address 2014, previously mapped to one virtualized computing resource instance 2012 allocated to the customer, to another virtualized computing resource instance 2012 that is also allocated to the customer. Using the virtualized computing resource instances 2012 and public IP addresses 2014 provided by the service provider, a customer of the service provider such as the operator of customer network(s) 2050A-2050C may, for example, implement customer-specific applications and present the customer's applications on an intermediate network 2040, such as the Internet. Other network entities 2020 on the intermediate network 2040 may then generate traffic to a destination public IP address 2014 published by the customer network(s) 2050A-2050C; the traffic is routed to the service provider data center, and at the data center is routed, via a network substrate, to the local IP address 2016 of the virtualized computing resource instance 2012 currently mapped to the destination public IP address 2014. Similarly, response traffic from the virtualized computing resource instance 2012 may be routed via the network substrate back onto the intermediate network 2040 to the source entity 2020. Local IP addresses, as used herein, refer to the internal or “private” network addresses, for example, of resource instances in a provider network. Local IP addresses can be within address blocks reserved by Internet Engineering Task Force (IETF) Request for Comments (RFC) 1918 and/or of an address format specified by IETF RFC 4193 and may be mutable within the provider network. Network traffic originating outside the provider network is not directly routed to local IP addresses; instead, the traffic uses public IP addresses that are mapped to the local IP addresses of the resource instances. The provider network may include networking devices or appliances that provide network address translation (NAT) or similar functionality to perform the mapping from public IP addresses to local IP addresses and vice versa. Public IP addresses are Internet mutable network addresses that are assigned to resource instances, either by the service provider or by the customer. Traffic routed to a public IP address is translated, for example via 1:1 NAT, and forwarded to the respective local IP address of a resource instance. Some public IP addresses may be assigned by the provider network infrastructure to particular resource instances; these public IP addresses may be referred to as standard public IP addresses, or simply standard IP addresses. In some embodiments, the mapping of a standard IP address to a local IP address of a resource instance is the default launch configuration for all resource instance types. At least some public IP addresses may be allocated to or obtained by customers of the provider network 2000; a customer may then assign their allocated public IP addresses to particular resource instances allocated to the customer. These public IP addresses may be referred to as customer public IP addresses, or simply customer IP addresses. Instead of being assigned by the provider network 2000 to resource instances as in the case of standard IP addresses, customer IP addresses may be assigned to resource instances by the customers, for example via an API provided by the service provider. 
Unlike standard IP addresses, customer IP addresses are allocated to customer accounts and can be remapped to other resource instances by the respective customers as necessary or desired. A customer IP address is associated with a customer's account, not a particular resource instance, and the customer controls that IP address until the customer chooses to release it. Unlike conventional static IP addresses, customer IP addresses allow the customer to mask resource instance or availability zone failures by remapping the customer's public IP addresses to any resource instance associated with the customer's account. The customer IP addresses, for example, enable a customer to engineer around problems with the customer's resource instances or software by remapping customer IP addresses to replacement resource instances. FIG. 21 is a block diagram of an example provider network that provides a storage service and a hardware virtualization service to customers, according to some embodiments. Hardware virtualization service 2120 provides multiple computation resources 2124 (e.g., VMs) to customers. The computation resources 2124 may, for example, be rented or leased to customers of the provider network 2100 (e.g., to a customer that implements customer network 2150). Each computation resource 2124 may be provided with one or more local IP addresses. Provider network 2100 may be configured to route packets from the local IP addresses of the computation resources 2124 to public Internet destinations, and from public Internet sources to the local IP addresses of computation resources 2124. Provider network 2100 may provide a customer network 2150, for example coupled to intermediate network 2140 via local network 2156, the ability to implement virtual computing systems 2192 via hardware virtualization service 2120 coupled to intermediate network 2140 and to provider network 2100. In some embodiments, hardware virtualization service 2120 may provide one or more APIs 2102, for example a web services interface, via which a customer network 2150 may access functionality provided by the hardware virtualization service 2120, for example via a console 2194 (e.g., a web-based application, standalone application, mobile application, etc.). In some embodiments, at the provider network 2100, each virtual computing system 2192 at customer network 2150 may correspond to a computation resource 2124 that is leased, rented, or otherwise provided to customer network 2150. From an instance of a virtual computing system 2192 and/or another customer device 2190 (e.g., via console 2194), the customer may access the functionality of storage service 2110, for example via one or more APIs 2102, to access data from and store data to storage resources 2118A-2118N of a virtual data store 2116 (e.g., a folder or “bucket”, a virtualized volume, a database, etc.) provided by the provider network 2100. In some embodiments, a virtualized data store gateway (not shown) may be provided at the customer network 2150 that may locally cache at least some data, for example frequently-accessed or critical data, and that may communicate with storage service 2110 via one or more communications channels to upload new or modified data from a local cache so that the primary store of data (virtualized data store 2116) is maintained. 
In some embodiments, a user, via a virtual computing system 2192 and/or on another customer device 2190, may mount and access virtual data store 2116 volumes via storage service 2110 acting as a storage virtualization service, and these volumes may appear to the user as local (virtualized) storage 2198. While not shown in FIG. 21, the virtualization service(s) may also be accessed from resource instances within the provider network 2100 via API(s) 2102. For example, a customer, appliance service provider, or other entity may access a virtualization service from within a respective virtual network on the provider network 2100 via an API 2102 to request allocation of one or more resource instances within the virtual network or within another virtual network. Illustrative Systems In some embodiments, a system that implements a portion or all of the techniques described herein may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media, such as computer system 2200 illustrated in FIG. 22. In the illustrated embodiment, computer system 2200 includes one or more processors 2210 coupled to a system memory 2220 via an input/output (I/O) interface 2230. Computer system 2200 further includes a network interface 2240 coupled to I/O interface 2230. While FIG. 22 shows computer system 2200 as a single computing device, in various embodiments a computer system 2200 may include one computing device or any number of computing devices configured to work together as a single computer system 2200. In various embodiments, computer system 2200 may be a uniprocessor system including one processor 2210, or a multiprocessor system including several processors 2210 (e.g., two, four, eight, or another suitable number). Processors 2210 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 2210 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, ARM, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 2210 may commonly, but not necessarily, implement the same ISA. System memory 2220 may store instructions and data accessible by processor(s) 2210. In various embodiments, system memory 2220 may be implemented using any suitable memory technology, such as random-access memory (RAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above are shown stored within system memory 2220 as enterprise search service code 2225 and data 2226. In one embodiment, I/O interface 2230 may be configured to coordinate I/O traffic between processor 2210, system memory 2220, and any peripheral devices in the device, including network interface 2240 or other peripheral interfaces. In some embodiments, I/O interface 2230 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 2220) into a format suitable for use by another component (e.g., processor 2210). 
In some embodiments, I/O interface 2230 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 2230 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 2230, such as an interface to system memory 2220, may be incorporated directly into processor 2210. Network interface 2240 may be configured to allow data to be exchanged between computer system 2200 and other devices 2260 attached to a network or networks 2250, such as other computer systems or devices as illustrated in FIG. 1, for example. In various embodiments, network interface 2240 may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface 2240 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks (SANs) such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, a computer system 2200 includes one or more offload cards 2270 (including one or more processors 2275, and possibly including the one or more network interfaces 2240) that are connected using an I/O interface 2230 (e.g., a bus implementing a version of the Peripheral Component Interconnect-Express (PCI-E) standard, or another interconnect such as a QuickPath interconnect (QPI) or UltraPath interconnect (UPI)). For example, in some embodiments the computer system 2200 may act as a host electronic device (e.g., operating as part of a hardware virtualization service) that hosts compute instances, and the one or more offload cards 2270 execute a virtualization manager that can manage compute instances that execute on the host electronic device. As an example, in some embodiments the offload card(s) 2270 can perform compute instance management operations such as pausing and/or un-pausing compute instances, launching and/or terminating compute instances, performing memory transfer/copying operations, etc. These management operations may, in some embodiments, be performed by the offload card(s) 2270 in coordination with a hypervisor (e.g., upon a request from a hypervisor) that is executed by the other processors 2210A-2210N of the computer system 2200. However, in some embodiments the virtualization manager implemented by the offload card(s) 2270 can accommodate requests from other entities (e.g., from compute instances themselves), and may not coordinate with (or service) any separate hypervisor. In some embodiments, system memory 2220 may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system 2200 via I/O interface 2230.
A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, double data rate (DDR) SDRAM, SRAM, etc.), read only memory (ROM), etc., that may be included in some embodiments of computer system 2200 as system memory 2220 or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2240. Various embodiments discussed or suggested herein can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general-purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems, and/or other devices capable of communicating via a network. Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of widely-available protocols, such as Transmission Control Protocol/Internet Protocol (TCP/IP), File Transfer Protocol (FTP), Universal Plug and Play (UPnP), Network File System (NFS), Common Internet File System (CIFS), Extensible Messaging and Presence Protocol (XMPP), AppleTalk, etc. The network(s) can include, for example, a local area network (LAN), a wide-area network (WAN), a virtual private network (VPN), the Internet, an intranet, an extranet, a public switched telephone network (PSTN), an infrared network, a wireless network, and any combination thereof. In embodiments utilizing a web server, the web server can run any of a variety of server or mid-tier applications, including HTTP servers, File Transfer Protocol (FTP) servers, Common Gateway Interface (CGI) servers, data servers, Java servers, business application servers, etc. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C#, or C++, or any scripting language, such as Perl, Python, PHP, or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, IBM®, etc. The database servers may be relational or non-relational (e.g., “NoSQL”), distributed or non-distributed, etc. Environments disclosed herein can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network.
In a particular set of embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and/or at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random-access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc. Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed. Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc-Read Only Memory (CD-ROM), Digital Versatile Disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. In the preceding description, various embodiments are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. 
However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) are used herein to illustrate optional operations that add additional features to some embodiments. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments. Reference numerals with suffix letters may be used to indicate that there can be one or multiple instances of the referenced entity in various embodiments, and when there are multiple instances, each does not need to be identical but may instead share some general traits or act in common ways. Further, the particular suffixes used are not meant to imply that a particular amount of the entity exists unless specifically indicated to the contrary. Thus, two entities using the same or different suffix letters may or may not have the same number of instances in various embodiments. References to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Moreover, in the various embodiments described above, unless specifically noted otherwise, disjunctive language such as the phrase “at least one of A, B, or C” is intended to be understood to mean either A, B, or C, or any combination thereof (e.g., A, B, and/or C). As such, disjunctive language is not intended to, nor should it be understood to, imply that a given embodiment requires at least one of A, at least one of B, or at least one of C to each be present. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims. 16697964 amazon technologies, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Jul 12th, 2017 12:00AM https://www.uspto.gov?id=US11316666-20220426 Generating ephemeral key pools for sending and receiving secure communications A method, system, and non-transitory computer readable medium are described for providing a sender a plurality of ephemeral keys such that a sender and receiver can exchange encrypted communications. Accordingly, a sender may retrieve information, such as a public key and a key identifier, for the first receiver from a local storage. The retrieved information may be used to generate a key-encrypting key that is used to generate a random communication encryption key. The random communication encryption key is used to encrypt a communication, while the key-encrypting key encrypts the random communication key. The encrypted communication and the encrypted random communication key are transmitted to the first receiver. 11316666 1. A method comprising: determining, by a first device and from a computing resource, that one or more encryption keys to establish secure communication with a second device are not available from the computing resource; in response to the determining, generating, by the first device, a first plurality of asymmetric key pairs comprising a first plurality of private keys and a first plurality of public keys; assigning, by the first device, a unique identifier to each pair of the first plurality of asymmetric key pairs; storing the first plurality of private keys and their assigned unique identifiers in a memory of the first device; transmitting, from the first device, the first plurality of public keys and their assigned unique identifiers to a first server; in response to the determining, generating, by the first device, a second plurality of asymmetric key pairs comprising a second plurality of private keys and a second plurality of public keys; assigning, by the first device, a unique identifier to each pair of the second plurality of asymmetric key pairs; storing the second plurality of private keys and their assigned unique identifiers in the memory; transmitting, from the first device, the second plurality of public keys and their assigned unique identifiers to the second device; receiving, by the first device and from the second device, an encrypted peer-to-peer communication, wherein the encrypted peer-to-peer communication comprises an encrypted symmetric key, a public key associated with the second device, and a first unique identifier corresponding to the public key; recovering, by the first device and using an application identifier associated with an application executing on the first device, the encrypted symmetric key and the first unique identifier from the encrypted peer-to-peer communication; retrieving, based on the first unique identifier, a private key associated with the public key; deriving a key-encrypting key, wherein the key-encrypting key is derived according to a key agreement protocol using the private key associated with the public key, the public key associated with the second device, and the application identifier; decrypting, using the key-encrypting key, the encrypted symmetric key; decrypting, using the symmetric key, the encrypted peer-to-peer communication; and deleting, from the memory and based on the decrypting the encrypted peer-to-peer communication, the private key. 2. 
The method of claim 1, further comprising: generating, by the first device, a signature for each of the public keys of the first plurality of asymmetric key pairs; encrypting, by the first device, the first plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys using a public key of the first server; and transmitting, from the first device, the first plurality of encrypted public keys, the encrypted assigned unique identifiers, and the encrypted signature for each of the public keys to the first server. 3. The method of claim 2, further comprising: encrypting, by the first device and prior to being stored in the memory, each of the first plurality of private keys and their assigned unique identifiers using a local storage key. 4. The method of claim 1, further comprising: generating, by the first device, a signature for each of the public keys of the second plurality of asymmetric key pairs; calculating, by the first device, a first encryption key, wherein the first encryption key is calculated by inputting a first set of pseudorandom bytes into a key derivation function; encrypting, by the first device, the second plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys using the first encryption key; and transmitting, from the first device, the second plurality of encrypted public keys, the encrypted assigned unique identifiers, and the encrypted signature for each of the public keys to the second device. 5. The method of claim 4, further comprising: encrypting, by the first device, the first encryption key. 6. The method of claim 5, further comprising: transmitting, from the first device, the encrypted first encryption key with the second plurality of encrypted public keys, the encrypted assigned unique identifiers, and the encrypted signature for each of the public keys. 7. The method of claim 1, further comprising: encrypting, by the first device and prior to being stored in the memory, each of the second plurality of private keys and their assigned unique identifiers using a local storage key. 8. The method of claim 1, further comprising: receiving, at the first device, a third plurality of public keys, a unique identifier for each public key in the third plurality of public keys, and a signature for each of the third plurality of public keys from the second device; validating, at the first device, the signature for each public key in the third plurality of public keys; and storing, at the first device, the third plurality of public keys and the unique identifier for each public key in the third plurality of public keys when the signatures for each public key in the third plurality of public keys are valid. 9. The method of claim 8, wherein the signature for each public key in the third plurality of public keys comprises a signature chain. 10. The method of claim 1, wherein the computing resource is one or more of a key distribution center, a secure communication platform, or a cloud service provider, the computing resource comprising one or more servers. 11. 
A computing device comprising: one or more processors; and memory comprising instructions that, when executed by the one or more processors, cause the computing device to: determine, from a computing resource, that one or more encryption keys to establish secure communication with a second device are not available from the computing resource; in response to the determining, generate a first plurality of asymmetric key pairs comprising a first plurality of private keys and a first plurality of public keys; assign each pair of the first plurality of asymmetric key pairs a unique identifier; store the first plurality of private keys and their assigned unique identifiers; transmit, to a first server, the first plurality of public keys and their assigned unique identifiers; in response to the determining, generate a second plurality of asymmetric key pairs comprising a second plurality of private keys and a second plurality of public keys; assign each pair of the second plurality of asymmetric key pairs a unique identifier; store the second plurality of private keys and their assigned unique identifiers; transmit, to the second device, the second plurality of public keys and their assigned unique identifiers; receive, from the second device, an encrypted peer-to-peer communication, wherein the encrypted peer-to-peer communication comprises an encrypted symmetric key, a public key associated with the second device, and a first unique identifier corresponding to the public key; recover, using an application identifier associated with an application executing on the first device, the encrypted symmetric key and the first unique identifier from the encrypted peer-to-peer communication; retrieve, based on the first unique identifier, a private key associated with the public key; derive a key-encrypting key, wherein the key-encrypting key is derived according to a key agreement protocol using the private key associated with the public key, the public key associated with the second device, and the application identifier; decrypt, using the key-encrypting key, the encrypted symmetric key; decrypt, using the symmetric key, the encrypted peer-to-peer communication; and delete, based on decrypting the encrypted peer-to-peer communication, the private key. 12. The computing device of claim 11, wherein the instructions, when executed by the one or more processors, further cause the computing device to: sign each public key in the first plurality of public keys prior to transmitting the first plurality of public keys and their assigned unique identifiers to the first server. 13. The computing device of claim 11, wherein the instructions, when executed by the one or more processors, further cause the computing device to: sign each public key in the second plurality of public keys prior to transmitting the second plurality of public keys and their assigned unique identifiers to the second device. 14. The computing device of claim 11, wherein the computing resource is one or more of a key distribution center, a secure communication platform, or a cloud service provider, the computing resource comprising one or more servers. 15. 
A non-transitory computer-readable medium comprising instructions that, when executed by at least one processor, perform the steps of: determining, from a computing resource, that one or more encryption keys to establish secure communication with a second device are not available from the computing resource; in response to the determining, generating a first plurality of asymmetric key pairs comprising a first plurality of private keys and a first plurality of public keys; assigning a unique identifier to each pair of the first plurality of asymmetric key pairs; storing the first plurality of private keys and their assigned unique identifiers in a memory of the first device; transmitting the first plurality of public keys and their assigned unique identifiers to a first server; in response to the determining, generating a second plurality of asymmetric key pairs comprising a second plurality of private keys and a second plurality of public keys; assigning a unique identifier to each pair of the second plurality of asymmetric key pairs; storing the second plurality of private keys and their assigned unique identifiers in the memory; transmitting the second plurality of public keys and their assigned unique identifiers to the second device; receiving, from the second device, an encrypted peer-to-peer communication, wherein the encrypted peer-to-peer communication comprises an encrypted symmetric key, a public key associated with the second device, and a first unique identifier corresponding to the public key; recovering, using an application identifier associated with an application configured to be executed by the at least one processor, the encrypted symmetric key and the first unique identifier from the encrypted peer-to-peer communication; retrieving, based on the first unique identifier, a private key associated with the public key; deriving a key-encrypting key, wherein the key-encrypting key is derived according to a key agreement protocol using the private key associated with the public key, the public key associated with the second device, and the application identifier; decrypting, using the key-encrypting key, the encrypted symmetric key; decrypting, using the symmetric key, the encrypted peer-to-peer communication; and deleting, from the memory and based on the decrypting the encrypted peer-to-peer communication, the private key. 16. The non-transitory computer-readable medium of claim 15, further comprising instructions for: generating a signature for each of the public keys of the first plurality of asymmetric key pairs; encrypting the first plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys using a public key of the first server; and transmitting the first plurality of encrypted public keys, the encrypted assigned unique identifiers, and the encrypted signature for each of the public keys to the first server. 17. The non-transitory computer-readable medium of claim 16, further comprising instructions for: encrypting, prior to being stored in the memory, each of the first plurality of private keys and their assigned unique identifiers using a local storage key. 18. 
The non-transitory computer-readable medium of claim 15, further comprising instructions for: generating a signature for each of the public keys of the second plurality of asymmetric key pairs; calculating a first encryption key, wherein the first encryption key is calculated by inputting a first set of pseudorandom bytes into a key derivation function; encrypting the second plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys using the first encryption key; and transmitting the second plurality of encrypted public keys, the encrypted assigned unique identifiers, and the encrypted signature for each of the public keys to the second device. 19. The non-transitory computer-readable medium of claim 18, further comprising instructions for: encrypting the first encryption key. 20. The non-transitory computer-readable medium of claim 19, further comprising instructions for: transmitting the encrypted first encryption key with the second plurality of encrypted public keys, the encrypted assigned unique identifiers, and the encrypted signature for each of the public keys. 21. The non-transitory computer-readable medium of claim 15, further comprising instructions for: encrypting, prior to being stored in the memory, each of the second plurality of private keys and their assigned unique identifiers using a local storage key. 22. The non-transitory computer-readable medium of claim 15, further comprising instructions for: retrieving, based on an identifier included in the encrypted peer-to-peer communication, a private key from the memory; and decrypting the encrypted peer-to-peer communication using, in part, the private key. 23. The non-transitory computer-readable medium of claim 15, wherein the computing resource is one or more of a key distribution center, a secure communication platform, or a cloud service provider, the computing resource comprising one or more servers. 23 STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT This invention was made with Government support under Contract No. 2014-14031000011 awarded by the Central Intelligence Agency. The Government has certain rights in the invention. CROSS-REFERENCE TO RELATED APPLICATIONS This application is related to U.S. application Ser. No. 15/647,569, entitled, “Sending Secure Communications Using a Local Ephemeral Key Pool,” filed concurrently herewith, and U.S. application Ser. No. 15/647,577, entitled, “Provisioning Ephemeral Key Pools for Sending and Receiving Secure Communications,” also filed concurrently herewith, the entireties of which are herein incorporated by reference. BACKGROUND OF THE INVENTION Key distribution centers allow users to upload keys to a repository so that others may access those keys to send the users encrypted messages. For example, a first user generates an asymmetric key pair and uploads the public key to a key distribution center. A second user subsequently requests the first user's public key from the key distribution center. After authenticating the second user, the key distribution center provides the second user with the first user's public key. Upon receiving the first user's public key, the second user encrypts data with the first user's public key received from the key distribution center and transmits the encrypted data to the first user. The first user receives the encrypted data and decrypts it with the first user's private key to access the data. 
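Claim 1 of the patent above makes the receiver side of this kind of flow concrete: look up a pooled ephemeral private key by its identifier, run a key agreement against the sender's public key with the application identifier as context, unwrap the symmetric key, decrypt the communication, and delete the used private key. A minimal Python sketch of that sequence, assuming X25519, HKDF-SHA256, and AES-GCM as stand-ins for the unnamed key agreement protocol, KDF, and cipher (the claim also uses the application identifier when unpacking the datagram, which is elided here; all names are illustrative):

```python
# Sketch of the receive-and-decrypt flow of claim 1. Primitive choices
# (X25519/HKDF/AES-GCM) are assumptions, not specified by the patent.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def decrypt_incoming(private_keys: dict[str, X25519PrivateKey],
                     key_id: str,
                     sender_public: X25519PublicKey,
                     app_identifier: bytes,
                     wrapped_symmetric_key: bytes,
                     ciphertext: bytes,
                     nonce_kek: bytes,
                     nonce_msg: bytes) -> bytes:
    # 1. Retrieve the ephemeral private key matching the received identifier.
    private_key = private_keys[key_id]
    # 2. Derive the key-encrypting key from the key agreement, binding in
    #    the application identifier as KDF context.
    shared = private_key.exchange(sender_public)
    kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=app_identifier).derive(shared)
    # 3. Unwrap the symmetric key, then decrypt the communication itself.
    symmetric_key = AESGCM(kek).decrypt(nonce_kek, wrapped_symmetric_key, None)
    plaintext = AESGCM(symmetric_key).decrypt(nonce_msg, ciphertext, None)
    # 4. Delete the used ephemeral private key, preserving forward secrecy.
    del private_keys[key_id]
    return plaintext
```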
As this exchange illustrates, key distribution centers play an important role in facilitating secure communications between the first user and the second user. However, key distribution centers suffer from a technological problem in that the key distribution center represents a single point of failure. For example, if the key distribution center is unavailable, the first and second users no longer have the ability to exchange encrypted communications. In another example, the key distribution center may be compromised, which could allow a malicious user to conduct a man-in-the-middle attack and eavesdrop on the communications between the first and second users. Thus, there is a need for a technical solution that provides users with the ability to exchange encrypted communications without having to obtain a key from a key distribution center. BRIEF SUMMARY OF THE INVENTION The present disclosure describes a method, system, and non-transitory computer readable medium that includes instructions for providing encrypted communications when a key distribution center and communication server are unavailable. According to one example, the present disclosure describes a method that includes transmitting a request for a user profile to a server from a first device. The first device determines whether a response to the request has been received, and, when no response has been received, retrieves information, such as an ephemeral public key and a key identifier, for the first user from a local storage. If information for the user is not present in the local storage, an error message is displayed. Next, the first device generates an encryption key that is used to encrypt a communication to the user. The encryption key may be derived from a set of pseudorandom bytes. Next, the first device may generate an ephemeral key pair. The generated ephemeral private key and the retrieved public key may be used to derive a key-encrypting key to encrypt the encryption key. Finally, the first device transmits the encrypted communication, the key identifier, the generated public key, and the encrypted encryption key to the user. Another example describes a system that includes an interface that transmits a request to a server for a user's profile information, receives a response from the server that includes the requested information, and transmits and receives encrypted communications. The system also includes a processor that generates an encryption key, encrypts a communication using the encryption key, derives a key-encrypting key, encrypts the encryption key with the key-encrypting key, and decrypts encrypted communications from other users. The system also includes a memory that stores information about other users, as well as a first plurality of private keys and identifiers and a second plurality of public keys and identifiers. The user information stored in the memory may include a username, at least one application identifier, a user-signing key, and at least one application-signing key. In other examples, the system includes a crypto accelerator to assist the processor with cryptographic functions. The system may also include a display to provide decrypted communications to a user, as well as an input/output unit to allow the user to compose and respond to the communications. In some examples, the processor may also generate a plurality of asymmetric key pairs, assign each key pair in the first plurality of asymmetric key pairs a unique identifier, and store the first plurality of private keys and identifiers in the memory.
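Returning to the sender-side method summarized above (server unreachable, receiver's ephemeral public key pulled from local storage), a hedged sketch under the same X25519/HKDF/AES-GCM assumptions; function and variable names are again illustrative:

```python
# Sketch of the offline sender flow: derive a communication key from
# pseudorandom bytes, encrypt the message, then wrap the communication key
# under a key-encrypting key derived from a fresh ephemeral pair and the
# receiver's locally stored public key. Primitive choices are assumptions.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def encrypt_outgoing(receiver_public: X25519PublicKey, key_id: str,
                     app_identifier: bytes, message: bytes):
    # Communication encryption key derived from a set of pseudorandom bytes.
    comm_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"communication-key").derive(os.urandom(32))
    nonce_msg = os.urandom(12)
    ciphertext = AESGCM(comm_key).encrypt(nonce_msg, message, None)

    # Fresh ephemeral pair; its private half plus the receiver's stored
    # public key yields the key-encrypting key.
    eph_priv = X25519PrivateKey.generate()
    kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=app_identifier).derive(eph_priv.exchange(receiver_public))
    nonce_kek = os.urandom(12)
    wrapped_comm_key = AESGCM(kek).encrypt(nonce_kek, comm_key, None)

    # Everything the receiver needs travels with the message.
    return (ciphertext, nonce_msg, key_id, eph_priv.public_key(),
            wrapped_comm_key, nonce_kek)
```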
The system's interface may also receive a plurality of public keys and identifiers from another user. In still further examples, the processor may generate an asymmetric key pair and derive the key-encrypting key from the public key of the receiver and the generated private key. According to another example, the present disclosure describes a non-transitory computer readable medium that includes instructions for transmitting a request for a user profile to a server from a first device. The instructions include determining whether a response to the request has been received, and, when no response has been received, retrieving information, such as a public key and a key identifier, for the first user from a local storage. If information for the user is not present in the local storage, the instructions display an error message. Next, the instructions generate an encryption key that is used to encrypt a communication to the user. The encryption key is derived from a set of pseudorandom bytes. Next, the instructions generate an ephemeral key pair; the generated ephemeral private key and the retrieved public key may be used to derive a key-encrypting key to encrypt the encryption key. Finally, the instructions transmit the encrypted communication, the key identifier, the generated public key, and the encrypted encryption key to the user. The disclosure also includes a method that generates a first plurality of asymmetric key pairs, assigns a unique identifier to each pair of the first plurality of asymmetric key pairs, and transmits the first plurality of public keys and their associated unique identifiers to a first server. The method also includes generating a second plurality of asymmetric key pairs, assigning a unique identifier to each pair of the second plurality of asymmetric key pairs, and transmitting the second plurality of public keys and their associated unique identifiers to a second device. In some examples, the method may generate a signature for each of the public keys of the first plurality of asymmetric keys and encrypt the first plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys before transmitting them to the server. Each of the private keys of the first plurality of asymmetric keys may be encrypted with a local storage key and stored with their unique identifiers in local storage on the device. Further examples include generating a signature for each of the public keys of the second plurality of asymmetric keys. The method then calculates an encryption key that is used to encrypt the second plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys. The encrypted second plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys are transmitted to a second user. In some examples, the method includes deriving a key-encrypting key, which encrypts the encryption key before it is transmitted to the second user with the second plurality of encrypted public keys, the encrypted unique identifiers, and the encrypted signature for each of the public keys. In additional examples, the method includes encrypting each of the private keys of the second plurality of asymmetric keys and their associated unique identifiers using a local storage key and storing them in a memory of the first device.
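The pool-building steps described above (generate pairs, assign unique identifiers, sign the public halves, keep the private halves locally) might look roughly like the following sketch; Ed25519 is an assumed choice of signature algorithm, since the disclosure requires only "a signature":

```python
# Sketch of building an ephemeral key pool: generate key pairs, assign a
# unique identifier to each, and sign each public key so receivers can
# validate provenance. Ed25519 for signatures is an assumption.
import uuid

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def generate_key_pool(identity_key: Ed25519PrivateKey, size: int):
    private_pool = {}   # key id -> private key, retained in local storage
    public_pool = []    # (key id, public bytes, signature), to be uploaded
    for _ in range(size):
        key_id = str(uuid.uuid4())          # unique identifier per pair
        priv = X25519PrivateKey.generate()
        pub_bytes = priv.public_key().public_bytes(
            Encoding.Raw, PublicFormat.Raw)
        # Sign identifier + public key so neither can be swapped in transit.
        signature = identity_key.sign(key_id.encode() + pub_bytes)
        private_pool[key_id] = priv
        public_pool.append((key_id, pub_bytes, signature))
    return private_pool, public_pool
```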
In still further examples, the method may include receiving an encrypted communication from the second device and decrypting it, in part, using a private key from either the first plurality of asymmetric key pairs or the second plurality of asymmetric key pairs. Additionally, the method may include receiving a third plurality of public keys, a unique identifier for each public key in the third plurality of public keys, and a signature for each of the third plurality of public keys from the second device. The method validates the signature for each public key in the third plurality of public keys and stores the third plurality of public keys and their unique identifiers when the signatures for each public key in the third plurality of public keys are valid. The disclosure also includes a system that includes an interface for transmitting a first plurality of public keys and their associated unique identifiers to a first server and transmitting a second plurality of public keys and their associated unique identifiers to a second device. The system may include a processor that generates a first plurality of asymmetric key pairs, assigns each pair of the first plurality of asymmetric key pairs a unique identifier, generates a second plurality of asymmetric key pairs, and assigns each pair of the second plurality of asymmetric key pairs a unique identifier. The system also includes a memory to store the first plurality of private keys and their identifiers and the second plurality of private keys and their identifiers. In additional examples, the processor may sign each public key in the first plurality of public keys prior to transmitting them to the first server. Similarly, the processor may sign each public key in the second plurality of public keys prior to transmitting them to the second device. The interface receives a third plurality of public keys, a unique identifier for each of the public keys in the third plurality of public keys, and a signature for each of the third plurality of public keys from the second device. The processor validates the signature for each public key in the third plurality of public keys and stores the third plurality of public keys and the unique identifier for each public key in the third plurality of public keys in the memory when the signatures for each public key in the third plurality of public keys are valid. According to another example, the disclosure describes a non-transitory computer readable medium that includes instructions for generating a first plurality of asymmetric key pairs, assigning a unique identifier to each pair of the first plurality of asymmetric key pairs, and transmitting the first plurality of public keys and their associated unique identifiers to a first server. The instructions may also include generating a second plurality of asymmetric key pairs, assigning a unique identifier to each pair of the second plurality of asymmetric key pairs, and transmitting the second plurality of public keys and their associated unique identifiers to a second device. In some examples, the instructions may generate a signature for each of the public keys of the first plurality of asymmetric keys and encrypt the first plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys before transmitting them to the server. Each of the private keys of the first plurality of asymmetric keys may be encrypted with a local storage key and stored with their unique identifiers in local storage on the device.
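The receiving side of that exchange, validating each (identifier, public key, signature) entry before storing the pool, could be sketched as follows, continuing the Ed25519 assumption from the previous sketch:

```python
# Counterpart to the pool-generation sketch: verify the signature on every
# received entry and store the batch only if all signatures are valid, as
# the method above requires. Ed25519 remains an illustrative choice.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def store_received_pool(sender_identity: Ed25519PublicKey,
                        entries: list[tuple[str, bytes, bytes]]
                        ) -> dict[str, bytes]:
    validated: dict[str, bytes] = {}
    for key_id, pub_bytes, signature in entries:
        try:
            # Signature covers identifier + public key, matching generation.
            sender_identity.verify(signature, key_id.encode() + pub_bytes)
        except InvalidSignature:
            # The pool is stored only when every signature is valid;
            # reject the whole batch otherwise.
            raise ValueError(f"invalid signature on key {key_id}")
        validated[key_id] = pub_bytes
    return validated
```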
Further examples include instructions for generating a signature for each of the public keys of the second plurality of asymmetric keys. The instructions then calculate an encryption key that is used to encrypt the second plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys. The encrypted second plurality of public keys, their assigned unique identifiers, and the signature for each of the public keys are transmitted to a second user. In some examples, the instructions include deriving a key-encrypting key, which encrypts the encryption key before it is transmitted to the second user with the second plurality of encrypted public keys, the encrypted unique identifiers, and the encrypted signature for each of the public keys. In additional examples, the instructions include encrypting each of the private keys of the second plurality of asymmetric keys and their associated unique identifiers using a local storage key and storing them in a memory of the device. In still further examples, the instructions may include receiving an encrypted communication from the second device and decrypting it, in part, using a private key from either the first plurality of asymmetric key pairs or the second plurality of asymmetric key pairs. Additionally, the instructions may include receiving a third plurality of public keys, a unique identifier for each public key in the third plurality of public keys, and a signature for each of the third plurality of public keys from the second device. The instructions validate the signature for each public key in the third plurality of public keys and store the third plurality of public keys and their unique identifiers when the signatures for each public key in the third plurality of public keys are valid. According to another aspect of the disclosure, a method includes receiving an encrypted communication from a second device and decrypting it to obtain a plurality of ephemeral public keys, their unique identifiers, and a signature for each public key of the plurality of ephemeral public keys. The method proceeds by validating the signature of each public key in the plurality of ephemeral public keys and storing the plurality of ephemeral public keys when the signature of each public key in the plurality of ephemeral public keys is valid. The method may include encrypting the plurality of ephemeral public keys with a local storage device key prior to storing them. Another example discloses a system that includes an interface for receiving an encrypted communication from a second device and a processor that decrypts the received encrypted communication to obtain a plurality of ephemeral public keys, their unique identifiers, and a signature for each public key of the plurality of ephemeral public keys and validates the signature of each public key in the plurality of ephemeral public keys. The system includes a memory to store the plurality of ephemeral public keys when the signatures of each public key are valid. In some examples, the processor may encrypt the plurality of ephemeral public keys with a local storage device key prior to storing them in the memory. In other examples, the interface is configured to transmit a request for a user profile of the second device to a first server.
Furthermore, the processor determines whether a response to the request has been received from the first server, retrieves a first ephemeral public key and a key identifier from the memory when no response has been received, generates a first encryption key, derives a key-encrypting key using at least the first ephemeral public key, encrypts a first communication to the second device using the first encryption key, and encrypts the first encryption key using the key-encrypting key. After the communication and the encryption key are encrypted, the interface transmits the encrypted communication, the key identifier, and the encrypted encryption key to the second device. BRIEF DESCRIPTION OF THE DRAWINGS Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings. FIG. 1 illustrates an example of an environment where secure communications are exchanged. FIG. 2 shows a client device that transmits and receives encrypted communications using the secure communication application of the current disclosure. FIG. 3 illustrates an exemplary process for generating a pool of ephemeral asymmetric key pairs. FIG. 4 shows an example of a table of ephemeral public keys generated according to one example of the disclosure. FIGS. 5A and 5B illustrate a process for transmitting an encrypted communication according to an example of the disclosure. FIG. 6 shows a method for decrypting a received encrypted communication according to one example of the disclosure. FIG. 7 shows a method for transmitting a pool of ephemeral asymmetric key pairs to a user according to another example of the disclosure. FIG. 8 illustrates a process for receiving the table of ephemeral public keys at a user device according to an example of the disclosure. FIGS. 9A and 9B show a method for transmitting a secure message using the table of ephemeral public keys according to one example. DETAILED DESCRIPTION The present disclosure can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a non-transitory computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. These implementations, or any other form that the present disclosure may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the present disclosure. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the present disclosure is provided below along with accompanying figures that illustrate the principles of the present disclosure. The present disclosure is described in connection with such embodiments, but the present disclosure is not limited to any embodiment. The scope of the present disclosure is limited only by the claims and the present disclosure encompasses numerous alternatives, modifications, and equivalents.
Numerous specific details are set forth in the following description in order to provide a thorough understanding of the present disclosure. These details are provided for the purpose of example and the present disclosure may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the present disclosure has not been described in detail so that the present disclosure is not unnecessarily obscured. FIG. 1 illustrates an embodiment of an environment in which secure communications are exchanged. Specifically, FIG. 1 shows a first client device 116 and a second client device 118 connected to secure communication platform 120, located on server 100, via network 112. Typically, secure communications are exchanged using secure communication datagrams, which encapsulate a sender's communication. The datagram also allows information such as encryption information, hardware binding information, message security controls, and decryption information—for multiple receivers (as applicable)—to securely travel with the message. The secure communication datagram also provides cross-platform support so that users may communicate regardless of their operating systems (e.g., Linux, iOS, and Windows), smart phone platforms (e.g., iPhone, Android, Windows, Blackberry, etc.), and device types (e.g., mobile smart phones, tablets, laptops, desktops, etc.). Using the techniques described herein, only intended accounts on intended devices are able to decrypt the communications. Thus, for example, the secure communication platform 120 is unable to decrypt messages. As will further be described in more detail below, using the techniques described herein, communication participants can maintain a forward secret secure communication channel, whether communicating synchronously (e.g., where all participants are online or otherwise able to communicate with platform 120) or asynchronously (e.g., where at least one participant is offline or otherwise not in communication with platform 120). As shown in FIG. 1, secure communication platform 120 may be implemented on server 100. Server 100 may include a processor 102, memory 104, user directory 106, and the secure communication platform 120. In this regard, server 100 may be a stand-alone server, a corporate server, or a server located in a server farm or cloud-computing environment. In some embodiments, the server 100 may be a cloud service provider running a virtual machine configured to provide secure communication platform 120 to an enterprise as a Software as a Service (SaaS). Processor 102 may be any conventional processor capable of interacting with memory 104, user directory 106, and secure communication platform 120. In this regard, processor 102 may include a processor, a multiprocessor, a multicore processor, or any combination thereof. Alternatively, processor 102 may be a dedicated controller, such as an Application Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA). Memory 104 stores information accessible by processor 102, including instructions and data that may be executed or otherwise used by the processor 102. 
Memory 104 may be any type of media capable of storing information accessible by the processor, including a non-transitory computer-readable medium or any other suitable medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, solid state drive, memory card, flash drive, ROM, RAM, DVD, or other optical disks, as well as other write-capable and read-only memories. Memory 104 may include short term or temporary storage as well as long term or persistent storage. According to some embodiments, memory 104 may include a storage area network (SAN) accessible by the secure communication platform 120. User directory 106 may be any database or table capable of providing directory services. For example, user directory 106 may include a corporate directory that includes employees' first and last names, usernames, email addresses, phone numbers, department information, etc. Alternatively, user directory 106 may be a database or table to maintain user information for users of secure communication platform 120. In this regard, user directory 106 may be encrypted. In some embodiments, user directory 106 may serve as a secure directory that includes a table of hashed usernames, a table of application identifiers, and a table of device identifiers for a secure communication application. Accordingly, user directory 106 may be used to share information about users, systems, networks, services, and applications. According to some embodiments, the user directory 106 may include a Lightweight Directory Access Protocol (LDAP). Although FIG. 1 illustrates processor 102, memory 104, user directory 106, and secure communication platform 120 as being located on server 100, processor 102 and memory 104 may comprise multiple processors and memories that may or may not be stored within the same physical housing. For example, memory 104 may be a hard drive or other storage media located in a server farm of a data center, such as a storage area network (SAN). Accordingly, references to a processor, a computer, or a memory will be understood to include references to a collection of processors or computers or memories that may or may not operate in parallel. Further, the user directory 106 may be located in a separate physical housing from processor 102 and memory 104. Moreover, secure communication platform 120 may be distributed across multiple servers. Secure communication platform 120 may be configured to facilitate the exchange of communications for users of a secure communication application. As used herein, “communications” and “messages” may be used interchangeably to describe a variety of telecommunications, including: text messages, chat room messages, control messages, commands, e-mails, documents, audiovisual files, Short Message Service messages (SMSes), audio calls, voice calls (i.e., VOIP), and video calls. Additionally, the content of the messages and/or communications may pertain to electronic transactions, such as credit card security, password protection, directories, and storage drive protection, video on demand security, online gaming, gambling, electronic distribution of music, videos, documents, online learning systems, databases, cloud storage and cloud environments, bank transactions, voting processes, military communications, security of medical records, communication between medically implanted devices and doctors, etc. The exchange of messages and/or communications is explained in further detail below.
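Before that detail, it may help to sketch the rough shape of the secure communication datagram introduced earlier: an encrypted payload traveling together with per-receiver decryption material and message security controls. The field names below are invented for illustration; the disclosure does not specify a wire format:

```python
# Hypothetical shape of a secure communication datagram: the ciphertext
# travels with per-receiver key material and sender-imposed security
# controls (e.g. TTL, hardware binding). All field names are illustrative.
from dataclasses import dataclass, field


@dataclass
class ReceiverKeyBlock:
    key_id: str                  # which pooled ephemeral key the receiver should use
    sender_ephemeral_pub: bytes  # sender's ephemeral public key for key agreement
    wrapped_comm_key: bytes      # communication key encrypted under the derived KEK


@dataclass
class SecureDatagram:
    ciphertext: bytes                                       # the encrypted communication
    security_controls: dict = field(default_factory=dict)   # e.g. {"ttl_seconds": 60}
    receivers: dict[str, ReceiverKeyBlock] = field(default_factory=dict)
```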
Secure communication platform 120 may provide encrypted communications that easily integrate into and secure existing systems while satisfying compliance requirements. In this regard, secure communication platform 120 may integrate with existing identity systems, such as user directory 106. Further, secure communication platform 120 may include built-in support for enterprise data retention and support systems. Secure communication platform 120 may also include database 130. Database 130 may be a relational database that stores information in a variety of tables. In this regard, database 130 may include a record for each user of platform 120 to allow users to find other users and communicate with others. Accordingly, database 130 may include a table of user names 132, a table of application identifiers 134, a pool of ephemeral keys 136, and a table of user profile information 138. User profile information may include a privacy mode set by the user and one or more privacy lists to control with whom the user may communicate. Additionally, database 130 may include a table of communications 140. That is, the secure communication platform may store communications for a predetermined time in table 140. For example, when a communication is received, the secure communication platform may store the communication in the table of communications 140 and provide an alert, such as a push notification, to the receiver. Accordingly, a receiver may access the secure communication platform to obtain his or her communications stored in table 140. In preferred embodiments, table 140 may store communications for 30 days; however, this may be adjusted, as needed, based on industry standards and/or to comply with regulatory schemes. While a database is shown in FIG. 1, other techniques can be used to store the information used by platform 120 to facilitate the exchange of encrypted communications. For example, the table of communications may be stored in a separate storage, such as memory 104, instead of being stored within database 130. Alternatively, the information contained in the database 130 may be divided between database 130 and user directory 106. In this regard, database 130 and user directory 106 may interface to exchange information. Further, additional information can be securely stored on platform 120, whether in database 130 or another appropriate location. Secure communication platform 120 may include one or more interface(s) 122 for communicating with the first client device 116 and the second client device 118. As one example, platform 120 may provide an application programming interface (API) configured to communicate with applications installed on client devices. Platform 120 may also provide other types of interfaces, such as a web interface or stand-alone software programs for desktops and laptops running on various Operating Systems (OSes). The web interface may allow users of client devices to exchange communications securely (whether with one another or other users), without the need for a separately installed communication application. The stand-alone software program may allow users to exchange secure communications via software that is downloaded by each user. According to some embodiments, platform 120 may make a master clock time available via the one or more interface(s) 122. The master clock time may be used by client applications to enforce secure time-to-live (TTL) values of messages. 
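As an illustration of the master clock and retention behavior described above, the sketch below shows how a client might check a sender's TTL against the platform-supplied master clock, and how the platform might sweep table 140 after the retention window. This is a minimal sketch: the record fields, the 30-day constant, and the function names are assumptions rather than details from the platform. What these TTL values enforce is described next.

```python
from dataclasses import dataclass

RETENTION_SECONDS = 30 * 24 * 3600  # illustrative default matching the 30-day window

@dataclass
class StoredCommunication:
    message_id: str
    stored_at: float      # platform master clock time when the message was stored
    ttl_seconds: float    # sender-specified time-to-live

def is_expired(record: StoredCommunication, master_clock_time: float) -> bool:
    # Clients compare against the platform's master clock rather than the local
    # device clock, so a skewed or tampered device clock cannot extend access.
    return master_clock_time - record.stored_at >= record.ttl_seconds

def sweep_retention(table_140: list[StoredCommunication],
                    master_clock_time: float) -> list[StoredCommunication]:
    # The platform drops communications older than the retention window.
    return [r for r in table_140 if master_clock_time - r.stored_at < RETENTION_SECONDS]
```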
The TTL values can be used to enforce (e.g., on behalf of a message sender) time constraints on communication access (e.g., by a receiver). Users of client devices, such as client devices 116 and 118, may communicate securely with one another using the techniques described herein. For example, the first client device 116 and the second client device 118 may make use of the secure communication platform 120 and the techniques described herein via secure communication applications 146 and 148, respectively. As shown in FIG. 1, client devices may be mobile devices, such as laptops, smart phones, or tablets, or computing devices, such as desktop computers or servers. As noted above, the secure communication application described herein allows cross-platform communications, thereby allowing users of various devices to communicate seamlessly. Further, each user may have different instances of the communication application across multiple devices. That is, the user of device 116 may be able to receive messages on both device 116 as well as on any other devices that the user may have that include a copy of the secure communication application, such as a laptop. In some embodiments, client devices 116 and 118 may be the users' personal devices (i.e., a bring-your-own-device (BYOD) scenario). Alternatively, client devices may include other types of devices, such as game consoles, camera/video recorders, video players (e.g., incorporating DVD, Blu-ray, Red Laser, Optical, and/or streaming technologies), smart TVs, and other network-connected appliances, as applicable. Communications between users of client devices 116 and 118 may be exchanged via network 112. Network 112 may include various configurations and use various protocols including the Internet, World Wide Web, intranets, virtual private networks, local Ethernet networks, private networks using communication protocols proprietary to one or more companies, cellular and wireless networks (e.g., WiFi), instant messaging, HTTP and SMTP, and various combinations of the foregoing. As will be described in greater detail below, processor 102 may perform a plurality of tasks on behalf of secure communication platform 120. Furthermore, whenever platform 120 is described as performing a task, either a single component or a subset of components or all components of platform 120 or enterprise server 100 may cooperate to perform the task. For example, platform 120 may designate one of the keys in a pool of ECDH public keys received from a user of a device as a “reserve” key. Another task performed by platform 120 may include facilitating the addition of new keys to a user's pool of public keys as they are used. Yet another task performed by platform 120 may include dynamically adjusting the size of a user's pool of public keys as needed. To make use of the secure communication platform described above, users must download and install the secure communication application on their client device. FIG. 2A illustrates an exemplary client device 200 that may access the secure communication platform 120 via a secure communication application. In this regard, client device 200 includes a processor 202, a memory 204, a display 206, an I/O unit 208, a cryptographic (“crypto”) accelerator 212, and a network interface 214, all interconnected by bus 216. Processor 202 may be any processor capable of interacting with the components of client device 200. 
For example, processor 202 may include a processor, a multiprocessor, a multicore processor, or a dedicated controller, such as an ARM processor, an ASIC, or an FPGA, or any combination thereof. According to some examples, processor 202 may be configured to generate a first plurality of asymmetric key pairs, assign each pair of the first plurality of asymmetric key pairs a unique identifier, generate a second plurality of asymmetric key pairs, and assign each pair of the second plurality of asymmetric key pairs a unique identifier. Processor 202 may also sign each public key in the first and second plurality of public keys before they are transmitted. In other examples, processor 202 may be configured to decrypt an encrypted communication received from a second device to obtain a plurality of ephemeral public keys, their unique identifiers, and a signature for each public key of the plurality of ephemeral public keys and validate the signature of each public key in the plurality of ephemeral public keys. Processor 202 may also be configured to encrypt the plurality of ephemeral public keys with a local storage device key. In still other examples, processor 202 may be configured to determine whether a response to a request has been received from a first server; retrieve a first ephemeral public key and a key identifier from a memory when no response has been received from the first server; generate a first encryption key; derive a key-encrypting key using at least the first ephemeral public key; encrypt a first communication to a second device using the first encryption key; and encrypt the first encryption key using the key-encrypting key. Memory 204 may store information accessible by processor 202, including instructions and data that may be executed or otherwise used by the processor 202 and/or crypto accelerator 212. For example, memory 204 may store instructions, such as application 224. In preferred embodiments, application 224 may be a secure communication application that provides users with the ability to participate in voice and video calls, share encrypted content, and exchange encrypted communications. Encrypted communications may include direct communications (e.g., one-to-one communications between a sender and receiver), group chats, or secure chat room communications. Data stored by memory 204 may include database 234. Database 234 may be encrypted via an encryption algorithm, such as Advanced Encryption Standard (AES), and a 256-bit key, referred to hereinafter as a local storage key. In some embodiments, database 234 may store information related to secure communication application 224. For example, database 234 may index information related to the secure communication application, such as key information (e.g., a user signing key, an application signing key, etc.), user information (e.g., username, application identifier, etc.), friend information, and communications. In this regard, a record of each communication transmitted and received by the secure communication application, including a message identifier, a hash of the sender's username, a hash of the sender's application identifier, a hash of the receiver's username, a hash of the receiver's application identifier, the communication encryption key, and a timestamp, may be stored in database 234. 
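The local record layout described for database 234 might look roughly like the following sketch, assuming SHA-256 for the hashed identifiers; the field names are illustrative only, not taken from the patent.

```python
import hashlib
import time

def sha256_hex(value: str) -> str:
    # Identifiers are stored hashed, as described for database 234.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def make_local_record(message_id: str, sender: str, sender_app_id: str,
                      receiver: str, receiver_app_id: str, comm_key: bytes) -> dict:
    # The resulting record (and the database as a whole) would additionally be
    # encrypted under the 256-bit local storage key before being written out.
    return {
        "message_id": message_id,
        "sender_username_hash": sha256_hex(sender),
        "sender_app_id_hash": sha256_hex(sender_app_id),
        "receiver_username_hash": sha256_hex(receiver),
        "receiver_app_id_hash": sha256_hex(receiver_app_id),
        "communication_encryption_key": comm_key,
        "timestamp": time.time(),
    }
```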
Memory 204 may also store a plurality of ephemeral keys received from a second user that would allow the first and second users to exchange encrypted communications if secure communication platform 120 were unavailable or non-responsive. Accordingly, memory 204 may be any type of media capable of storing the information above, including a non-transitory computer-readable medium or any other suitable medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, solid state drive, memory card, flash drive, ROM, RAM, DVD, or other optical disks, as well as other write-capable and read-only memories. Further, memory 204 may include short term or temporary storage as well as long term or persistent storage. Display 206 may be any electronic device capable of visually presenting information. In mobile devices, such as smart phones and tablets, display 206 may be a touchscreen display. Accordingly, display 206 may be integrated with I/O unit 208 to detect user inputs, as well as output data. In computing devices, display 206 may be an output, such as a VGA, DVI, or HDMI output, configured to connect to a monitor. In operation, display 206 may be configured to provide the decrypted communications to a second user or display an error message when receiver information is unobtainable, either from secure communication platform 120 or locally on the sending device. I/O unit 208 may be capable of receiving input from a user. As noted above, the I/O unit 208 may work with touchscreen displays to receive input from a user. Alternatively, the I/O unit may be an interface capable of interacting with input and output devices, such as keyboards, mice, monitors, printers, etc. In operation, the input/output unit may be configured to allow a user to compose a communication before it is encrypted and transmitted to a receiver. Additionally, the I/O unit 208 may include at least one accelerometer, a Global Positioning Satellite (GPS) system, a magnetometer, a proximity sensor, an ambient light sensor, a moisture sensor, a gyroscope, etc. to determine the orientation of the device, as well as environmental factors. Crypto accelerator 212 may be dedicated hardware, software, or any combination thereof that is capable of performing cryptographic operations, such as key generation, random number generation, encryption/decryption, signature generation, signature verification, etc. In preferred embodiments, crypto accelerator 212 is a dedicated processor configured to perform cryptographic operations on behalf of processor 202. In this regard, application 224 may make use of crypto accelerator 212 to provide the secure communication functions described in greater detail below. Network interface 214 may be dedicated hardware, software, or any combination thereof that is capable of connecting client device 200 to network 112. In this regard, network interface 214 may include various configurations and use various communication protocols including Ethernet, TCP/IP, ATM, cellular and wireless communication protocols (e.g., 802.11, LTE), instant messaging, HTTP and SMTP, and various combinations of the foregoing. Network interface 214 may be configured to transmit a first plurality of public keys and their associated unique identifiers to a first server and transmit a second plurality of public keys and their associated unique identifiers to a second device. 
In other examples, interface 214 may be configured to transmit a request to a first server for a first user's profile information, receive a response from the first server that includes the first user's profile information, transmit a first encrypted communication to the first user, and receive a second encrypted communication from the first user. After installing the secure communication application, a user must enroll themselves and their first device with the secure communication platform. User enrollment includes generating a unique username. In this regard, a username may be negotiated with secure communication platform 120 to ensure that every user has a unique username. In alternative examples, the user's username may be an identifier assigned by a third party, such as a system administrator (e.g., a corporate, enterprise, or government login). In other examples, the username may be a random identifier assigned to the user. The random identifier may be generated by the secure communication application and confirmed by the secure communication platform. Alternatively, the random identifier may be assigned to the user by the secure communication platform. Once a username has been selected, the secure communication application generates a first asymmetric key pair using an asymmetric derivation function. In preferred embodiments, the first asymmetric key pair is generated according to elliptic curve cryptography (ECC) using a first P-521 curve. Next, the secure communication application generates a first symmetric key that is used to encrypt account-level backups of the secure communication application. For example, the first symmetric key may be used to encrypt account information using any symmetric encryption algorithm, such as AES-GCM, and store the encrypted account information on the secure communication platform. Next, the secure communication application generates a second symmetric key to encrypt data stored on the user's device according to any symmetric encryption algorithm, preferably AES-GCM. Finally, the secure communication application generates a third symmetric key to encrypt user information, including a plurality of keys that are used to identify the user, that is uploaded to and stored on the secure communication platform. User enrollment is completed when the secure communication application transmits the first public key and the username to secure communication platform 120, which creates a new entry for the user in database 130. After completing user enrollment, the user must enroll their device with the secure communication platform 120. Device enrollment occurs any time a user logs in to the secure communication application on a new device, including on the first device after user enrollment occurs. Device enrollment begins with the secure communication application generating a second asymmetric key pair. In preferred embodiments, the second asymmetric key pair is generated according to ECC using a second P-521 curve. The second asymmetric key pair is unique to the instance of the secure communication application. In this regard, if the user has the secure communication application installed on several devices, each device will have its own unique second asymmetric key pair, while the first asymmetric key pair will be the same for each instantiation of the secure communication application. Next, the secure communication application derives a local storage device key. 
The local storage device key protects data stored locally on the user's device via symmetric encryption. In some examples, the local storage device key is generated by combining the second symmetric key and device data through a key derivation function. In preferred embodiments, the key derivation function is an HMAC key derivation function with SHA-256 as the underlying hash function. In subsequent installations, the secure communication application obtains the second symmetric key from the secure communication platform. Device data includes device-specific data and/or identifiers derived from installed hardware or operating system sources that are unique and constant across application installs. For example, device data may include hard drive identifiers, motherboard identifiers, CPU identifiers, MAC addresses for wireless, LAN, Bluetooth, or optical cards, configuration information, or a combination of the foregoing. Next, the secure communication application generates an application identifier. The application identifier is a random identifier that is generated by hashing a set of pseudorandom bytes using SHA-256. The application identifier is used by the secure communication platform to identify the secure communication application and the device with which it is associated. Subsequently, the secure communication application generates a first signature of the second public key using the first private key. In preferred embodiments, the secure communication application generates the signature according to the Elliptic Curve Digital Signature Algorithm (ECDSA). Finally, the application identifier, the second public key, and the first signature of the second public key are transmitted to the server. The secure communication platform stores this information within the user's profile on the secure communication platform. After both user and device enrollment have been completed, each instance of the secure communication application creates a pool of asymmetric key pairs. These key pairs are used as part of a key agreement protocol and enable the secure communication application to begin receiving encrypted communications. As the secure communication application begins receiving encrypted communications, the pool of asymmetric key pairs will become depleted and need to be replenished. FIG. 3 shows a method 300 for generating the pool of ephemeral asymmetric key pairs. As used herein, ephemeral asymmetric key pairs, ephemeral public keys, and ephemeral private keys mean a short-term or single-use key or key pair. In block 310, the secure communication application on a first device generates a pool of ephemeral, asymmetric key pairs. In preferred embodiments, the ephemeral asymmetric key pairs are generated according to ECC using a third P-521 curve. In block 320, a unique identifier is assigned to each key pair. Next, in block 330, the secure communication application calculates a signature for each of the ephemeral public keys using the second private key associated with the user's secure communication application. The signature for each ephemeral public key may be generated according to any standard signature generation algorithm, including ECDSA. In block 340, each of the ephemeral public keys, along with its unique identifier and corresponding signature, is uploaded to the server. Accordingly, the server stores the pool of ephemeral public keys in the user's profile on the secure communication platform. 
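Pulling the enrollment and pool-generation steps above together, here is a minimal sketch using the `cryptography` package. The HKDF info string, the concatenation of the second symmetric key with the device data, the SHA-512 digest for the ECDSA signatures, and the pool size are all assumptions for illustration; the text above specifies only the curve (P-521), HMAC-based key derivation with SHA-256, SHA-256 for the application identifier, and ECDSA signatures.

```python
import hashlib
import os
import uuid

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def der(public_key) -> bytes:
    return public_key.public_bytes(serialization.Encoding.DER,
                                   serialization.PublicFormat.SubjectPublicKeyInfo)

# User enrollment: the account-level (first) P-521 key pair and a symmetric key.
first_private_key = ec.generate_private_key(ec.SECP521R1())
second_symmetric_key = os.urandom(32)   # protects data stored on the device

# Device enrollment: a per-install (second) P-521 key pair.
second_private_key = ec.generate_private_key(ec.SECP521R1())

def derive_local_storage_device_key(second_symmetric_key: bytes,
                                    device_data: bytes) -> bytes:
    # HMAC-based key derivation (HKDF) with SHA-256, combining the second
    # symmetric key with device identifiers that are constant across installs.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"local-storage-device-key").derive(second_symmetric_key + device_data)

# Application identifier: SHA-256 over a set of pseudorandom bytes.
application_identifier = hashlib.sha256(os.urandom(32)).hexdigest()

# First signature: the second public key signed with the first private key (ECDSA).
first_signature = first_private_key.sign(der(second_private_key.public_key()),
                                         ec.ECDSA(hashes.SHA512()))

def generate_ephemeral_pool(second_private_key, pool_size: int = 100):
    # Method 300: generate the pool (310), assign identifiers (320), sign each
    # public key with the second private key (330), and upload the rows (340).
    upload_rows, private_keys = [], {}
    for _ in range(pool_size):
        priv = ec.generate_private_key(ec.SECP521R1())
        key_id = str(uuid.uuid4())
        public_bytes = der(priv.public_key())
        signature = second_private_key.sign(public_bytes, ec.ECDSA(hashes.SHA512()))
        upload_rows.append({"key_id": key_id,
                            "public_key": public_bytes,
                            "signature": signature})
        private_keys[key_id] = priv   # kept locally, encrypted, as described next
    return upload_rows, private_keys
```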
The corresponding pool of ephemeral private keys is encrypted with the local storage device key and stored securely, along with the assigned unique identifiers, on the user's device. As noted above, the process 300 is initially performed after the user's first user enrollment and device enrollment. The process 300 may be subsequently repeated for each new device enrollment. Additionally, process 300 may be performed to send a pool of ephemeral public keys to an individual receiver to allow the sender and receiver to communicate using peer-to-peer (P2P) techniques. Alternatively, the pool of ephemeral public keys may be used as a reserve pool of ephemeral public keys when the sender is unable to obtain an ephemeral public key from the secure communication platform. The method shown in FIG. 3 may be repeated as necessary as the pool of asymmetric keys becomes depleted, since each public key is deleted from the pool of available keys each time it is used in an encrypted communication exchange. Removing the public key from the pool of available public keys may occur on the sender's device, when the key is selected from a plurality of keys maintained on the sender's device, or on the secure communication platform after the secure communication platform provides the ephemeral public key to a sender. FIG. 4 shows an example of a plurality of ephemeral public keys generated according to the process 300. The table 400 includes three columns. Column 410 is the unique identifier assigned to each ephemeral key pair, column 420 includes the ephemeral public keys, and column 430 illustrates the signature of each ephemeral public key generated using the second private key. As noted above, the plurality of ephemeral public keys may be transmitted to the secure communication platform for distribution to other users. Alternatively, the plurality of ephemeral public keys may be transmitted to another user for direct communications between the first and second users. According to these examples, the plurality of ephemeral public keys may enable the first and second users to communicate using the encryption techniques described herein in a peer-to-peer protocol. In these examples, the pool of ephemeral public keys maintained by the secure communication platform may be used for an initial encrypted transmission and then as a reserve set of ephemeral public keys. The secure communication provided by the secure communication platform can be best understood as providing device-to-device communication rather than user-to-user communication. As discussed above, a single user may have the secure communication application executing on multiple associated devices. For the purposes of transmitting a communication, each instance of the secure communication application could be considered a device. For example, a first user with two devices who sends a message to a second user with three devices is sending an encrypted message to four devices: the three devices associated with the second user and the first user's second device. FIGS. 5A and 5B illustrate a process 500 for transmitting an encrypted communication per this principle. In block 505, a first device's secure communication application retrieves one or more receiving users' profile information from the secure communication platform 120. In this regard, the first device's secure communication application may request the receiving users' profile information from the secure communication platform 120. 
This may occur, for example, when the user of the first device begins composing the communication. The user profile information includes the user's username, a list of the user's devices, the second public key for each device, and the signature of the second public key for each receiving device. Next, the first device's secure communication application builds a list of receiving devices based on a union of the receiver devices and the sender's devices in block 510. In block 515, the first device's secure communication application retrieves a signed ephemeral public key and its associated unique identifier. In examples where the first and second devices are communicating P2P, the first device's secure communication application retrieves the signed ephemeral public key and its associated unique identifier from local storage on the first device. In other examples, such as the first time the sender and receiver communicate, the first device's secure communication application may retrieve the signed ephemeral public key and its associated unique identifier for each of the receiving devices from the secure communication platform 120. As discussed in greater detail below, the initial communication may include a plurality of ephemeral public keys, their associated identifiers, and a signature of each of the ephemeral public keys that allow P2P communications between the sender and receiver. Subsequent communications may use the plurality of ephemeral public keys transmitted in the initial communication. These subsequent communications may include replenishments to the plurality of ephemeral public keys. According to some embodiments, the signed ephemeral public key and the associated unique identifier may be obtained along with the receiving users' profile information. In block 520, the first device's secure communication application validates the signature chain for each ephemeral public key received from the secure communication platform. In this regard, the signature of the ephemeral public key is authenticated according to a signature verification algorithm, such as ECDSA, using the second public key; the signature of the second public key is verified using the first public key; and the username is confirmed to correspond to an expected user identity. If the signature chain is invalid, the secure communication application may request the one or more receiving users' profile information from the secure communication platform. Alternatively, the secure communication application may discard the communication and refuse to communicate with the one or more receiving devices with the invalid signature chain. If the signature chain is valid, then the secure communication application continues preparing the communication to send to the one or more receiver devices. In block 525, the first device generates a random communication encryption key. In preferred embodiments, the random communication encryption key is a 256-bit key derived from a first set of pseudorandom bytes. Alternatively, the random communication encryption key may be generated by applying a key derivation function (e.g., HKDF) to the first set of pseudorandom bytes derived from a sending client's device. The first set of pseudorandom bytes may be derived from ephemeral environmental noise obtained from device drivers and other kernel operations. 
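A sketch of the block 520 signature-chain check, under the same `cryptography`-package assumptions as above (DER encodings, ECDSA over SHA-512): the ephemeral key must verify under the second (application) public key, and the second public key must verify under the first (account) public key. The entropy sources for the random communication encryption key are elaborated in the next paragraph.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def validate_signature_chain(first_public_key, second_public_key,
                             second_key_signature: bytes,
                             ephemeral_public_bytes: bytes,
                             ephemeral_signature: bytes) -> bool:
    second_public_bytes = second_public_key.public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
    try:
        # The ephemeral public key must be signed by the device's application key...
        second_public_key.verify(ephemeral_signature, ephemeral_public_bytes,
                                 ec.ECDSA(hashes.SHA512()))
        # ...and the application key must be signed by the user's account key.
        first_public_key.verify(second_key_signature, second_public_bytes,
                                ec.ECDSA(hashes.SHA512()))
    except InvalidSignature:
        # Caller may re-request the profile or refuse to communicate (block 520).
        return False
    return True
```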
For example, data from the various sensors (e.g., the at least one accelerometer, Global Positioning Satellite (GPS) system, magnetometer, proximity sensor, ambient light sensor, moisture sensor, and gyroscope) may be used as the first set of pseudorandom bytes. In block 530, the first device's secure communication application generates an ephemeral key pair. In block 535, the first device's secure communication application calculates a key-encrypting key (KEK) for each receiving device. The key-encrypting key is calculated by deriving a shared secret using the ephemeral private key the sending secure communication application generated and the receiving device's ephemeral public key received from the secure communication platform. In preferred embodiments, the shared secret is derived according to Diffie-Hellman. The shared secret and the receiving device's application identifier are inputted into a key derivation function to derive a key-encrypting key. By encrypting the random communication encryption key with the key-encrypting key, the encrypted communication is effectively bound to the receiver's secure communication application. This improves security by allowing only the receiving device to access the communication. That is, a receiver would not be able to transfer the communication from one device to another and still be able to decrypt the message since the keys used to generate the key-encrypting key are unique to the specific installation of the secure communication application. Block 535 is repeated for each of the one or more receivers' devices. After calculating the key-encrypting key for each of the one or more receivers' devices, the first device's secure communication application encrypts the communication using the random communication encryption key in block 540. In preferred examples, the communication is encrypted via a symmetric encryption algorithm using the random communication encryption key. In block 545, the communication encryption key is encrypted using the derived KEK for each of the receiving devices. After the random communication encryption key has been encrypted with the KEK derived for each receiving device, process 500 proceeds to block 550, where the first device's secure communication application creates a serialized packet that includes the encrypted communication, the ephemeral public key that the first device's secure communication application generated in block 530, the one or more unique identifiers for the receiver's public key received from the secure communication platform, and the one or more encrypted communication encryption keys. In block 555, the first device's secure communication application transmits the serialized packet to the secure communication platform for distribution to the one or more receiving devices. In this way, the secure communication platform receives a single packet and distributes the single packet to the one or more receiving devices. The secure communication platform provides each of the one or more receiving devices with an alert, such as a push notification, that they have received a new communication. The secure communication applications contact the secure communication platform and download the new communication to their devices. FIG. 6 illustrates a method 600 for receiving and decrypting an encrypted communication on a receiving device. In block 610, the first device (e.g., the receiving device) receives a serialized packet from a second device (e.g., the sending device). 
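Blocks 525 through 555 might be sketched as follows, using ECDH for the shared secret, HKDF (with the receiver's application identifier as the info input) for the KEK, and AES-GCM for both the message and the wrapped key. The nonce handling and packet layout are illustrative assumptions, not the patent's wire format; the receiving side of this packet (method 600) is sketched after the next paragraph.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def send_to_devices(plaintext: bytes, receiving_devices: list[dict]) -> dict:
    comm_key = AESGCM.generate_key(bit_length=256)              # block 525
    sender_ephemeral = ec.generate_private_key(ec.SECP521R1())  # block 530
    message_nonce = os.urandom(12)
    encrypted_comm = AESGCM(comm_key).encrypt(message_nonce, plaintext, None)  # block 540

    recipients = []
    for device in receiving_devices:                            # block 535, per device
        shared = sender_ephemeral.exchange(ec.ECDH(), device["ephemeral_public_key"])
        kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=device["application_identifier"].encode()).derive(shared)
        key_nonce = os.urandom(12)
        wrapped_key = AESGCM(kek).encrypt(key_nonce, comm_key, None)  # block 545
        recipients.append({"app_id": device["application_identifier"],
                           "key_id": device["key_id"],
                           "key_nonce": key_nonce,
                           "encrypted_comm_key": wrapped_key})

    return {  # the serialized packet of block 550; a single upload to the
              # platform serves all receiving devices (block 555)
        "sender_ephemeral_public_key": sender_ephemeral.public_key(),
        "message_nonce": message_nonce,
        "encrypted_communication": encrypted_comm,
        "recipients": recipients,
    }
```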
Receiving the serialized packet includes retrieving the serialized packet from the secure communication platform in response to receiving an alert or notification. Additionally, the first device is responsible for identifying the appropriate key material to decrypt the communication content. If this is the first time the sending device and the receiving device have communicated, the first device may obtain information about the second device from the secure communication platform, such as the application identifier of the sending device, the username, and user profile information of the sending device. The first device may store this information in database 234 for subsequent communication exchanges. After obtaining the communication and information about the sender, the secure communication application on the first device uses its application identifier to retrieve the encrypted communication key and the unique identifier of the first device's ephemeral key pair from the received serialized packet in block 620. In block 630, the first device's secure communication application uses the unique identifier to identify and retrieve, from local storage, the ephemeral private key that corresponds to the ephemeral public key used by the second device to derive the KEK. According to some examples, the first device's secure communication application may decrypt the ephemeral private key retrieved from local storage using the first device's local storage device key. Next, the secure communication application on the first device calculates the key-encrypting key in block 640. Specifically, the first device calculates a shared secret using the first device's ephemeral private key and the second device's ephemeral public key. The shared secret and the first device's application identifier are inputted to a key derivation function to generate the key-encrypting key. In block 650, the first device's secure communication application decrypts the encrypted communication encryption key. In block 660, the decrypted communication encryption key is used to decrypt the message. In block 670, the first device's secure communication application provides the decrypted message to the user. In block 680, the communication may be encrypted with the first device's local storage device key and stored in a local storage on the first device. The above-described examples describe using the secure communication platform to provide a sender with a receiver's ephemeral public key. However, the first and second devices may prefer to exchange communications via a peer-to-peer exchange without having to obtain a key from the secure communication platform. Alternatively, circumstances may render the secure communication platform unavailable; for instance, the secure communication platform 120 may be down, or the device may be in a location where it cannot access the secure communication platform 120. In order to facilitate the encryption techniques described herein, a first device may provide a second device with a plurality of ephemeral public keys such that the second device would be able to exchange encrypted communications directly with the first device without having to request an ephemeral key from the secure communication platform. FIG. 7 illustrates a process 700 for providing a second device with a plurality of ephemeral public keys. The process 700 begins, in block 710, with the first device's secure communication application generating a second plurality of ephemeral, asymmetric key pairs. 
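Method 600 mirrors the sender's derivation. Here is a sketch under the same assumptions as the sending sketch above (the packet layout is hypothetical); process 700, continued next, reuses these same primitives.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def receive_packet(packet: dict, my_app_id: str, local_private_keys: dict) -> bytes:
    # Block 620: locate the entry addressed to this application identifier.
    entry = next(r for r in packet["recipients"] if r["app_id"] == my_app_id)
    # Block 630: the unique identifier selects our ephemeral private key,
    # which is stored encrypted under the local storage device key.
    ephemeral_private = local_private_keys[entry["key_id"]]
    # Block 640: recompute the shared secret and the key-encrypting key.
    shared = ephemeral_private.exchange(ec.ECDH(),
                                        packet["sender_ephemeral_public_key"])
    kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=my_app_id.encode()).derive(shared)
    # Blocks 650-660: unwrap the communication key, then decrypt the message.
    comm_key = AESGCM(kek).decrypt(entry["key_nonce"], entry["encrypted_comm_key"], None)
    return AESGCM(comm_key).decrypt(packet["message_nonce"],
                                    packet["encrypted_communication"], None)
```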
In preferred embodiments, the ephemeral asymmetric key pairs are generated according to ECC using a fourth P-521 curve. In block 720, a unique identifier is assigned to each key pair. Next, in block 730, the secure communication application calculates a signature for each of the ephemeral public keys using the second private key associated with the device's secure communication application. In block 740, each of the ephemeral public keys, along with its unique identifier and corresponding signature, is encrypted using a random communication encryption key. In block 750, the encrypted second plurality of ephemeral public keys, their assigned unique identifiers, and the signature of each public key of the second plurality of ephemeral public keys are transmitted to the second device. That is, the encrypted second plurality of ephemeral public keys, their assigned unique identifiers, and the signature of each public key of the second plurality of ephemeral public keys are encrypted and transmitted to the second device using the techniques discussed above with respect to FIGS. 5A and 5B. In block 760, the second plurality of ephemeral private keys and their assigned unique identifiers are encrypted with the first device's local storage device key and stored locally on the first device. Process 700 may be repeated by each of the sender's secure communication applications such that the receiver has a second plurality of ephemeral public keys for each of the sender's instantiations of the secure communication application. FIG. 8 illustrates an exemplary process 800 for receiving a second plurality of ephemeral public keys from a first device. In block 810, the first device's secure communication application receives an encrypted communication from the second device. In block 820, the encrypted communication is decrypted to retrieve the second plurality of ephemeral public keys, their assigned unique identifiers, and the signature of each public key of the second plurality of ephemeral public keys. Preferably, the secure communication application employs the processes described above with respect to FIG. 6 to decrypt the received encrypted communication. In block 830, the signature chain for each of the public keys in the second plurality of ephemeral public keys is validated. As discussed above, the second public key is used to validate the signature of each of the public keys in the second plurality of public keys, and the first public key is used to validate the signature of the second public key. If the signature chain is invalid, process 800 proceeds to block 840, where the second plurality of ephemeral public keys, their associated identifiers, and corresponding signatures are discarded. In block 850, the second device's secure communication application may be notified that the second plurality of ephemeral public keys was invalid. In some examples, block 850 may be omitted as a security measure in the event that the second plurality of ephemeral public keys was sent by a malicious user. When the signature chain for each of the public keys in the second plurality of public keys is validated in block 830, the process proceeds to block 860. In block 860, the second plurality of public keys and their assigned unique identifiers are encrypted with the first device's local storage key. The encrypted second plurality of public keys and their assigned unique identifiers are then stored locally on the first device. 
The techniques described above may likewise be used by the first device to provide the second device with a second plurality of public keys. Once both devices have provided each other with a second plurality of ephemeral public keys, the devices may exchange encrypted communications using the techniques described above with respect to FIGS. 5A and 5B without having to contact the secure communication platform 120 to obtain one or more receivers' user profiles and one or more ephemeral public keys for each of the receivers. However, there may be occasions when devices try to obtain the ephemeral public key from the secure communication platform 120, but the secure communication platform 120 may not be available. FIGS. 9A and 9B illustrate an exemplary process 900 for sending an encrypted communication using an ephemeral public key stored locally on the first device when the secure communication platform 120 is unreachable. In block 905, a first device's secure communication application transmits a request for one or more receiver profiles to the secure communication platform. In block 910, the first device's secure communication application determines whether a response to the request has been received from the secure communication platform. If the first device's secure communication application receives a response from the secure communication platform, process 900 proceeds to block 915, which sends an encrypted communication in accordance with process 500 described above. However, if the first device's secure communication application fails to receive a response, process 900 proceeds to block 920, where the first device's secure communication application retrieves receiver information from the first device's local storage. In this regard, the first device's secure communication application may not receive a response from the secure communication platform within a predetermined period and time out. That is, the secure communication platform may exceed a response time expected by the first device's secure communication application. Accordingly, if no response is received from the secure communication platform within a predetermined period, the first device's secure communication application may determine that the secure communication platform is unreachable. Alternatively, the secure communication platform may indicate that there are no ephemeral public keys stored at the server for the receiver. In yet another alternative, the first plurality of ephemeral keys may be depleted and the secure communication platform may provide an indication that the only key remaining on the secure communication platform is a reserve key (e.g., a key that is provided to senders repeatedly until the receiver has an opportunity to replenish the first plurality of ephemeral public keys maintained by the secure communication platform). When retrieving receiver information from local storage, the first device's secure communication application may have to decrypt the information with the local storage device key. The first device's secure communication application will only have receiver information stored locally if the first device and second device have communicated previously. Furthermore, the receiver information may include at least the receiver's username, the first public key, at least one second public key, at least one first ephemeral public key, and at least one key identifier. In some instances, the first device's local storage may not contain any information for the receiver. 
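The platform-unreachable branch of process 900 reduces to a timeout plus a local lookup. In the sketch below, `request_profiles` and `load_local_receiver_info` are hypothetical helpers, and `send_to_devices` is the sending sketch shown earlier; the error path for missing local information corresponds to the case noted at the start of the next paragraph.

```python
def send_with_fallback(plaintext: bytes, receiver_username: str,
                       timeout_seconds: float = 5.0) -> dict:
    try:
        # Blocks 905-915: prefer fresh receiver profiles (and ephemeral keys)
        # from the secure communication platform; assume the hypothetical helper
        # raises TimeoutError when no response arrives within the period.
        profiles = request_profiles(receiver_username, timeout=timeout_seconds)
        devices = profiles["devices"]
    except TimeoutError:
        # Block 920: fall back to receiver information previously stored on
        # this device (decrypted with the local storage device key).
        info = load_local_receiver_info(receiver_username)
        if info is None:
            # No prior communication with this receiver: surface an error.
            raise RuntimeError("no locally stored receiver information")
        devices = info["devices"]   # includes locally held ephemeral public keys
    # Blocks 925-955 proceed as in the sending sketch; block 960 delivers the
    # packet via the platform or directly to each device (P2P).
    return send_to_devices(plaintext, devices)
```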
In these examples, the first device's secure communication application may display an error message. After obtaining the receiver information from local storage, the first device's secure communication application builds a list of receiving devices based on a union of the receiver devices and the sender's devices in block 925. In block 930, the first device's secure communication application generates a random communication encryption key. In block 935, the first device's secure communication application generates an ephemeral key pair. In block 940, the first device's secure communication application calculates a key-encrypting key (KEK) for each receiver device. The key-encrypting key is calculated by deriving a shared secret using the ephemeral private key the sending secure communication application generated and the receiving device's ephemeral public key retrieved from local storage on the sender's device. The shared secret may be generated according to a key agreement protocol, such as Diffie-Hellman. The generated shared secret and the receiving device's application identifier are inputted into a key derivation function to derive the key-encrypting key. Block 940 may be repeated for each of the one or more receivers' devices. After calculating the key-encrypting key for each of the one or more receivers' devices, the first device's secure communication application encrypts the communication using the random communication encryption key in block 945. In block 950, the random communication encryption key is encrypted using the derived KEK for each receiving device. After the random communication encryption key has been encrypted with the KEK derived for each receiving device, the first device's secure communication application creates a serialized packet that includes the encrypted communication, the ephemeral public key that the first device's secure communication application generated in block 935, the one or more unique identifiers for the receivers' public keys retrieved from local storage, and the one or more encrypted communication encryption keys in block 955. The first device's secure communication application transmits the serialized packet to the one or more receiver devices in block 960. In some examples, the first device's secure communication application may transmit a single serialized packet to the secure communication platform for distribution to the one or more receiving devices. Alternatively, the first device's secure communication application may directly transmit the serialized packet to each of the one or more receiving devices, for example, through a P2P communication. The above-described examples provide a technical solution that enables users to exchange encrypted communications without having to obtain a key from the secure communication platform. In particular, providing a user with a plurality of ephemeral public keys allows the user to send encrypted communications when the user is unable to obtain a public key from the secure communication platform. Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. 
In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements. 15647576 amazon technologies, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Aug 7th, 2020 12:00AM https://www.uspto.gov?id=US11314842-20220426 Hardware implementation of mathematical functions Methods and systems for performing hardware computations of mathematical functions are provided. In one example, a system comprises a mapping table that maps each base value of a plurality of base values to parameters related to a mathematical function; a selection module configured to select, based on an input value, a first base value and first parameters mapped to the first base value in the mapping table; and arithmetic circuits configured to: receive, from the mapping table, the first base value and the first plurality of parameters; and compute, based on a relationship between the input value and the first base value, and based on the first parameters, an estimated output value of the mathematical function for the input value. 11314842 1. A system comprising: a first circuit configured to store input boundary values and parameters related to a mathematical function, wherein each input boundary value of the input boundary values is associated with a subset of the parameters that includes one or more slope values of the mathematical function at the input boundary value and an output value of the mathematical function at the input boundary value; and a second circuit configured to: receive an input value; receive, from the first circuit and based on the input value, a first input boundary value and first parameters associated with the first input boundary value; and compute, based on a relationship between the input value and the first input boundary value, and based on the first parameters, an output value of the mathematical function at the input value. 2. The system of claim 1, wherein the second circuit comprises a subtraction circuit, a multiplier circuit, and an adder circuit. 3. The system of claim 2, wherein the one or more slope values comprise a single slope value; wherein the subtraction circuit is configured to generate a difference result between the input value and the input boundary value; wherein the multiplier circuit is configured to generate a multiplication result between the difference result and the single slope value of the mathematical function at the first input boundary value; and wherein the adder circuit is configured to generate the output value of the mathematical function at the input value based on adding the multiplication result to an output value of the mathematical function at the first input boundary value. 4. 
The system of claim 2, wherein the one or more slope values comprise a first Taylor series coefficient, a second Taylor series coefficient, and a third Taylor series coefficient; wherein the second circuit comprises a register; wherein, in a first time: the subtraction circuit is configured to generate a difference result between the input value and the input boundary value; the multiplier circuit is configured to generate a first multiplication result between the difference result and the first Taylor series coefficient of the mathematical function at the first input boundary value; the adder circuit is configured to generate a first intermediate output based on adding the first multiplication result to the second Taylor series coefficient of the mathematical function at the first input boundary value; and the register is configured to store the first intermediate output; and wherein, in a second time: the multiplier circuit is configured to generate a second multiplication result between the difference result and the first intermediate output; and the adder circuit is configured to generate a second intermediate output based on adding the second multiplication result to the third Taylor series coefficient; and wherein the output value of the mathematical function at the input value is generated based on the second intermediate output and an output value of the mathematical function at the first input boundary value. 5. The system of claim 1, wherein the input boundary values span a numerical range for the mathematical function; and wherein the input boundary values are unevenly distributed within the numerical range. 6. The system of claim 5, wherein a separation between adjacent input boundary values is determined based on a degree of linearity of the mathematical function between the adjacent input boundary values. 7. The system of claim 1, wherein the mathematical function is symmetrical or anti-symmetrical between a first input subrange and a second input subrange; and wherein the input boundary values are defined based on the first input subrange. 8. The system of claim 1, wherein the first circuit comprises a hardware mapping table including a plurality of buckets, each bucket of the plurality of buckets being associated with an address and configured to store an input boundary value and parameters related to the mathematical function associated with the input boundary value. 9. The system of claim 8, wherein the second circuit comprises a selection module configured to select a bucket of the plurality of buckets based on generating an address associated with the selected bucket based on the input value; and wherein the second circuit is configured to generate the output value of the mathematical function at the input value based on the first input boundary value and the first parameters stored in the selected bucket. 10. 
The system of claim 9, wherein the plurality of buckets are organized into a plurality of blocks of buckets, each block of buckets having a base address associated with a first bucket of the block of buckets; wherein the selection module comprises: a selection table that associates a plurality of base addresses with a plurality of block sizes of the blocks of buckets; and a processing circuit configured to: retrieve, based on a first part of the input value, a first base address and a first block size from the selection table; determine an address of the selected bucket based on the first base address, a second part of the input value, and the first block size; and provide the address to the hardware mapping table to select the bucket. 11. The system of claim 10, wherein the first part of the input value comprises an exponent of the input value; and wherein the second part of the input value comprises a mantissa of the input value. 12. The system of claim 11, wherein the processing circuit is configured to: extract a number of most significant bits (MSBs) from the mantissa of the input value based on the first block size, and generate the address based on adding the extracted MSBs to the first base address. 13. The system of claim 10, wherein: the mathematical function is a first mathematical function; the input boundary values are first input boundary values; the parameters are first parameters; and wherein the first circuit is configured to store second input boundary values and second parameters related to a second mathematical function, and to associate the second input boundary values with the second parameters; and wherein the second circuit is configured to receive a selection of one of the first mathematical function or the second mathematical function and compute the output value based on the selection. 14. The system of claim 13, further comprising a computing engine configured to perform arithmetic operations for a first neural network layer to generate a first weighted sum and to perform arithmetic operations for a second neural network layer to generate a second weighted sum; wherein, at a first time: the first circuit is programmed, based on first programming information, to store a first plurality of input boundary values and first parameters related to a first activation function; and the second circuit is configured to generate a first output value of processing of the first weighted sum with the first activation function for the first neural network layer; and wherein, at a second time: the first circuit is programmed, based on second programming information, to store a second plurality of input boundary values and second parameters related to a second activation function; and the second circuit is configured to generate a second output value of processing of the second weighted sum with the second activation function for the second neural network layer. 15. 
A method comprising: storing, at a first circuit, input boundary values and parameters related to a mathematical function, wherein each input boundary value of the input boundary values is associated with a subset of the parameters that includes one or more slope values of the mathematical function at the input boundary value and an output value of the mathematical function at the input boundary value; receiving, at a second circuit, an input value; receiving, at the second circuit from the first circuit, a first input boundary value and first parameters associated with the first input boundary value; and computing, based on a relationship between the input value and the first input boundary value, and based on the first parameters, an output value of the mathematical function at the input value. 16. The method of claim 15, wherein the one or more slope values comprise a single slope value; and wherein the method further comprises: generating a difference result between the input value and the input boundary value; generating a multiplication result between the difference result and the single slope value of the mathematical function at the first input boundary value; and generating the output value of the mathematical function at the input value based on adding the multiplication result to an output value of the mathematical function at the first input boundary value. 17. The method of claim 15, wherein the one or more slope values comprise a first Taylor series coefficient, a second Taylor series coefficient, and a third Taylor series coefficient; wherein the method comprises, in a first time: generating a difference result between the input value and the input boundary value; generating a first multiplication result between the difference result and the first Taylor series coefficient of the mathematical function at the first input boundary value; generating a first intermediate output based on adding the first multiplication result to the second Taylor series coefficient of the mathematical function at the first input boundary value; and in a second time: generating a second multiplication result between the difference result and the first intermediate output; and generating a second intermediate output based on adding the second multiplication result to the third Taylor series coefficient; and generating the output value of the mathematical function at the input value based on the second intermediate output and an output value of the mathematical function at the first input boundary value. 18. The method of claim 15, wherein the input boundary values span a numerical range for the mathematical function; and wherein the input boundary values are unevenly distributed within the numerical range. 18 CROSS-REFERENCE TO RELATED APPLICATIONS This application is a Continuation of U.S. Pat. No. 10,740,432, granted Aug. 11, 2020, and entitled “Hardware Implementation of Mathematical Functions,” the content of which is herein incorporated by reference in its entirety. BACKGROUND Artificial neural networks are computing systems with an architecture based on biological neural networks. Artificial neural networks can be trained, using training data, to learn about how to perform a certain computing task. A neural network may include a set of processing nodes. Each processing node can process a piece of the input data based on a weight to generate an output. The outputs can be processed using an activation function to generate a decision. 
A neural network may be implemented by a neural network processor including, for example, circuitries and data paths, part of which can be used to implement the activation functions. The throughput and accuracy of the neural network processing may depend on how the activation functions are implemented in the neural network processor or other hardware components used for the neural network processing. BRIEF DESCRIPTION OF THE DRAWINGS Various embodiments in accordance with the present disclosure will be described with reference to the drawings, in which: FIG. 1 illustrates an example classifier device that uses techniques disclosed herein to process data; FIGS. 2A-2C are simplified block diagrams illustrating a prediction model and the computations that use techniques disclosed herein, according to certain aspects of the present disclosure; FIG. 3 illustrates simplified block diagrams for some of the components of an apparatus for implementing the prediction model of FIGS. 2A-2C, according to certain aspects of the present disclosure; FIGS. 4A-4G illustrate simplified block diagrams of hardware implementation of mathematical functions that can be used in the apparatus of FIG. 3 for neural network processing, according to certain aspects of the present disclosure; FIG. 5 illustrates an example flow diagram of performing hardware-based mathematical function processing of data, according to certain aspects of the present disclosure; and FIG. 6 illustrates an example of a computing device, according to certain aspects of the disclosure. DETAILED DESCRIPTION Examples of the present disclosure relate to hardware implementation of a mathematical function which can be used, for example, for neural network processing. In one example, a system comprises a programmable mapping table that maps each input boundary value of a plurality of input boundary values to parameters related to a mathematical function. The system also includes a selection module configured to select, based on an input value, a first input boundary value and first parameters mapped to the first input boundary value in the programmable mapping table. The system further includes arithmetic circuits configured to receive, from the mapping table, the first input boundary value and the first parameters. The arithmetic circuits can compute, based on a relationship between the input value and the first input boundary value, and based on the first parameters, an estimated output value of the mathematical function for the input value. In some examples, the system can be part of a neural network processor and can be used to implement an activation function for neural network processing. The system can receive intermediate values of a neural network layer as inputs, select the input boundary values and activation function parameters mapped to the input boundary values based on the inputs, and compute estimated output values of the activation functions for the intermediate values. An artificial neural network (hereinafter “neural network”) may include multiple processing nodes. The processing nodes can be divided into layers including, for example, an input layer, a number of intermediate layers (also known as hidden layers), and an output layer. Each processing node of the input layer receives an element of an input set and scales the element with a weight to indicate the element's degree of influence on the output.
The processing nodes in the intermediate layers may combine the scaled elements received from each processing node of the input layer to compute a set of intermediate outputs. For example, each processing node in the intermediate layers may compute a sum of the element-weight products and then generate an intermediate output by applying an activation function to the sum. The intermediate outputs from each processing node of one intermediate layer may be considered as an activated vote (or no-vote), associated with a weight indicating the vote's influence, to determine the intermediate output of the next intermediate layer. The output layer may generate a binary output (e.g., “yes” or “no”) based on the activated votes (or no-votes) and their associated weights. To improve the speed of neural network processing, the computations of an artificial neural network can be performed using hardware components, such as a neural network processor. The neural network processor can include a systolic array to perform arithmetic operations to generate the element-weight products and the sums of the element-weight products. The neural network processor may also include hardware circuitries, such as multiplexor circuitries configured as a lookup table (LUT), to implement the activation function processing. For example, the LUT can map a set of discrete candidate inputs (of an input range) to a set of discrete candidate outputs, with each candidate output representing a result of processing a corresponding candidate input with the activation function. To apply the activation function to a sum of element-weight products, the input can be quantized to one of the candidate inputs, which the mapping table can then map to one of the candidate outputs. Although the deployment of a lookup table can speed up activation function processing, the speed improvement is achieved at the cost of reduced accuracy. For example, the quantization of the input to one of the candidate inputs introduces quantization error, and the quantization error is translated to an output error when the mapped candidate output does not equal the exact output value the activation function would have generated by processing the actual input. The quantization error can be reduced by expanding the lookup table to map a larger set of discrete candidate inputs (of the same input range) to a larger set of corresponding discrete candidate outputs. But the expansion increases the complexity of the multiplexor circuitries, which in turn limits the number of discrete candidate inputs and the improvement in quantization error. Moreover, the accuracy of an LUT, once created, is fixed based on the number of discrete candidate inputs and cannot be changed. Such inherent inflexibility can limit further improvement in the accuracy of the activation function processing, or potential improvement in the speed of the activation function processing if accuracy can be reduced to trade off for speed.
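To make the LUT approach and its quantization error concrete, the following Python sketch (not part of the disclosure; the entry count, input range, and choice of the sigmoid are illustrative assumptions) builds a small lookup table and compares its output against the exact function value:

    import math

    def sigmoid(x):
        return math.exp(x) / (1.0 + math.exp(x))

    # Map 16 discrete candidate inputs over [-4, 4] to pre-computed
    # candidate outputs, as a multiplexor-based LUT would.
    NUM_ENTRIES = 16
    LO, HI = -4.0, 4.0
    STEP = (HI - LO) / (NUM_ENTRIES - 1)
    lut = [sigmoid(LO + i * STEP) for i in range(NUM_ENTRIES)]

    def lut_sigmoid(x):
        # Quantize the input to the nearest candidate input, then map it
        # to the corresponding candidate output.
        i = round((x - LO) / STEP)
        i = max(0, min(NUM_ENTRIES - 1, i))  # clamp to the input range
        return lut[i]

    x = 0.37
    print(lut_sigmoid(x), sigmoid(x))  # the gap is the quantization error

Doubling NUM_ENTRIES halves the quantization step, but, as noted above, only at the cost of more complex multiplexor circuitries, and the accuracy is fixed once the table is built.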
Examples of the present disclosure relate to hardware implementation of mathematical functions. Examples of such functions include activation functions, which can be used for neural network processing. In one example, a system comprises a programmable mapping table that maps each input boundary value of a plurality of input boundary values to parameters related to a mathematical function, such as an activation function. The system also includes a programmable selection module configured to select, based on an input value, a first input boundary value and first parameters mapped to the first input boundary value in the programmable mapping table. The system further includes arithmetic circuits configured to receive, from the programmable mapping table, the first input boundary value and the first parameters. The arithmetic circuits can compute, based on a relationship between the input value and the first input boundary value, and based on the first parameters, an estimated output value of the mathematical function for the input value. In some examples, the system can be part of a neural network processor to implement an activation function. The system can receive intermediate values of a neural network layer as inputs, select the input boundary values and activation function parameters mapped to the input boundary values based on the inputs, and compute estimated output values of the activation functions for the intermediate values. Compared with the aforementioned LUT approach, examples of the present disclosure use a two-stage process to first map an input to a set of function parameters using the mapping table, and then compute the estimated activation function output for that input based on the mathematical function parameters using arithmetic circuits, which can improve the accuracy of the activation function processing. For example, the accuracy of the processing is no longer dominated by the number of input boundary values and the resulting quantization errors; instead, accuracy can be improved based on the mathematical function parameters, which approximate the input-output relationship of the mathematical function between adjacent input boundary values. For example, depending on the linearity of the activation function between adjacent input boundary values, the mathematical function parameters can include a rate of change (e.g., a slope) measurement, a set of coefficients representing a Taylor series expansion, etc. The input boundary values and the mathematical function parameters can be programmed into the mapping table based on the activation function, and the mapping table can be programmed to implement different activation functions based on different sets of input boundary values and function parameters. In addition, the programmability of the mapping table also allows a tradeoff between speed and accuracy. For example, to improve accuracy, the entire mapping table can be used to store more fine-grained input boundary values and the associated function parameters of a single activation function. In a case where accuracy can be reduced, the mapping table can be partitioned among a plurality of activation functions, with each partition storing a reduced number of coarse-grained input boundary values and the associated function parameters of an activation function. Storing a plurality of activation functions in the mapping table can improve the processing speed by, for example, avoiding or reducing the delays in loading a new activation function for different neural network layers. In some examples, techniques such as non-uniform quantization can also be applied in the programming of the input boundary values and the associated function parameters to further improve the accuracy of activation function processing.
For example, as part of the programming, more fine-grained input boundary values can be mapped to regions of the mathematical function with a high rate of change, whereas more coarse-grained input boundary values can be mapped to regions of the mathematical function with a low rate of change. All of this can improve the operational flexibility of a neural network processor, as well as the speed and/or accuracy of neural network processing. In the description herein, various embodiments are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiments being described. FIG. 1 illustrates an example classifier device 100 that uses techniques disclosed herein to process data. Classifier device 100 can be, for example, a computing device operating a software application 102 and a prediction model 103 to predict information included in a data sequence, and perform a pre-determined function based on the prediction. For example, classifier device 100 can be part of an image recognition service provided to identify certain objects (e.g., texts, a person, etc.) from an image. It is understood that the image recognition service is merely provided as an illustrative example, and that techniques disclosed herein can be used for other data processing applications including, for example, text-based data processing (e.g., processing of search queries), audio data processing, etc. Moreover, classifier device 100 may operate a number of different prediction models to process different input data, either in parallel or at different times. In some examples, the image recognition service can be provided in a multi-tenant compute service system. The multi-tenant compute service system may typically include a plurality of servers that can host data and be used by multiple clients or organizations to run instances, such as virtual machine instances or bare-metal instances (e.g., operating systems that run directly on the server hardware). In most cases, instances, such as bare-metal or virtual machine instances, in a multi-tenant compute service system may be allocated to a client when the client needs them and decommissioned when they are no longer needed, such that the resources can be reallocated to other clients. In the present disclosure, the terms “tenant,” “client,” and “customer” may be used interchangeably, although such terms do not necessarily imply the existence of any particular business arrangement. For example, the term “instance” may refer to, for example, an instance that is executed directly on server hardware or as a virtual machine. Different types of instances generally correspond to different hardware functions and/or arrangements of hardware (e.g., different amounts of available memory and/or processing hardware). In the example of FIG. 1, the multi-tenant compute service system may provide the image recognition service when the client needs it and decommission it when it is no longer needed, such that the resources supporting the image recognition service (e.g., access to software application 102, and the underlying hardware resources for processing software application 102) can be reallocated to other clients.
Different clients (or one client) may request software application 102 to perform processing of different input data using the same or different prediction models including prediction model 103. In the example of FIG. 1, software application 102 can receive pixel data of an image 104 from a user. Image 104 may include an array of pixels. Software application 102 can perform analysis on the pixel data, and predict one or more objects 106 depicted in image 104. The analysis may include, for example, comparing the pixel data against a set of pre-determined features data. The pre-determined features data may include data associated with a set of pre-determined visual image features such as, for example, a nose object, a mouth object, etc. The pre-determined features data may also include data associated with non-visual image features, or a combination of visual and non-visual image features. As will be discussed in more detail below, software application 102 may employ prediction model 103 to compute a set of scores based on the pixel data of image 104. The set of scores may represent, for example, the likelihood of image 104 including the image features represented by the features data. Software application 102 can then determine other information about the content of image 104 based on the scores. For example, based on the scores, software application 102 can determine that image 104 is an image of, for example, a panda, a cat, or other objects. The present disclosure provides examples of techniques to allow allocation and placement of resources for deployment of prediction model 103 to be performed at different times and by different systems, which can expedite the operations of software application 102, as will be discussed below. Prediction model 103 can be in the form of an artificial neural network. The artificial neural network may include a plurality of processing nodes, with each processing node configured to process part of the input pixel data, or to further process the intermediate outputs from other processing nodes. FIG. 2A illustrates an example of prediction model 103 that uses techniques disclosed herein. In FIG. 2A, prediction model 103 may be a multi-layer neural network such as a deep neural network (DNN), a convolutional neural network (CNN), etc. Prediction model 103 may include an input layer 207, a set of intermediate layers including intermediate layers 209 and 211, and an output layer (not shown in FIG. 2A). It is understood that prediction model 103 can also include other different types of neural networks including, for example, long short-term memory (LSTM), multilayer perceptron (MLP), multiscale densenet, etc. Layer 207 may process pixel data representing different portions of image 104. For example, in the example of FIG. 2A, layer 207 may process the pixel data of image 104. Each processing node of layer 207 is assigned to receive a pixel value (e.g., x0, x1, x2, . . . xn) corresponding to a pre-determined pixel within image 104, and transmit one or more weights with the received pixel value to layer 209. In a case where prediction model 103 is a DNN, each processing node of layer 207 can be assigned a set of weights defined based on a matrix W1. Each processing node of layer 207 can send the received pixel value and the assigned weights to each processing node of layer 209.
In a case where prediction model 103 is a CNN, groups of the processing nodes of layer 207 may share a set of weights, and each group may send the set of weights and the pixel values received by the group of processing nodes to a single processing node of layer 209. Different neural network models may include different topologies (e.g., including a different number of layers, different connections between layers, etc.), and/or include a different set of weights for each layer. Layer 209 may process the scaled outputs from layer 207 to generate a set of intermediate outputs. For example, assuming processing node 210a of layer 209 is connected to n processing nodes in layer 207, processing node 210a may generate a sum of the scaled outputs received from layer 207 based on the following equation: $\text{sum}_{210a} = \sum_{i=0}^{n} (W1_i \times x_i)$ (Equation 1) Here, $\text{sum}_{210a}$ represents a sum generated by processing node 210a. $W1_i \times x_i$ represents a scaling of a particular pixel value (e.g., $x_0$) with the associated weight (e.g., $W1_0$) by a processing node of layer 207. In a case where prediction model 103 is a DNN, each processing node of layer 209 may receive the scaled pixel values from each processing node of layer 207, and then generate a sum (e.g., $\text{sum}_{210a}$) by summing the scaled pixel values. The sum may also represent a dot-product between an input vector comprising a number of elements (e.g., pixel values) and a weight vector (e.g., W1). In a case where prediction model 103 is a CNN, each processing node of layer 209 may generate the sum based on the scaling of pixel values from a group of processing nodes of layer 207. The sum may represent a convolution result between a group of pixel values and a filter comprising the weight values. FIG. 2B illustrates an example of a convolution operation layer 209 may perform. In FIG. 2B, filter 230 may include a two-dimensional array of weights. The weights in filter 230 may represent a spatial distribution of pixels for certain features to be detected from the image. The two-dimensional array may have a height of R rows and a width of S columns, and is typically smaller than an input image with a height of H pixels and a width of W pixels. Each weight may be mapped to a pixel in a rectangular block of pixel values with the same R rows and S columns. A processing node of layer 209 (e.g., processing node 210a) can receive, from a group of processing nodes of input layer 207, a group 240 of pixel values corresponding to a first rectangular block of pixels from the input image, and generate a convolution output 242 based on a summation of multiplication results between each weight of filter 230 and each corresponding pixel in group 240 according to Equation 1, to generate a dot-product between a matrix represented by filter 230 and a matrix represented by group 240. Another processing node of layer 209 can also receive, from another group of processing nodes of input layer 207, a group 244 of pixel values corresponding to a second rectangular block of pixels from the input image, and generate a convolution output 246 based on a summation of multiplication results between each weight of filter 230 and each corresponding pixel in group 244 according to Equation 1, to generate a dot-product between the matrix of filter 230 and a matrix represented by group 244. In some examples, each convolution output in FIG. 2B (e.g., convolution output 242, convolution output 246, etc.) can correspond to the output of a processing node of layer 209.
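To make Equation 1 and the convolution of FIG. 2B concrete, the following Python sketch (not part of the disclosure; the image, filter, and stride values are illustrative assumptions) computes the weighted sum for one processing node and slides the same dot-product over an input image to produce convolution outputs:

    # A minimal sketch of Equation 1: a processing node of layer 209
    # computes a dot-product between input values and weights.
    def weighted_sum(xs, ws):
        return sum(w * x for w, x in zip(ws, xs))

    # The same dot-product yields one convolution output (FIG. 2B) when
    # an R x S filter slides over the image with stride D.
    def conv_outputs(image, filt, stride):
        H, W = len(image), len(image[0])
        R, S = len(filt), len(filt[0])
        out = []
        for r in range(0, H - R + 1, stride):
            row = []
            for c in range(0, W - S + 1, stride):
                block = [image[r + i][c + j] for i in range(R) for j in range(S)]
                weights = [filt[i][j] for i in range(R) for j in range(S)]
                row.append(weighted_sum(block, weights))
            out.append(row)
        return out

    image = [[1, 2, 3, 0], [0, 1, 2, 3], [3, 0, 1, 2], [2, 3, 0, 1]]
    filt = [[1, 0], [0, 1]]  # a 2 x 2 filter, analogous to filter 230
    print(conv_outputs(image, filt, stride=2))

Each entry of the printed array corresponds to one convolution output, in the manner of convolution outputs 242 and 246.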
In some examples, the pixel data in the input image may be referred to as an input feature map to indicate that the pixels are processed by the same filter (or same sets of filters) corresponding to certain feature(s). The convolution outputs may be referred to as an output feature map to indicate that the output is the result of processing an input feature map with the filter. As shown in FIG. 2B, the convolution operations can be arranged in a sliding-window such that the second rectangular block overlaps, or is otherwise adjacent to, the first rectangular block in the input image. For example, in the example of FIG. 2B, D may be a distance of stride (in pixels) of the sliding-window for each convolution operation, such that the block of pixels corresponding to group 244 may be situated at a distance D (in terms of pixels) from the block of pixels corresponding to group 240, and the next block of pixels may also be situated at the same distance D from group 244. Other processing nodes of layer 209 may also receive groups of pixels corresponding to other rectangular blocks and generate other intermediate outputs. The convolution outputs can be part of a convolution output array. The array of convolution outputs can have a smaller height and a smaller width than the input image. Rectangular blocks of the convolution outputs can be further grouped, and convolution operations can be performed at layer 211 between the groups of convolution outputs and another set of filter weights to generate another set of convolution outputs. Referring back to FIG. 2A, one processing node of layer 209 may be configured to generate the convolution output elements of one convolution output array, and a set M of processing nodes of layer 209 can correspond to a set M of convolution output arrays. The processing node of layer 209 can also process each convolution output with an activation function to generate an activation output. The activation function may translate the convolution output into a decision of whether to forward the convolution output to intermediate layer 211 to influence the classifier decision (analogous to the firing of a biological neuron). An example of the activation function can be a rectified linear unit (ReLU) defined according to the following equation: $\text{ReLU}(y) = \max(0, y)$ (Equation 2) In addition to ReLU, other forms of activation function can also be used including, for example, a softplus function (which can be a smooth approximation of a ReLU function), a hyperbolic tangent function (tanh), an arc tangent function (arctan), a sigmoid function, a Gaussian function, etc. A processing node of layer 209 (e.g., processing node 210a) may process the sum with the ReLU function to generate a first intermediate output based on the following equation: $\text{first\_intermediate\_output}_{210a} = \text{ReLU}(\text{sum}_{210a})$ (Equation 3) In a case where prediction model 103 is a CNN, prediction model 103 may include a pooling layer (not shown in FIG. 2A) to reduce the size of the first intermediate outputs. For example, the pooling layer may perform a down-sampling operation and forward a maximum intermediate output (generated based on the ReLU function) among a group of intermediate outputs (while discarding the rest of the intermediate outputs in the group) to layer 211.
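The activation and pooling steps of Equations 2 and 3 can be sketched in a few lines of Python (the weighted-sum values are made up for illustration):

    # A sketch of Equations 2 and 3: apply the ReLU activation function
    # to each node's weighted sum, then down-sample the group of
    # intermediate outputs with max pooling.
    def relu(y):
        return max(0.0, y)  # Equation 2: ReLU(y) = max(0, y)

    sums = [-1.2, 0.4, 2.5, -0.3]           # weighted sums from layer 209 nodes
    intermediate = [relu(s) for s in sums]  # Equation 3

    # Pooling layer: forward only the maximum intermediate output of the
    # group to layer 211, discarding the rest.
    pooled = max(intermediate)
    print(intermediate, pooled)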
Layer 211 may further process the scaled intermediate outputs from layer 209 by, for example, performing additional convolution operations based on different sets of filters. The outputs from each processing node of layer 211 may be forwarded to other higher intermediate layers, or to an output layer (not shown in FIG. 2A). The output layer may form an output vector representing, for example, a probability that certain features are included in image 104, and/or a probability that image 104 includes an image of a panda. For example, the output vector may be compared against a reference vector associated with a nose object of a panda, or a reference vector associated with a panda. A decision about whether image 104 is an image of a panda can be determined based on the comparison result. FIG. 2C illustrates an example of a sigmoid function which can be used as an activation function in prediction model 103 of FIG. 2A. Graph 280 on the left illustrates an example of a sigmoid function S(x), which can be defined according to the following equation: $S(x) = \frac{e^x}{1 + e^x}$ (Equation 4) In Equation 4, $e^x$ refers to the exponential function. The sigmoid function can accept an infinite input range, but as shown in FIG. 2C, for an input larger than 3 or less than −3, the output of the sigmoid function is essentially flat and does not change with the input. There are different ways to compute the value of S(x) for a particular value of x. In one example, the value of the exponential function $e^x$ of the numerator can be computed, based on which the denominator $1 + e^x$ can also be computed, followed by the ratio between the numerator and the denominator. In another example, the entire input range to the sigmoid function can be divided into a plurality of input subranges, and the values of the sigmoid function corresponding to the plurality of input subranges can be pre-computed and stored in a mapping table. Each input subrange can be defined by a pair of input boundary values. The value of the sigmoid function for each input subrange can be evaluated at a base value representing the input subrange, and mapped to that base value in the mapping table. The base value may include, for example, one of the input boundary values of the input subrange, or any point within the input subrange (e.g., a mid-point). For an input that matches a base value stored in the mapping table, a value of S(x) can be retrieved from the mapping table. For an input (denoted as ‘A’ in FIG. 2C) that does not match any of the base values in the mapping table, a value of S(x) for that input can be computed based on a process of extrapolation. Graph 290 of FIG. 2C illustrates an example process of extrapolation to estimate the value of S(x) at input A. As shown in graph 290, two input boundary values (denoted as “$x_{292}$” and “$x_{294}$”) that define an input subrange including input A can be determined. The value of S(x) at input A can be extrapolated from the value of S(x) at one of input boundary values $x_{292}$ or $x_{294}$, which acts as a base value. For example, to extrapolate from the base value of input boundary value $x_{292}$, a difference Δ can be determined between input A and input boundary value $x_{292}$, and a slope (denoted “m”) of S(x) can be determined at input boundary value $x_{292}$. The value of S(x) at input A can be estimated based on the following equation: Estimated $S(A) = S(x_{292}) + m \times \Delta$ (Equation 5) In Equation 5, the value of S(A) can be estimated by first estimating a change from $S(x_{292})$ (which represents the value of S(x) at $x_{292}$) to S(A). The estimation can be performed by multiplying the slope m (which indicates the rate of change of S(x) at $x_{292}$) and Δ (which represents the difference between A and $x_{292}$).
The estimated change can then be added to $S(x_{292})$ to obtain the estimation of S(A). In some examples, the estimation of S(A) can also be based on a Taylor series expansion based on the following equation: Estimated $S(A) = S(x_{292}) + \sum_{n=1}^{\infty} \left( \frac{S^n(x_{292})}{n!} \times \Delta^n \right)$ (Equation 6) In Equation 6, “n!” represents a factorial of n, “$S^n(x_{292})$” represents an n-th degree derivative of S(x) evaluated at input boundary value $x_{292}$, whereas $\Delta^n$ refers to raising the difference (between input A and input boundary value $x_{292}$) to the power n. In some examples, a Taylor series expansion up to the third degree (n=3) may provide sufficient accuracy in estimating the value of S(A). Equation 6 can be rewritten as follows: Estimated $S(A) = S(x_{292}) + S^1(x_{292}) \times \Delta + \frac{S^2(x_{292})}{2} \times \Delta^2 + \frac{S^3(x_{292})}{6} \times \Delta^3$ (Equation 7) Equation 7 can be further rewritten as follows: Estimated $S(A) = S(x_{292}) + ((I \times \Delta + J) \times \Delta + K) \times \Delta$ (Equation 8) In Equation 8, parameter I can represent the Taylor series coefficient $S^3(x_{292})/6$, parameter J can represent the Taylor series coefficient $S^2(x_{292})/2$, whereas parameter K can represent the Taylor series coefficient $S^1(x_{292})$. Although FIG. 2C illustrates an extrapolation operation from an input boundary value (e.g., $x_{292}$), the extrapolation operation can also be performed relative to a base value between the input boundary values, such as a mid-point, as described above. As will be described in more detail below, a programmable mapping table can be used as part of a neural network processor (or in conjunction with a neural network processor) to provide an estimation of an activation function based on Equations 6-8.
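As a numerical illustration of Equations 5-8 (the base value and input are made-up numbers; the sigmoid derivatives are standard calculus results, not values from the disclosure), the following Python sketch extrapolates S(A) both with a single slope and with the Horner-form Taylor expansion of Equation 8:

    import math

    def S(x):  # Equation 4: the sigmoid function
        return math.exp(x) / (1.0 + math.exp(x))

    # First three derivatives of the sigmoid, used as Taylor coefficients.
    def S1(x):
        s = S(x)
        return s * (1 - s)
    def S2(x):
        s = S(x)
        return s * (1 - s) * (1 - 2 * s)
    def S3(x):
        s = S(x)
        return s * (1 - s) * (1 - 6 * s + 6 * s * s)

    x292 = 0.5        # base value (illustrative)
    A = 0.6           # input falling in the subrange anchored at x292
    delta = A - x292  # the difference Δ

    # Equation 5: first-order extrapolation using a single slope m.
    est_slope = S(x292) + S1(x292) * delta

    # Equation 8: third-degree Taylor expansion in Horner form,
    # with I = S3/6, J = S2/2, K = S1.
    I, J, K = S3(x292) / 6, S2(x292) / 2, S1(x292)
    est_taylor = S(x292) + ((I * delta + J) * delta + K) * delta

    print(S(A), est_slope, est_taylor)  # exact vs. estimated values

The nested Horner form of Equation 8 is also what allows the multiply-add datapath of FIG. 4G, described later, to evaluate the expansion in three iterations with a single multiplier and a single adder.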
FIG. 3 shows an apparatus 300 according to some embodiments of the present disclosure. Apparatus 300 may be part of a computer system, e.g., a host server. Apparatus 300 may be part of a multi-tenant compute service system and can communicate with a host device (not shown in FIG. 3) to provide computing and memory resources for a computing service. For example, referring back to FIG. 1, apparatus 300 may provide computing and memory resources for computations with prediction model 103. A host device can operate software application 102 and communicate with apparatus 300 to perform one or more image recognition tasks based on computations with prediction model 103. The host device may transmit multiple image data sets associated with multiple contexts, and provide the multiple image data sets to apparatus 300, which can generate multiple outputs to predict, for example, whether each of the multiple image data sets includes a pre-determined object. In the example of FIG. 3, apparatus 300 may include a neural network processor 302 coupled to memory 312, a direct memory access (DMA) controller 316, and a host interface 314 via an interconnect 318. As will be discussed in more detail, neural network processor 302 can provide the computing resources to support the computations with prediction model 103. Neural network processor 302 can include an integrated circuit such as, for example, a System-on-Chip (SoC). Memory 312 may be configured to store the instructions, input data (e.g., pixel groups 240 and 244 of FIG. 2B) and the weights (e.g., filter 230) received from the host device. Memory 312 may also be configured to store the output of neural network processor 302 (e.g., convolution outputs 242 and 246 of FIG. 2B). Memory 312 may include any suitable memory, e.g., dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate DRAM (DDR DRAM), storage class memory (SCM), flash memory devices, etc. DMA controller 316 may be configured to perform DMA operations to transfer data between neural network processor 302 and the host device. For example, as discussed above, the host device can store the instructions, input data, and the weights at memory 312. The host device can provide the memory addresses for the stored instructions, data and weights to neural network processor 302 (e.g., in the form of memory descriptors). Neural network processor 302 can then obtain the stored instructions, data, and weights based on the memory addresses provided by the host device. Neural network processor 302 can also store the results of computations (e.g., one or more image recognition decisions) at memory 312, and provide the memory addresses for the stored results to the host device. Host interface 314 may be configured to enable communication between the host device and neural network processor 302. For example, host interface 314 may be configured to transmit the memory descriptors including the memory addresses of the stored data (e.g., input data, weights, results of computations, etc.) between the host device and neural network processor 302. Host interface 314 may include, for example, a peripheral component interconnect express (PCIe) interface or any suitable interface for communicating with the host device. Neural network processor 302 can provide the computing resources to support the computations with one or more instances of prediction model 103. In the example of FIG. 3, neural network processor 302 may include a state buffer 322, a computing engine 324, an output buffer 328, a post-processor 329, and a computation controller 330. State buffer 322 may be configured to provide caching of data used for computations at computing engine 324. The data cached at state buffer 322 may include, for example, the input data and weights acquired from memory 312, as well as intermediate outputs of computations at computing engine 324. The caching can reduce the effect of memory access bottlenecks (e.g., caused by the latencies at memory 312, DMA controller 316, interconnect 318, etc.) on the performance of computing engine 324. State buffer 322 can be controlled by computation controller 330 to pre-fetch a set of weights to computing engine 324, and then fetch the input data as a sequential stream to computing engine 324, which performs the computations and generates a sequential stream of intermediate output data. The stream of intermediate output data can be collected at output buffer 328 and post-processed by post-processor 329. The post-processed stream of intermediate output data can be stored at state buffer 322. The intermediate output data can be stored at a different location in state buffer 322 from where the input data and weights are stored to avoid overwriting, for example, input data that are yet to be fetched to computing engine 324. State buffer 322 can be an on-chip memory device and may include, for example, static random access memory (SRAM). Computing engine 324 may include a set of circuitries configured to perform one or more arithmetic operations involved in neural network computations.
For example, computing engine 324 may include a set of multipliers to perform the scaling of input data with associated weights, and a set of adders to generate a sum of the results of multiplications representing dot-products and convolution results, as discussed above. Computing engine 324 may obtain the weights used for the scaling from state buffer 322. The outputs of computing engine 324 can be post-processed (e.g., filtered, combined, accumulated, etc.) at a post-processor (not shown in FIG. 3), and the post-processed outputs can be processed by activation engine 326 to generate the intermediate outputs for a neural network layer. The intermediate outputs can be written back into state buffer 322, which can then fetch the intermediate outputs to computing engine 324 for subsequent processing (e.g., the processing by the next neural network layer). In some examples, computing engine 324 may be controlled by computation controller 330 to perform computations for different neural network layers sequentially, to reduce the sizes of the weights stored in state buffer 322. For example, computing engine 324 may be controlled to perform the arithmetic operations for one neural network layer (e.g., layer 207) within one time period, and then to perform the arithmetic operations for the next neural network layer (e.g., layer 209) in the next time period. Computation controller 330 can control state buffer 322 to pre-fetch weights of a neural network layer (and input data for the input neural network layer) from memory 312, and fetch the weights and input data for a neural network layer to computing engine 324 to perform computations for that neural network layer. Computation controller 330 can also configure activation engine 326 for that neural network layer based on, for example, setting a particular activation function for that neural network layer and setting a particular input range for that neural network layer. After the computation for the neural network layer completes, computation controller 330 can control state buffer 322 to fetch another set of weights and input data for the next neural network layer to computing engine 324, and configure activation engine 326 (e.g., by setting a different activation function and/or setting a different input range) for the next neural network layer. FIG. 4A-FIG. 4G illustrate examples of internal components of activation engine 326. As shown in FIG. 4A, activation engine 326 includes a programmable module 402 and an arithmetic module 404. Programmable module 402 includes a function table 406 and a selection module 408. Function table 406 can store one or more mapping tables. Each of the mapping tables can represent an activation function, and each mapping table can map a set of base values to a set of parameters related to the activation function evaluated at the corresponding base values. Each base value can be an input boundary value of an input subrange, or a point within the input subrange (e.g., a mid-point). The set of parameters can include, for example, a value of the activation function at the corresponding base value, a change rate (e.g., a slope) of the activation function at the corresponding base value, a set of Taylor series coefficients of the activation function at the corresponding base value, etc. Selection module 408 can accept a set of inputs including function selection 410 and input value 412 and generate a selection signal 414.
Selection module 408 can forward selection signal 414 to function table 406 to select a base value 416 and a set of parameters 418 mapped to the base value. Parameters 418 can include, for example, a value of an activation function identified by function selection 410 at base value 416, as well as a change rate and/or a set of Taylor series coefficients of the activation function at base value 416. Base value 416 and parameters 418, as well as input value 412, can be provided to arithmetic module 404, which includes arithmetic circuitries (e.g., adders, multipliers, subtractors, etc.) to perform extrapolation operations based on, for example, Equation 5 to Equation 8 as described above to compute output data 420. Output data 420 can provide an estimated output of an activation function (selected based on function selection 410) when processing input value 412. In some examples, input value 412 may be extracted as the exponents of floating numbers representing a set of weighted sums generated by computing engine 324 for a neural network layer, and output data 420 can represent the intermediate output of that neural network layer. In some examples, function table 406 includes a programmable memory device. The programmable memory device may include, for example, a set of registers, an SRAM device, a DRAM device, non-volatile memory devices such as storage class memory (SCM), etc. The programmable memory device can be organized into a plurality of buckets/entries including, for example, buckets 430 and 431. Each bucket can represent an input subrange and can be associated with an address (e.g., address 422 of bucket 430, denoted as “address0a”), and each bucket can store a base value (e.g., base value 424, denoted as “x0a”) and a set of parameters including an output value of an activation function at the base value (e.g., function output value 426, denoted as “f(x0a)” in FIG. 4B) and a slope and/or Taylor series coefficient at the base value (e.g., slope 428, denoted as “slope @ x0a”). The base value can be an input boundary value of the input subrange, a point within the input subrange, etc. The parameters in a bucket can be retrieved based on the address associated with the bucket. For example, function table 406 can receive, from selection module 408 and as part of selection signal 414, an input address. Function table 406 can use the input address to retrieve the bucket and output the parameters as well as the input boundary value stored in the retrieved bucket to arithmetic module 404.
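A minimal software model of these buckets (not the disclosed hardware; the field layout, base values, and sigmoid contents are illustrative assumptions) might look as follows:

    import math

    def S(x):
        return math.exp(x) / (1.0 + math.exp(x))

    def S1(x):
        s = S(x)
        return s * (1 - s)

    # Each bucket maps an address to (base value, f(base), slope at base),
    # mirroring the "x0a", "f(x0a)", and "slope @ x0a" fields of FIG. 4B.
    function_table = {
        addr: (x, S(x), S1(x))
        for addr, x in enumerate([-3.0, -1.5, 0.0, 1.5])  # illustrative bases
    }

    # Retrieval by address, as done via selection signal 414, followed by
    # the Equation 5 extrapolation performed in arithmetic module 404.
    base, f_base, slope = function_table[2]
    x = 0.4
    print(f_base + slope * (x - base))  # extrapolated estimate of S(0.4)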
The number of buckets as well as the base value for each bucket in function table 406 can be determined in an iterative process based on, for example, an upper limit on the errors introduced by the extrapolation operations, an upper limit on the total number of buckets, etc. For example, to begin with, the input range of a function represented by function table 406 can be divided into N input subranges. A mid-point of each input subrange can be selected as a base value to evaluate the function. An average approximation error for an input subrange, defined by an upper input boundary and a lower input boundary, can be computed based on the following equation: $\text{Error} = \frac{\int_{\text{lower input boundary}}^{\text{upper input boundary}} \lvert f(\text{mid\_x}) - f\_\text{app}(x) \rvert \, dx}{\text{upper input boundary} - \text{lower input boundary}}$ (Equation 9) In Equation 9, an average approximation error (“Error”) for an input subrange between an upper input boundary and a lower input boundary of the input subrange can be determined by finding an absolute difference between the function f(x) evaluated at a mid-point of the input subrange (mid_x) and an approximation value f_app(x) of an input value x, and integrating the absolute difference with respect to x. In some examples, the absolute difference can be scaled by a probability of x prior to integration. The approximation value f_app(x) can be determined based on an extrapolation using a set of parameters such as, for example, a slope, a Taylor series expansion, etc., based on Equations 5-8 as described above. The integration result can be divided by the size of the input subrange (represented by the difference between the upper input boundary and the lower input boundary) to obtain the average approximation error for the input subrange. The average approximation error for each input subrange can be computed, and an aggregate error for a given number of buckets/input subranges can be determined by summing the average approximation error for each input subrange. A set of aggregate errors for a set of numbers of buckets can be computed, and a relationship between aggregate errors and the number of buckets can be obtained. The number of buckets can be selected based on the relationship and based on, for example, an upper limit on the aggregate error introduced by the extrapolation operations, an upper limit on the total number of buckets, etc. For example, a minimum number of buckets needed to achieve a target aggregate error can be determined from the relationship. As an illustrative example, in a case where the function is an exponential function, the target aggregate error can be +/−0.5%. In some examples, the base values for each input subrange/bucket that contribute to a minimum aggregate error can also be determined in the iterative process. For example, as described above, the base values can be set as the mid-point of each input subrange (“x_mid”) for computation of an initial aggregate error for a number of input subranges/buckets. For that number of input subranges/buckets, additional iterations can be further performed by moving the base values towards the lower input boundary or towards the upper input boundary based on the pseudocode below:

    x_upper_half = (x_mid + upper_input_boundary) / 2;
    x_lower_half = (x_mid + lower_input_boundary) / 2;
    if (aggregate_error(x_upper_half)) < (aggregate_error(x_lower_half)) then
        x_mid = x_upper_half;
    else
        x_mid = x_lower_half;

In the pseudocode above, for each input subrange, the base value (represented by “x_mid”) can be moved towards the upper input boundary by finding an average between the mid-point and the upper_input_boundary (“x_upper_half”), and a first aggregate error (“aggregate_error(x_upper_half)”) with the base value set at x_upper_half can be determined. A second aggregate error (“aggregate_error(x_lower_half)”) with the base value set at x_lower_half can also be determined. If the first aggregate error is smaller than the second aggregate error, the base value can be set at x_upper_half; otherwise the base value can be set at x_lower_half. The recalculation of the aggregate error can be repeated for each number of buckets to determine a revised relationship between the aggregate error and the number of buckets. The number of buckets can be chosen based on the revised relationship and based on the target aggregate error (e.g., the minimum number of buckets to achieve +/−0.5% error). Such arrangements can achieve a reduction in the number of buckets, especially for functions that exhibit a high rate of change, such as exponential functions.
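A runnable rendering of Equation 9 and the refinement pseudocode above is sketched below; the exponential function, the single subrange, the slope-only approximation, and the Riemann-sum integration are illustrative assumptions, and the integrand is taken as the absolute difference between the exact value f(x) and the approximation f_app(x):

    import math

    def f(x):  # illustrative function; the disclosure's example is exponential
        return math.exp(x)

    def avg_error(base, lo, hi, samples=1000):
        # Equation 9: average absolute approximation error over [lo, hi],
        # approximating the integral with a Riemann sum. The approximation
        # f_app extrapolates from the base value using a single slope.
        slope = (f(base + 1e-6) - f(base - 1e-6)) / 2e-6  # numerical slope
        total = 0.0
        dx = (hi - lo) / samples
        for i in range(samples):
            x = lo + (i + 0.5) * dx
            f_app = f(base) + slope * (x - base)
            total += abs(f(x) - f_app) * dx
        return total / (hi - lo)

    # Base-value refinement: move the base value toward whichever half of
    # the subrange yields the smaller error, per the pseudocode above.
    lo, hi = 0.0, 1.0
    x_mid = (lo + hi) / 2
    x_upper_half = (x_mid + hi) / 2
    x_lower_half = (x_mid + lo) / 2
    if avg_error(x_upper_half, lo, hi) < avg_error(x_lower_half, lo, hi):
        x_mid = x_upper_half
    else:
        x_mid = x_lower_half
    print(x_mid, avg_error(x_mid, lo, hi))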
Referring back to FIG. 4A, in some examples, function table 406 can be programmed to store a mapping table for an activation function. In some other examples, as shown in FIG. 4B, function table 406 can also be programmed to store multiple mapping tables. Each mapping table can occupy a different address space that spans a different set of addresses, and each mapping table can be associated with an activation function and a set of discrete input boundary values. In some examples, the mapping tables can be associated with different activation functions for different neural network layers. In some examples, the mapping tables can also be associated with the same activation function evaluated at different sets of input boundary values for different neural network layers. For example, the different neural network layers may use the same activation function to generate the intermediate outputs, but the weighted sums output by the neural network layers may have different ranges, and each mapping table can map different sets of input boundary values corresponding to different input ranges to output values for the same activation function. In some examples, some of addresses 422 can be designated as base addresses of blocks of buckets. A base address can be used as a reference for indexing (by adding a certain offset) into any bucket of a block of buckets associated with the base address. As will be described in detail below, such arrangements provide efficiency and flexibility in programming selection module 408. Multiple addresses 422 can be designated as base addresses. For example, the address of a first bucket of a block of buckets assigned to an activation function can be designated as a base address associated with that activation function. Moreover, the address of a first bucket of a block of buckets assigned to a certain input subrange of a certain activation function can also be designated as a base address associated with that input subrange of that activation function. Different input subranges for different activation functions may be associated with different base addresses. In the illustrative example of FIG. 4A, address0a and address2a may be base addresses for function A, whereas address0b may be a base address for function B. Selection module 408 may include logic circuits as well as a programmable memory device (e.g., a set of registers, an SRAM device, a DRAM device, non-volatile memory devices such as storage class memory (SCM), etc.). The memory device can store a selection table that contains address information of function table 406. The logic circuits can retrieve, based on function selection 410 and input value 412, address information of function table 406 from the selection table.
Based on the address information, selection module 408 can retrieve a bucket from function table 406 and provide the parameters stored in the bucket to arithmetic module 404 for activation function computation. The address information can be programmed to reflect the most up-to-date bucket assignment in function table 406. FIG. 4C illustrates an example of internal components of selection module 408. As shown in FIG. 4C, selection module 408 may include a selection table 440 as well as logic circuits including an index generator 442, an exponent extractor 444, a mantissa extractor 446, an offset module 448, and an address generator module 460. Selection table 440 may be stored on a programmable memory device and can store a set of buckets/entries including, for example, entry 441. Each bucket in selection table 440 can represent a block of buckets in function table 406 and may store, for example, a base address (e.g., base address 452) of the block. In some examples, each bucket in selection table 440 may also store a block size (e.g., block size 454) of a block of buckets in function table 406. Each bucket in selection table 440 may be associated with an index/address (e.g., index 456), and the base address and block size information stored in a bucket of selection table 440 can be retrieved based on the associated index. Each index in selection table 440 can reflect an activation function as well as the starting value of an input subrange associated with the block of buckets in function table 406. In a case where input value 412 is a floating-point number, the starting value of the input subrange can be represented based on the exponent of input value 412, since the exponent can provide an accurate representation of the magnitude of input value 412 and can be used to distinguish different input values for different input subranges. On the other hand, each bucket within a block of buckets can be identified based on the mantissa of input value 412, to distinguish different input values within the same input subrange. The association between the indices and the buckets in selection table 440 may be fixed, but the content stored in the buckets can be programmed to reflect the most up-to-date bucket assignment in function table 406. Such arrangements enable efficient storage and programming of bucket block address information (of function table 406) in selection table 440. For example, selection table 440 need not store each address of each bucket block of function table 406. Instead, selection table 440 only stores a base address for each bucket block and, optionally, the block size information. The hard wiring of the indices and the buckets also allows programming to be confined to selection table 440, so that the logic circuits that generate the index need not be programmable. All of this allows selection table 440 to be programmed to track the changes in function table 406 in an efficient manner. Index generator 442, together with exponent extractor 444 and offset module 448, can generate an index 457 in selection table 440 based on function selection 410 and input value 412. For example, based on a pre-determined format of input value 412, exponent extractor 444 can extract a subset of bits from input value 412 to form exponent 458. Optionally, offset module 448 can add an offset to the subset of bits to form exponent 458, to ensure that a value of exponent 458 is non-negative.
Index generator 442 can generate index 457 based on function selection 410 and exponent 458 to extract, for example, block size 454 and base address 452 of a block of buckets of function table 406. In addition, mantissa extractor 446 can extract mantissa 462 of input value 412, and use mantissa 462 to select a bucket within a block of buckets of function table 406 associated with base address 452. For example, as shown in FIG. 4C, address generator module 460 may include a most significant bit (MSB) extractor 464 that can extract a number of MSB bits from mantissa 462 based on block size 454. For example, if block size 454 indicates that the block includes 4 buckets, two MSBs can be extracted from mantissa 462. If block size 454 indicates that the block includes 8 buckets, three MSBs can be extracted from mantissa 462. The extracted MSBs can become an offset 466, which can be added to base address 452 to obtain the address of the selected bucket. The address can then be included in selection signal 414. In some examples, block size information can be represented in log format (e.g., 4 buckets is represented by a block size value of 2, 8 buckets is represented by a block size value of 3, etc.), such that the block size information can be directly fed to MSB extractor 464 to extract the corresponding number of MSBs.
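The exponent/mantissa addressing path of FIG. 4C can be sketched as follows for an IEEE 754 single-precision input; the offset handling, table contents, and block size are illustrative assumptions:

    import struct

    def float_bits(x):
        # Reinterpret a float32 as its raw IEEE 754 bit pattern.
        return struct.unpack(">I", struct.pack(">f", x))[0]

    def select_bucket_address(x, selection_table, function_sel):
        bits = float_bits(x)
        exponent = (bits >> 23) & 0xFF   # biased exponent (non-negative)
        mantissa = bits & 0x7FFFFF       # 23-bit mantissa
        # Index the selection table by (function, exponent), as index
        # generator 442 does from function selection 410 and exponent 458.
        base_address, block_size_log2 = selection_table[(function_sel, exponent)]
        # MSB extractor 464: a block of 2**k buckets needs the k MSBs of
        # the mantissa as the offset into the block.
        offset = mantissa >> (23 - block_size_log2)
        return base_address + offset

    # Illustrative selection table: inputs with biased exponent 126
    # (i.e., in [0.5, 1.0)) map to a block of 4 buckets at address 8.
    selection_table = {("funcA", 126): (8, 2)}
    print(select_bucket_address(0.75, selection_table, "funcA"))

Representing the block size in log format, as described above, is what lets the extractor obtain the offset with a single shift.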
Referring back to FIG. 4A, the buckets of function table 406, as well as the buckets of selection table 440, can be programmed based on programming information 419 to implement different activation functions. Programming information 419 may be received from computation controller 330, which in turn can generate programming information 419 based on information received from a software application that interacts with apparatus 300 to perform neural network processing (e.g., software application 102 of FIG. 1). Programming information 419 can update the mathematical function parameters and/or input boundary values stored in each bucket of function table 406 to, for example, update the activation function being implemented by function table 406, to partition the buckets to change from storing a single activation function to multiple activation functions, etc. Corresponding programming can also be provided to update the content of the buckets of selection table 440 to reflect the changes in function table 406. As an example, in a case where function table 406 is to store a single activation function for a pre-determined input range, programming information 419 can program function table 406 such that each bucket stores an input boundary value within the pre-determined input range, a value of the activation function at that input boundary value (which can be pre-computed by software or provided from other sources), and the slope and/or Taylor series expansion of the activation function at the input boundary value. Programming information 419 can include, for example, the input boundary value and function parameters to be stored in each bucket and the address of the bucket, which can be provided to a controller that manages the memory device of function table 406 (not shown in the figures), and the controller can program the memory device based on programming information 419. In addition, programming information 419 also includes corresponding programming information for selection module 408 to reflect the programming of function table 406. For example, in a case where function table 406 stores a single activation function associated with a particular value of function selection 410, only buckets associated with indices representing the particular value of function selection 410 are used to store the block size and base address information. In addition, the block size information can also be programmed to change the number of MSBs to be extracted from mantissa 462 for bucket address generation. As another example, programming information 419 can also partition the buckets of function table 406 for multiple activation functions. For example, different consecutive blocks of buckets can be assigned to different activation functions, which can be reflected in the mapping among the addresses, the input boundary values, and the parameters in programming information 419. Programming information 419 also includes corresponding programming information for selection module 408 to support multiple activation functions in function table 406. For example, selection table 440 can be programmed to store block sizes and base addresses for different functions at different buckets, with the indices of the buckets representing different functions and different input subranges. The block size information can also be programmed to change the number of MSBs to be extracted from mantissa 462 for bucket address generation. As described above, the choice between storing a single activation function and storing multiple activation functions in function table 406 can be based on a trade-off between speed and accuracy. For example, to improve accuracy, a single activation function can be stored in function table 406, where a larger number of buckets of function table 406 can be used to store more fine-grained input boundary values and the associated function parameters of the single activation function. By quantizing an input value using the more fine-grained input boundary values, quantization error can be reduced, and the accuracy of estimating the activation function for the input value can be improved. In a case where accuracy can be reduced, function table 406 can be partitioned among a plurality of activation functions, with each partition storing a reduced number of coarse-grained input boundary values and the associated function parameters of an activation function. Storing a plurality of activation functions in the mapping table can improve the processing speed by, for example, avoiding or reducing the delays in loading a new activation function for different neural network layers, which can be leveraged for certain applications that have relaxed requirements for accuracy. As an illustrative example, an image recognition application may have relaxed requirements for accuracy but tight requirements for speed. The image recognition application can control computation controller 330 to program activation engine 326 to store multiple activation functions to reduce the time spent in transitioning from one neural network layer operation to another neural network layer operation. As another illustrative example, a self-driving vehicle application may have tight requirements for accuracy (e.g., due to safety concerns) and can control computation controller 330 to program activation engine 326 to store a single activation function for one neural network operation, to maximize the accuracy of activation function processing.
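The two configurations described above might be modeled as follows (bucket counts, input ranges, and the choice of functions are illustrative assumptions):

    import math

    def make_buckets(fn, lo, hi, n):
        # Program n buckets with evenly spaced base values, each storing
        # (base value, fn(base), numerical slope at base).
        step = (hi - lo) / (n - 1)
        def slope(x):
            return (fn(x + 1e-6) - fn(x - 1e-6)) / 2e-6
        return [(lo + i * step, fn(lo + i * step), slope(lo + i * step))
                for i in range(n)]

    TABLE_SIZE = 64
    sigmoid = lambda x: math.exp(x) / (1 + math.exp(x))

    # High-accuracy configuration: all buckets hold one fine-grained
    # activation function.
    single = {"sigmoid": make_buckets(sigmoid, -4, 4, TABLE_SIZE)}

    # Partitioned configuration: two coarser-grained functions share the
    # table, avoiding a reload between neural network layers.
    partitioned = {
        "sigmoid": make_buckets(sigmoid, -4, 4, TABLE_SIZE // 2),
        "tanh": make_buckets(math.tanh, -4, 4, TABLE_SIZE // 2),
    }
    print(len(single["sigmoid"]), [len(v) for v in partitioned.values()])

The single-function configuration spends the whole table on one fine-grained function; the partitioned configuration halves the granularity of each function in exchange for switching between layers without reprogramming.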
In addition, various techniques can be employed for the mapping between input boundary values and the mathematical function parameters at function table 406 to further improve the accuracy of activation function processing. For example, referring to FIG. 4D, in a case where the activation function is symmetrical or anti-symmetrical over an input range 470, function table 406 can be programmed to store only the input boundary values and function parameters for half of the input range (e.g., input subrange 472). Such arrangements allow the entries of function table 406 allocated to that activation function to be distributed within half of the input range, which can result in more fine-grained input boundary values (and lower quantization errors) than the case where the entries are distributed within the full input range, and accuracy can be improved. In addition, index generator 442 can be configured to ignore the sign of input value 412 and map input data having the same magnitude but opposite signs to the same address in function table 406. In a case where the activation function is anti-symmetric as shown in FIG. 4D, index generator 442 can forward the sign information to arithmetic module 404, which can then add the sign information to data output 420. As another example, function table 406 can also be programmed to implement non-uniform quantization, where the step size between adjacent input boundary values differs among different input subranges. The distribution of the input boundary values can be determined based on, for example, a degree of linearity as well as a degree of change of the activation function for a particular input subrange. A degree of linearity can reflect whether the slope of the activation function is constant or changing within that input subrange. A high degree of linearity means the slope of the activation function remains constant, whereas a low degree of linearity means the slope of the activation function changes. Referring to FIG. 4E, to improve the accuracy of extrapolation based on slope and/or Taylor series coefficients, input boundary values can be more sparsely distributed for input subranges where the activation function is relatively linear (e.g., input subrange 474) and where the activation function experiences very small change with respect to input (e.g., input subranges 476 and 478). On the other hand, for input subrange 480, the activation function is relatively non-linear, and the input boundary values can be more densely distributed within input subrange 480 to improve the accuracy of extrapolation and the resultant activation processing result. In some examples, the non-uniform distribution of the input boundary values of FIG. 4E can be implemented in function table 406 and in selection module 408. For example, the mapping between the addresses and the input boundary values can be configured such that a certain input subrange (e.g., input subrange 480) is mapped to a larger number of addresses (and buckets) than another input subrange (e.g., input subrange 476) having the same or similar numeric range in function table 406. The base addresses stored in selection table 440 of selection module 408 can also be set to reflect the non-uniform mapping between addresses and input boundary values in function table 406. Both function table 406 and selection table 440 can be programmed to implement the non-uniform quantization scheme based on programming information 419.
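As a sketch of the symmetry and non-uniform quantization techniques above, the following Python snippet stores only the non-negative half of an anti-symmetric function (tanh here), with hand-picked boundary values that are dense where the function is non-linear and sparse where it saturates; the sign is folded out and reapplied, as index generator 442 and arithmetic module 404 would do in hardware. The boundary values and table layout are illustrative assumptions.

```python
import math

# Hypothetical non-uniform input boundary values for tanh on [0, 8]: dense near
# zero where curvature is high, sparse where the function saturates.
BOUNDARIES = [0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 5.0, 8.0]
TABLE = [(x0, math.tanh(x0), 1.0 - math.tanh(x0) ** 2) for x0 in BOUNDARIES]

def tanh_estimate(x: float) -> float:
    """Fold the sign (tanh is anti-symmetric), select the nearest lower
    boundary, and extrapolate with the stored slope."""
    sign, mag = (-1.0, -x) if x < 0 else (1.0, x)
    bucket = max(i for i, x0 in enumerate(BOUNDARIES) if x0 <= mag)
    x0, fx0, slope = TABLE[bucket]
    return sign * (fx0 + slope * (mag - x0))

print(tanh_estimate(-0.6), math.tanh(-0.6))  # estimate vs. exact
```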
FIG. 4F and FIG. 4G illustrate examples of internal components of arithmetic module 404. As shown in FIG. 4F, arithmetic module 404 may include a subtractor 490, a multiplier 492, and an adder 494. Arithmetic module 404 may receive an input boundary value and function parameters from function table 406 based on selection signal 414 from selection module 408, as described above. For example, as shown in FIG. 4F, selection signal 414 may include address 422 to select bucket 430, and function table 406 can output base value 424 (denoted as “x0a” in FIG. 4F), function output value 426 (denoted as “f(x0a)”), and slope 428 (denoted as “slope @ f(x0a)”) to arithmetic module 404. Arithmetic module 404 can perform an extrapolation operation based on base value 424, function output value 426, and slope 428 according to, for example, Equation 5 as described above. For example, subtractor 490 can determine a difference between input value 412 (with value “x”) and base value 424 as x−x0a. Multiplier 492 can multiply the difference (x−x0a) with slope 428 to estimate the change of the activation function between base value 424 and input value 412. Adder 494 can add the change to function output value 426 to generate output data 420. FIG. 4G illustrates additional internal components of arithmetic module 404 that can support the extrapolation operations using Taylor series expansion coefficients based on Equation 8. As shown in FIG. 4G, in addition to subtractor 490, multiplier 492, and adder 494, arithmetic module 404 may further include a register 496 and a multiplexor circuit 498 to perform arithmetic operations according to Equation 8 in multiple iterations. For example, in a first iteration, multiplexor circuit 498 can be configured to pass parameter I (Taylor series coefficient S3(x0a)/6) to multiplier 492, which also receives the difference Δ (the difference between input x and input boundary value x0a) from subtractor 490. Multiplier 492 can generate a first multiplication product I×Δ. Adder 494 can add the parameter J (Taylor series coefficient S2(x0a)/2) to the first multiplication product to obtain a first sum I×Δ+J, and the first sum can be stored in register 496. In the second iteration, the first sum stored in register 496 can be forwarded, by multiplexor circuit 498, to multiplier 492, which can multiply the first sum with the difference Δ to generate a second multiplication product (I×Δ+J)×Δ. Adder 494 can add the parameter K (Taylor series coefficient S1(x0a)) to the second multiplication product to generate a second sum (I×Δ+J)×Δ+K, and the second sum can be stored in register 496 (replacing the first sum). In the third iteration, the second sum stored in register 496 can be forwarded, by multiplexor circuit 498, to multiplier 492, which can multiply the second sum with the difference Δ to generate a third multiplication product ((I×Δ+J)×Δ+K)×Δ. Adder 494 can forward the third multiplication product as output data 420.
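The three multiply-accumulate iterations of FIG. 4G amount to Horner's rule over the stored Taylor coefficients. A minimal Python sketch follows; since Equation 8 itself is not reproduced in this excerpt, the final addition of f(x0a) to complete a third-order Taylor estimate is an assumption, noted in the code.

```python
def taylor_estimate(x: float, x0: float, fx0: float,
                    s1: float, s2: float, s3: float) -> float:
    """Horner-style evaluation mirroring the FIG. 4G datapath, where I = s3/6,
    J = s2/2, and K = s1 are the stored coefficients."""
    delta = x - x0                  # produced once by the subtractor
    acc = s3 / 6.0                  # parameter I via the multiplexor
    acc = acc * delta + s2 / 2.0    # iteration 1: I*d + J -> register
    acc = acc * delta + s1          # iteration 2: (I*d + J)*d + K -> register
    acc = acc * delta               # iteration 3: ((I*d + J)*d + K)*d
    # Assumption: f(x0) is added to form the final estimate, consistent with a
    # third-order Taylor expansion about x0.
    return acc + fx0

# For exp(x) about x0 = 0 (all derivatives equal 1), the estimate at x = 0.1
# is 1.0 + 0.1 + 0.005 + 0.000167 ~= 1.10517, close to math.exp(0.1).
print(taylor_estimate(0.1, 0.0, 1.0, 1.0, 1.0, 1.0))
```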
FIG. 5 illustrates an example flow diagram of a process 500 for performing hardware-based mathematical function processing. Process 500 can be performed by, for example, various internal components of activation engine 326 as described in FIG. 4A-FIG. 4G including, for example, arithmetic module 404, function table 406, and selection module 408, and may include the techniques described above. At operation 502, selection module 408 receives an input value. The input value can be received from, for example, computing engine 324, and can comprise a weighted sum of inputs. In some examples, the input value can be a floating point number. In some examples, an exponent can be extracted from the floating point number. At operation 504, selection module 408 generates a selection signal based on the input value. The selection signal may include an input address of the mapping table (e.g., function table 406). The generation of the selection signal including the address may include selecting, from a selection table (e.g., selection table 440), a base address and a block size of a block of buckets of function table 406. The selection can be based on an index generated from the exponent of the input value. An offset to the base address can also be determined based on a mantissa of the input value and the block size. The input address of function table 406 can be determined by adding the offset to the base address, and the input address can be included in the selection signal. At operation 506, selection module 408 can provide the selection signal to a mapping table (e.g., function table 406) to select a bucket that contains a base value, an output value of a mathematical function at the base value, and a set of parameters of the mathematical function mapped to the base value. Function table 406 may include a plurality of buckets, with each bucket being associated with an address and storing a base value, an output value of a mathematical function at the base value, and a set of parameters of the mathematical function mapped to the base value. The base values can be input boundary values that span an input range of the mathematical function (e.g., an activation function). The distribution of the input boundary values can be based on the techniques described above including, for example, FIG. 4D and FIG. 4E. The set of parameters may include, for example, a slope of the mathematical function at the base value, a Taylor series expansion of the mathematical function evaluated at the base value, etc. At operation 508, arithmetic circuit 404 can be operated to perform an extrapolation operation based on the input value, the base value, the output value of the mathematical function at the base value, and the set of parameters, to generate an estimated output value of the mathematical function at the input value based on Equations 6-8 as described above. Arithmetic circuit 404 may include a subtractor (e.g., subtractor 490), a multiplier (e.g., multiplier 492), and an adder (e.g., adder 494) to perform the extrapolation operation. For example, subtractor 490 can determine a difference between the input value and the base value. Multiplier 492 can multiply the difference with a slope included in the set of parameters to estimate the change of the activation function between the base value and the input value. Adder 494 can add the change to the output value of the mathematical function at the base value to generate the estimated output value of the mathematical function at the input value. In some examples, arithmetic circuit 404 can perform multiple iterations of multiplications and additions, as described in FIG. 4G, to generate the estimated output value.
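Tying operations 502-508 together, here is a compact end-to-end sketch for a single activation function (sigmoid) with uniformly spaced base values; the simple quantization stands in for the selection-table hardware, and the step size and input range are illustrative assumptions.

```python
import math

STEP, LO, HI = 0.25, -6.0, 6.0

def sigmoid(v: float) -> float:
    return 1.0 / (1.0 + math.exp(-v))

# Each bucket stores (base value x0, f(x0), slope f'(x0)).
BASES = [LO + i * STEP for i in range(int((HI - LO) / STEP) + 1)]
BUCKETS = [(x0, sigmoid(x0), sigmoid(x0) * (1.0 - sigmoid(x0))) for x0 in BASES]

def process_500(x: float) -> float:
    i = min(max(int((x - LO) / STEP), 0), len(BUCKETS) - 1)  # operation 504
    x0, fx0, slope = BUCKETS[i]                              # operation 506
    return fx0 + slope * (x - x0)                            # operation 508

print(process_500(0.3), sigmoid(0.3))  # estimate vs. exact
```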
FIG. 6 illustrates an example of a computing device 600. Functionality and/or several components of the computing device 600 may be used without limitation with other embodiments disclosed elsewhere in this disclosure. A computing device 600 may perform computations to facilitate processing of a task. As an illustrative example, computing device 600 can be part of a server in a multi-tenant compute service system. Various hardware and software resources of computing device 600 (e.g., the hardware and software resources associated with provision of an image recognition service) can be allocated to a client upon request. In one example, the computing device 600 may include processing logic 602, a bus interface module 608, memory 610, and a network interface module 612. These modules may be hardware modules, software modules, or a combination of hardware and software. In certain instances, modules may be used interchangeably with components or engines, without deviating from the scope of the disclosure. The computing device 600 may include additional modules, not illustrated here. In some implementations, the computing device 600 may include fewer modules. In some implementations, one or more of the modules may be combined into one module. One or more of the modules may be in communication with each other over a communication channel 614. The communication channel 614 may include one or more busses, meshes, matrices, fabrics, a combination of these communication channels, or some other suitable communication channel. The processing logic 602 may include one or more integrated circuits, which may include application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), systems-on-chip (SoCs), network processing units (NPUs), processors configured to execute instructions, or any other circuitry configured to perform logical, arithmetic, and floating point operations. Examples of processors that may be included in the processing logic 602 may include processors developed by ARM®, MIPS®, AMD®, Intel®, Qualcomm®, and the like. In certain implementations, processors may include multiple processing cores, wherein each processing core may be configured to execute instructions independently of the other processing cores. Furthermore, in certain implementations, each processor or processing core may implement multiple processing threads executing instructions on the same processor or processing core, while maintaining logical separation between the multiple processing threads. Such processing threads executing on the processor or processing core may be exposed to software as separate logical processors or processing cores. In some implementations, multiple processors, processing cores, or processing threads executing on the same core may share certain resources, such as, for example, busses, level 1 (L1) caches, and/or level 2 (L2) caches. The instructions executed by the processing logic 602 may be stored on a computer-readable storage medium, for example, in the form of a computer program. The computer-readable storage medium may be non-transitory. In some cases, the computer-readable medium may be part of the memory 610. Processing logic 602 may also include hardware circuitry for performing artificial neural network computation including, for example, neural network processor(s) 306, etc. Access to processing logic 602 can be granted to a client to provide the service requested by the client. For example, computing device 600 may host a virtual machine, on which an image recognition software application can be executed. The image recognition software application, upon execution, may access processing logic 602 to predict, for example, an object included in an image.
As another example, access to processing logic 602 can also be granted as part of a bare-metal instance, in which an image recognition software application executing on a client device (e.g., a remote computer, a smart phone, etc.) can directly access processing logic 602 to perform the recognition of an image. The memory 610 may include either volatile or non-volatile, or both volatile and non-volatile types of memory. The memory 610 may, for example, include random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and/or some other suitable storage media. In some cases, some or all of the memory 610 may be internal to the computing device 600, while in other cases some or all of the memory may be external to the computing device 600. The memory 610 may store an operating system comprising executable instructions that, when executed by the processing logic 602, provide the execution environment for executing instructions providing networking functionality for the computing device 600. The memory 610 may also store, for example, software applications for performing artificial neural network computation. For example, memory 610 may store software routines related to the computations of the equations above. In a case where processing logic 602 is in the form of an FPGA, memory 610 may store netlist data representing various logic circuit components of processing logic 602. In some examples, memory 610 can include memory 312. The bus interface module 608 may enable communication with external entities, such as a host device and/or other components in a computing system, over an external communication medium. The bus interface module 608 may include a physical interface for connecting to a cable, socket, port, or other connection to the external communication medium. The bus interface module 608 may further include hardware and/or software to manage incoming and outgoing transactions. The bus interface module 608 may implement a local bus protocol, such as Peripheral Component Interconnect (PCI) based protocols, Non-Volatile Memory Express (NVMe), Advanced Host Controller Interface (AHCI), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Serial AT Attachment (SATA), Parallel ATA (PATA), some other standard bus protocol, or a proprietary bus protocol. The bus interface module 608 may include the physical layer for any of these bus protocols, including a connector, power management, and error handling, among other things. In some implementations, the computing device 600 may include multiple bus interface modules for communicating with multiple external entities. These multiple bus interface modules may implement the same local bus protocol, different local bus protocols, or a combination of the same and different bus protocols. The network interface module 612 may include hardware and/or software for communicating with a network. This network interface module 612 may, for example, include physical connectors or physical ports for wired connection to a network, and/or antennas for wireless communication to a network. The network interface module 612 may further include hardware and/or software configured to implement a network protocol stack.
The network interface module 612 may communicate with the network using a network protocol, such as, for example, TCP/IP, Infiniband, RoCE, Institute of Electrical and Electronics Engineers (IEEE) 802.11 wireless protocols, User Datagram Protocol (UDP), Asynchronous Transfer Mode (ATM), token ring, frame relay, High Level Data Link Control (HDLC), Fiber Distributed Data Interface (FDDI), and/or Point-to-Point Protocol (PPP), among others. In some implementations, the computing device 600 may include multiple network interface modules, each configured to communicate with a different network. For example, in these implementations, the computing device 600 may include a network interface module for communicating with a wired Ethernet network, a wireless 802.11 network, a cellular network, an Infiniband network, etc. In some embodiments, computing device 600 may receive a set of parameters, such as the aforementioned weight vectors for generation of forget gate factor, input factor, output factor, etc., from a server through network interface module 612. The various components and modules of the computing device 600, described above, may be implemented as discrete components, as a System on a Chip (SoC), as an ASIC, as an NPU, as an FPGA, or any combination thereof. In some embodiments, the SoC or other components may be communicatively coupled to another computing system to provide various services such as traffic monitoring, traffic shaping, computing, etc. In some embodiments of the technology, the SoC or other components may include multiple subsystems as disclosed herein. The modules described herein may be software modules, hardware modules, or a suitable combination thereof. If the modules are software modules, the modules can be embodied on a non-transitory computer readable medium and processed by a processor in any of the computer systems described herein. It should be noted that the described processes and architectures can be performed either in real-time or in an asynchronous mode prior to any user interaction. The modules may be configured in the manner suggested in FIG. 6, and/or the functions described herein can be provided by one or more modules that exist as separate modules, and/or module functions described herein can be spread over multiple modules. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed; on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the disclosure, as defined in the appended claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context.
The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Various embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the disclosure. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the disclosure to be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context. 16987830 amazon technologies, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Jan 12th, 2015 12:00AM https://www.uspto.gov?id=US11314444-20220426 Environment-sensitive distributed data management A storage controller is implemented for controlling a storage system. The storage controller may be implemented using a distributed computer system and may include components for servicing client data requests based on the characteristics of the distributed computer system, the client, or the data requests. The storage controller is scalable independently of the storage system it controls. All components of the storage controller, as well as the client, may be virtual or hardware-based instances of a distributed computer system. 11314444 1. A computer-implemented method for accessing data storage devices, comprising: receiving, from a client entity, a request for access to data of a storage system; receiving, from the client entity, usage characteristics associated with the data, the usage characteristics including data compressibility of the data, data fragmentation of the data, size of the data range for the data, and intended frequency of data access for the data by the client entity; identifying a demand pattern of the data of the storage system and access patterns associated with use of the data by the client entity based at least in part on the usage characteristics, the demand pattern indicating that the data will be accessed at a higher rate than other data of the storage system; determining, using at least the demand pattern, that at least a subset of the stored data should be stored at a different location associated with the storage system; identifying a particular location as the different location associated with the storage system based at least in part on accessing a data map that maintains locations of the data in the storage system; causing the subset of the stored data to be moved from a first device associated with the storage system to a second device associated with the storage system based at least in part on the identification of the different location and on performance characteristics of the different location corresponding to the demand pattern of the stored data, the different location located on the second device, the second device associated with the storage system having a higher level of input/output operations per time period than the first device associated with the storage system; updating the data map to reflect that the subset of the stored data is stored at the different location; identifying that at least a portion of the data stored on the storage system is inaccessible by the client entity, wherein being inaccessible includes a failed storage node of the storage system or an abnormality of the storage system; and updating a copy of the data map maintained by another entity in response to identifying that at least the portion of the data stored in the storage system is inaccessible by the client entity, wherein the copy maintained by the another entity is maintained separate from the data map that maintains locations of the data in the storage system to enable subsequent access to the data stored in the storage system by accessing the copy maintained by the another entity in response to an indication that the data map is inaccessible. 2. The computer-implemented method of claim 1, further comprising identifying that the client entity's demand for at least the portion of the data is greater than a predetermined level based at least in part on the request and the usage characteristics. 3. 
The computer-implemented method of claim 1, wherein the data is located on a data volume that is accessible by at least the client entity and that comprises a plurality of storage nodes in the storage system. 4. The computer-implemented method of claim 1, further comprising identifying one or more operational characteristics of the storage system, wherein identifying the particular location as the different location associated with the storage system is further based at least in part on the one or more operational characteristics of the storage system. 5. A computer system for accessing data storage devices, comprising: one or more processors; and memory, including instructions executable by the one or more processors to cause the computer system to at least: receive, from a client entity, a request for access to data of a storage system; receive, from the client entity, usage characteristics associated with the data, the usage characteristics including data compressibility of the data, data fragmentation of the data, size of the data range for the data, and intended frequency of data access for the data by the client entity; identify a demand pattern of the data on the storage system and access patterns associated with use of the data by the client entity based at least in part on the usage characteristics; identify, by at least analyzing the demand pattern, that the data should be associated with a different location within the storage system, the demand pattern indicating that the data to be associated with the different location within the storage system will be accessed at a higher rate than other data of the storage system; request authorization from an authorizing entity to reassociate the data based at least in part on the authorizing entity accessing a data map that maintains locations of the data in the storage system; reassociate the data with the different location by moving from a first device associated with the storage system to a second device associated with the storage system based at least in part on the request for authorization from the authorizing entity and on performance characteristics of the different location corresponding to the demand pattern and the access patterns of the data, the different location located on the second device, the second device associated with the storage system having a higher level of input/output operations per time period than the first device associated with the storage system; identify that at least a portion of the data of the storage system is inaccessible by the client entity, wherein being inaccessible includes a failed storage node of the storage system or an abnormality of the storage system; and update a copy of the data map maintained by another entity in response to identifying that at least the portion of the data of the storage system is inaccessible by the client entity, wherein the copy maintained by the another entity is maintained separate from the data map that maintains locations of the data in the storage system to enable subsequent access to the data stored in the storage system by accessing the copy maintained by the another entity in response to an indication that the data map is inaccessible. 6. The computer system of claim 5, wherein the demand pattern and the access patterns for the data on the storage system includes at least one of data write requests or data read requests. 7. 
The computer system of claim 5, wherein the demand pattern and the access patterns of the data on the storage system includes data access history of at least the client entity. 8. The computer system of claim 5, wherein the executable instructions further cause the computer system to: determine that the client entity is unable to access a subset of the data; and update the copy of the data map maintained by the another entity in response to determining that the client entity is unable to access the subset of the data. 9. The computer system of claim 5, wherein the executable instructions further cause the computer system to: determine, by at least analyzing performance of at least a portion of the storage system, at least one characteristic of the storage system that identifies an operation of the storage system, wherein identifying that the data should be associated with the different location is based at least in part on the at least one characteristic of the storage system. 10. The computer system of claim 9, wherein the at least one characteristic of the storage system that identifies an operation of the storage system includes at least availability, to the client entity, of a plurality of data locations on the storage system. 11. One or more non-transitory computer-readable storage media having collectively stored thereon executable instructions that, when executed by one or more processors of a computing resource provider's computer system, cause the computer system to at least perform operations comprising: receiving, from a client instance of the computer system, a request for access to data of a storage system; receiving, from the client instance of the computer system, usage characteristics associated with the data, the usage characteristics including data compressibility of the data, data fragmentation of the data, size of the data range for the data, and intended frequency of data access for the data by a client entity; identifying a demand pattern of the data of the storage system and access patterns associated with use of the data by the client instance based at least in part on the usage characteristics, the access patterns indicating that the data will be accessed at a higher rate than other data of the storage system; determining that the data that is stored upon the storage system should be reassociated from a first location on the storage system to a second location on the storage system based at least in part on the demand pattern and the access patterns of the data; requesting an authorization from an authorizing entity that stores and accesses data mappings to reassociate the data in the storage system, the data mappings maintaining locations of the data within the storage system; reassociating the data from the first location to the second location by moving the data from the first location on the storage system to the second location on the storage system based at least in part on the requested authorization and on performance characteristics of the second location corresponding to the demand pattern and the access patterns of the data, the second location on the storage system having a higher level of input/output operations per time period than the first location on the storage system; identifying that at least a portion of the data of the storage system is inaccessible by the client instance, wherein being inaccessible includes a failed storage node of the storage system or an abnormality of the storage system; and updating a copy of at least one of the data 
mappings maintained by another entity in response to identifying that at least the portion of the data of the storage system is inaccessible by the client instance, wherein the copy maintained by the another entity is maintained separate from the data mappings that maintains locations of the data in the storage system to enable subsequent access to the data stored in the storage system by accessing the copy maintained by the another entity in response to an indication that the data mappings are inaccessible. 12. The non-transitory computer-readable storage media of claim 11, wherein the demand pattern and the access patterns of the data by the client instance includes at least one of a latency of access to the data by the client instance, availability of the data to the client instance, or a level of data throughput. 13. The non-transitory computer-readable storage media of claim 11, wherein the instructions further cause the computer system to determine whether the data should be reassociated based at least in part on a business-related requirement. 14. The non-transitory computer-readable storage media of claim 11, wherein the instructions are executed by at least the client instance. 15. The non-transitory computer-readable storage media of claim 11, wherein the reassociated data is moved from a first device within the first location to a second device within the second location. 16. The non-transitory computer-readable storage media of claim 11, wherein the data is reassociated by updating the copy of at least one of the data mappings on the client instance. 17. The non-transitory computer-readable storage media of claim 11, wherein the authorization is an acknowledgment by the authorizing entity. 18. The non-transitory computer-readable storage media of claim 11, wherein the client instance is implemented as a virtual machine instance. 18 CROSS-REFERENCE TO RELATED APPLICATIONS This application is a continuation of U.S. patent application Ser. No. 13/434,551, filed on Mar. 29, 2012, entitled “ENVIRONMENT-SENSITIVE DISTRIBUTED DATA MANAGEMENT”, and issued as U.S. Pat. No. 8,935,203 on Jan. 13, 2015, which is related to and incorporates by reference for all purposes the full disclosures of co-pending U.S. patent application Ser. No. 13/434,666, filed on Mar. 29, 2012, entitled “DISTRIBUTED DATA STORAGE CONTROLLER”, co-pending U.S. patent application Ser. No. 13/434,638, filed on Mar. 29, 2012, entitled “DATA STORAGE MAPPING AND MANAGEMENT”, and co-pending U.S. patent application Ser. No. 13/434,728, filed on Mar. 29, 2012, entitled “INTELLIGENT DATA INTEGRATION”, the contents of all of which are incorporated herein by reference in their entireties. BACKGROUND As the need and demand for higher capacity, higher performance enterprise data storage solutions have increased, the complexity of such storage configurations and topologies has also increased. Accordingly, the computing resources required to track or map the location of data can be substantial, and enterprises have historically invested in often expensive dedicated resources to handle such tracking and mapping functions when requisitioning high-performance, highly available storage solutions. The addition or maintenance of such additional computing resources can often have a significant impact on an organization, possibly being disruptive and/or decreasing productivity. Additionally, such solutions typically do not automatically scale their capabilities up and down based on demand, additional storage capacity, and/or the like.
BRIEF DESCRIPTION OF THE DRAWINGS Various embodiments in accordance with the present disclosure will be described with reference to the drawings, in which: FIG. 1 illustrates an example of an environment that can be used in accordance with at least one embodiment; FIG. 2 illustrates an example of a distributed program execution service that can be used to implement aspects of at least one embodiment; FIG. 3 illustrates an example of an environment that can be used in accordance with at least one embodiment; FIG. 4 illustrates an example of an environment that can be used in accordance with at least one embodiment; FIG. 5 illustrates an example configuration of a logical block addressing map for data, in accordance with at least one embodiment; FIG. 6 illustrates an example process for remapping data in a distributed system, in accordance with at least one embodiment; FIG. 7 illustrates an example process for merging data storage units in a distributed system, in accordance with an embodiment; FIG. 8 illustrates an example process for optimizing data access in a distributed system, in accordance with an embodiment; FIG. 9 illustrates an example client-initiated process for optimizing data access in a distributed system, in accordance with an embodiment; and FIG. 10 illustrates an environment in which various embodiments can be implemented. DETAILED DESCRIPTION Systems and methods in accordance with various embodiments of the present disclosure may overcome one or more of the aforementioned and other deficiencies experienced in conventional approaches to providing access to data in an electronic environment. In particular, various embodiments provide data storage, data access, and related services to client entities via a computing resource provider that provides one or more computing resources through computing resource services, such as Web services. For example, a client or client entity may access, through such a computing resource service, data storage such that access mechanisms are implemented and provided to the client entity utilizing the computing resources. Computing resource services may include one or more computing resources accessible across a network through an application programming interface (API), user interface (UI), or other interface, where the one or more computing resources are scalable and expandable as needed for the client, the number of storage nodes, and the like. In an embodiment, the client itself comprises one or more computing resources of the computing resource provider. In some embodiments, a data mapping engine is implemented by a client entity to access data residing on one or more storage servers across one or more networks. In an embodiment, the data mapping engine implements a logical block addressing (LBA) map, which acts as a translation layer between client requests for specific blocks of data and the retrieval of that data from the specific data storage servers or volumes upon which it resides. Such requests may include both client write requests and client read requests.
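A minimal Python sketch of such an LBA map acting as a translation layer follows; the extent-based structure and node naming are illustrative assumptions rather than the embodiment's actual encoding.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    """One hypothetical LBA map entry: a run of logical blocks placed on a
    storage node at some physical offset."""
    logical_start: int
    length: int
    node: str
    physical_start: int

class LBAMap:
    def __init__(self, extents: list[Extent]):
        self.extents = sorted(extents, key=lambda e: e.logical_start)

    def resolve(self, lba: int) -> tuple[str, int]:
        """Translate a client block address into (storage node, physical block)."""
        for e in self.extents:
            if e.logical_start <= lba < e.logical_start + e.length:
                return e.node, e.physical_start + (lba - e.logical_start)
        raise KeyError(f"LBA {lba} is unmapped")

lba_map = LBAMap([Extent(0, 1024, "node-a", 0), Extent(1024, 1024, "node-b", 0)])
print(lba_map.resolve(1500))  # -> ('node-b', 476)
```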
In some embodiments, the data mapping engine may rely upon a placement engine for determining an appropriate storage server or servers from which to read or write data relating to incoming data requests from the client. Such a placement engine may, in some embodiments, have general or specific information about the state of the storage servers, the nature of the data requests, the nature of the data itself, the nature of the client, or any other information about the distributed system that may improve the placement engine's ability to determine an appropriate storage server or servers to service a client's data request. In some embodiments, the data mapping engine may work in conjunction with a separate map authority. The map authority retains a copy of the LBA map and may, in some embodiments, serve as a backup of the LBA map actively being used by the client to access and administer underlying data. In some embodiments, the map authority retains the active copy of the LBA map. Any combination of the data mapping engine, the map authority, and the placement engine may be implemented within the client itself. In some embodiments, the functionality of each of the data mapping engine, the map authority, and the placement engine may overlap with or duplicate functionality of another. As noted above, in some embodiments, the map authority updates the data mapping engine implemented within the client with an updated LBA map, and in other embodiments, the reverse occurs. Such updates to the map may occur for both reactive and proactive reasons. For example, the map authority may update the LBA map in reaction to notification or information that a dropped network connection, failed storage node, or other system abnormality has rendered some or all of the mapped data inaccessible. As another example, the map authority may preemptively update the LBA map if it detects or receives notification that the current data mapping is performing inefficiently. In some embodiments, the updating of the LBA map is coincident with a command from the map authority to reallocate, move, or otherwise alter the configuration of, or data assigned to, the storage nodes. In some embodiments, some entity other than the map authority, including the storage nodes themselves, notifies the map authority of an effected or presently occurring configuration change to the location of data among the storage nodes, whereupon the map authority updates the LBA map with the new configuration. In some embodiments, a placement engine informs the data mapping engine as to an optimal or otherwise appropriate location among the storage nodes for given data. The placement engine, in some embodiments, has broader information about the current, past, and predicted future status of the distributed system, including, for example, individual server node capabilities, network topology, network performance, client demand patterns, the nature of the data to be written or read, geographic location of the system's individual components, and the like. In some embodiments, the placement engine uses the information at its disposal to determine an optimal mapping configuration for a given piece of data, and in some embodiments places the data within the determined mapping configuration in accordance with a requestor's intended usage of that data. As may be contemplated, and as previously alluded to, such functionality may in some embodiments be integrated into another entity such as the map authority or the data mapping engine, or expressed as a standalone module with which other functional entities may interface.
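As a sketch of how such a placement engine might weigh its inputs, the toy scoring function below prefers low latency, media suited to the access pattern, and free capacity; every field and weight is an illustrative assumption.

```python
def score_node(node: dict, request: dict) -> float:
    """Toy placement score: prefer low latency and media matching the access
    pattern; the weights here are illustrative only."""
    score = -node["latency_ms"]
    if request["access_pattern"] == "random" and node["media"] == "ssd":
        score += 10.0                     # random I/O favors solid state storage
    score += node["free_gb"] / 100.0      # mild preference for headroom
    return score

nodes = [
    {"name": "node-a", "latency_ms": 2.0, "media": "ssd", "free_gb": 200},
    {"name": "node-b", "latency_ms": 0.5, "media": "hdd", "free_gb": 900},
]
request = {"access_pattern": "random", "size_gb": 10}
best = max(nodes, key=lambda n: score_node(n, request))
print(best["name"])  # -> node-a: the SSD wins for random access
```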
Certain techniques for moving data may be used in order to ensure data integrity, continued performance, and maximum availability in accordance with several embodiments described herein. For example, a given data block may be replicated from one storage node to another. During the replication, any changes to the data block being replicated may result in different versions of constituent chunks of the data block on the originating and receiving storage nodes. In some embodiments, techniques for identifying and categorizing such data chunks are implemented, including but not limited to write journaling, data differencing, and consistency tagging. Such techniques may be implemented by any entity described herein and may, in an embodiment, be incorporated into the LBA map itself. Upon utilizing such a technique and identifying that, for example, multiple storage nodes bear different data chunk versions within the same data block, the implementing entity may further determine, based on information about the data block, the client's intended use for the data contained within, and the nature of the storage nodes in question, the storage node or nodes upon which the entire data block, with the latest versions of all constituent data chunks, should be replicated. Various other applications, functions, and advantages are presented below with respect to the various embodiments. It should be understood that the description and figures provide a number of examples, but the alternatives and variations possible within the scope of the various embodiments are not fully described. Alternatives and variations, however, would be apparent to one of ordinary skill in the art in light of the teachings and suggestions contained herein.
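A toy form of the consistency tagging described above might compare per-chunk version tags across replicas to decide which node holds the latest version of each chunk; the tag structure below is an illustrative assumption.

```python
def reconcile(replicas: dict[str, dict[int, int]]) -> dict[int, tuple[str, int]]:
    """Given per-node {chunk_id: version} tags, pick the node holding the
    latest version of each chunk."""
    latest: dict[int, tuple[str, int]] = {}
    for node, chunks in replicas.items():
        for chunk_id, version in chunks.items():
            if chunk_id not in latest or version > latest[chunk_id][1]:
                latest[chunk_id] = (node, version)
    return latest

# Chunk 1 diverged during replication: the origin received a newer write.
replicas = {"origin": {0: 3, 1: 5}, "target": {0: 3, 1: 4}}
print(reconcile(replicas))  # {0: ('origin', 3), 1: ('origin', 5)}
```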
FIG. 1 illustrates an example of an environment 100 for implementing aspects in accordance with various embodiments. In this example, a client 102 connects, via one or more networks 104, to a storage system 106 in order to manipulate, retrieve, or store data thereon. In some embodiments, the client comprises one or more computing resources in a distributed computing system. Such computing resources can be hardware-based, software-based, dedicated, virtualized, or configured in any appropriate fashion. Thus, a client may correspond to actual hardware devices, virtualized computer systems as will be discussed in FIG. 2, or any other appropriate configuration. The storage system may be composed of any number of storage nodes or storage servers 116, connected to one another by any appropriate type of interface (e.g., local bus protocols such as PCI or PCI Express, storage area network protocols such as Fibre Channel Protocol or iSCSI, general network data protocols such as TCP/IP, and the like). In an embodiment, the individual storage servers are configured into master-slave pairs. In some embodiments, the topology of the storage servers is freely configurable and/or freely assignable, either by the storage system or by another entity, such as the map authority, the data mapping engine, or the placement engine. In such embodiments, it is contemplated that any server may be designated as a master, or a slave to an existing master, for a given assigned block of data. In some embodiments, the actual configuration of the storage servers within the storage system is obfuscated from the client or client instance. The one or more network connections used to connect the client and the storage system may either be pre-existing or purpose-built, and may be of any suitable technology to implement a chosen or implemented data transport protocol. In some embodiments, the existing network infrastructure and associated application and transport protocols, for example, NTP, FTP, HTTP, TCP, UDP, and the like, may be used to implement the connection. In some embodiments, the data transport protocol used between the client and the storage system and/or the respective constituent components or nodes thereof (e.g., between an implemented data mapping engine and the data storage system) may be an Internet Protocol (IP)-based storage network standard, such as iSCSI, Global Network Block Device (GNBD), or Fibre Channel over IP. In some embodiments, more specialized and/or generalized storage area network protocols such as Fibre Channel over Ethernet or Fibre Channel Protocol may be used. In some embodiments, the client uses a subset of its allocated computing resources to implement a data mapping engine 108. For example, the data mapping engine may be implemented by using a virtualization engine on the client or client device. In some embodiments, the LBA map is operable to associate at least a portion of block addresses exposed to a client instance, such as a virtual instance of the client or a bare metal instance running directly on hardware, to one or more locations on the storage system. In this example, the data mapping engine serves at least as an abstraction layer between client data requests and storage system data requests. Such client data requests may emanate from any entity or instance of the client, such as a virtual machine instance of the client or a direct hardware-level or “bare metal” instance. In some embodiments, the abstraction layer is a logical block addressing (LBA) map that, as implemented, presents to a client or other data requester a known data organization and/or configuration, such as a data volume, file system (e.g., FAT, NTFS, HFS, ext3, and the like), or related construct. In some embodiments, the data mapping engine uses the LBA map to translate incoming reads and writes of data to the appropriate location, such as a particular storage node or nodes, in the storage system. The data mapping engine is, in an embodiment, configured to update the LBA map in accordance with changes to the structure, location, or other configuration of the data stored upon the storage system. Such updates may be requested or initiated by several entities, including but not limited to the map authority, the storage server, the client, or the data mapping engine itself. In some embodiments, when the client writes new data to the storage system, the data mapping engine determines an appropriate location within the storage system to which to make the write, then writes to that location and updates the LBA map. In some embodiments, when the client requests data from the storage system, the data mapping engine determines an appropriate location within the storage system that is able to service the request, then retrieves the data. During these operations, the data mapping engine may track characteristics regarding the operation, such as latency, data storage availability, performance characteristics, and the like. Such tracked characteristics may be used by the data mapping engine, the map authority, or the placement engine to process and optimize future operations.
In some embodiments, the map authority 110 retains a copy of the LBA map. The client 102 may, in some embodiments, be ephemeral, and it may be desirable to retain a copy of the active LBA map separately such that the client is free to dissolve or disassociate without losing the ability, upon future instantiations or connections, to access the data stored upon the storage system in accordance with the map. Upon the client's reappearance, or upon authorization of the client to connect with the storage system, in some embodiments, the map authority provides the client with the LBA map. In addition, while the client is modifying the data and the LBA map, the map authority's copy of the LBA map may be continuously synchronized with that of the client. Conversely, in some embodiments, the map authority is able to make changes to the LBA map copy under its control if certain triggering events or conditions occur, in which case the client's copy of the LBA map may, in some embodiments, also be updated. Examples of such events and conditions include a fault of a portion of the storage system, a partial or complete disconnection of a constituent network, network partitions, changes in access latency or queue depths for certain data or storage servers, and other performance, availability, and/or business-related events and conditions. The map authority may become aware of triggering events and conditions by any appropriate method, including active monitoring by the map authority or other entity, or by notification by an entity, such as a client. Upon acquiring information of one or more triggering events or conditions, the map authority may devise a mitigation plan, which may be permanent, semi-permanent, or temporary, in response. Such a plan may include, for example, moving data from a slow storage server to a faster one, splitting a data block stored on one storage server onto multiple storage servers, merging a data block stored upon multiple storage servers onto a single storage server, and the like. In the example given, the map authority is optionally connected to the storage system 106 via one or more networks 112. The map authority may connect with the storage system through the network 112 and directly execute the mitigation plan, then notify the client with the updated mapping. Alternatively, it may devise the plan and instruct the client to execute the plan through the client's connection with the storage system over the one or more networks 104, implicitly updating the client's copy of the LBA map. It is contemplated that the map authority, as well as any and all entities described herein, are, in some embodiments, capable of providing all services described herein to a plurality of entities. For example, a given map authority may provide a plurality of maps to a plurality of clients connecting with, or authorized to connect with, the storage system. In addition, all entities described herein are implementable across any number of resources so as to improve availability, durability, or performance, or for any appropriate reason. For example, a map authority may be implemented across multiple computer systems to ensure that the LBA maps stored thereon are protected from hardware failure of one or more components of one or more computer systems.
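One hypothetical mitigation plan, sketched below, reassigns every block mapped to a failed node onto the remaining healthy nodes and returns the updated map for distribution to the client; the flat block-to-node map and round-robin policy are illustrative assumptions, not the embodiment's actual strategy.

```python
def mitigate_failed_node(lba_map: dict[int, str], failed: str,
                         healthy: list[str]) -> dict[int, str]:
    """Toy mitigation plan: reassign blocks on a failed node to healthy nodes
    round-robin, producing an updated map for the client to synchronize."""
    updated = dict(lba_map)
    targets = (healthy[i % len(healthy)] for i in range(len(lba_map)))
    for lba, node in lba_map.items():
        if node == failed:
            updated[lba] = next(targets)
    return updated

authority_map = {0: "node-a", 1: "node-b", 2: "node-a"}
print(mitigate_failed_node(authority_map, "node-a", ["node-b", "node-c"]))
# -> {0: 'node-b', 1: 'node-b', 2: 'node-c'}
```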
In some embodiments, the data mapping engine interacts with a placement engine 114 and, in some embodiments, with the client. In the present example, the placement engine is responsible for determining, or helping the data mapping engine to determine, optimal configurations for data placement within the storage system based at least in part on the client's data access patterns, as well as the data itself. For example, a certain chunk or block of data may be written by the client using an application known to the placement engine to use a high frequency of random accesses, and to be used frequently by the client. In this example, the placement engine may then direct the data mapping engine to write to, for example, a storage server with solid state disks (SSDs) capable of fast random access reads and writes, and as close as possible to the client in the network topology in order to reduce latency and improve throughput. Such optimizations may be self-initiated by the placement engine or requested and/or triggered by an activity or event of the client, map authority, storage system, computer system, or other entity. It is contemplated that such a placement engine may, in some embodiments, work synergistically with, or in some embodiments at least partially execute or subsume, proactive optimizations instigated by the map authority as described above. In at least one embodiment, one or more aspects of the environment 100 may incorporate and/or be incorporated into a distributed program execution service. FIG. 2 depicts aspects of an example distributed program execution service 200 in accordance with at least one embodiment. The distributed program execution service 200 provides computing services, including a virtual computer system service 202 and a virtual data store service 204, with a wide variety of computing resources interlinked by a relatively high speed data network. Such computing resources may include processors such as central processing units (CPUs), volatile storage devices such as random access memory (RAM), nonvolatile storage devices such as flash memory, hard drives and optical drives, servers, one or more data stores such as the storage system 106 of FIG. 1, as well as communication bandwidth in the interlinking network. The computing resources managed by the distributed program execution service 200 are not shown explicitly in FIG. 2 to emphasize the independence of the computing services from the computing resources that implement them. However, in some embodiments, the computing services may correspond to actual resources rather than being virtualized. The distributed program execution service 200 may utilize the computing resources to implement the computing services at least in part by executing one or more programs, program modules, program components, and/or programmatic objects (collectively, “program components”) including and/or compiled from instructions and/or code specified with any suitable machine and/or programming language. For example, the computing resources may be allocated, and reallocated as necessary, to facilitate execution of the program components, and/or the program components may be assigned, and reassigned as necessary, to the computing resources. Such assignment may include physical relocation of program components, for example, to enhance execution efficiency. From a perspective of a user of the computing services, the distributed program execution service 200 may supply computing resources elastically and/or on-demand, for example, associated with a per resource unit commodity-style pricing plan.
The distributed program execution service 200 may further utilize the computing resources to implement a service control plane 206 configured at least to control the computing services. In some embodiments, the service control plane may be implemented as a virtual computer system instance. The service control plane 206 may include a service administration interface 208. The service administration interface 208 may include a Web-based user interface configured at least to enable users and/or administrators of the virtualized computing services to provision, de-provision, configure and/or reconfigure (collectively, “provision”) suitable aspects of the computing services. For example, a user of the virtual computer system service 202 may provision one or more virtual computer system instances 210, 212 such as the client 102 of FIG. 1. The user may then configure the provisioned virtual computer system instances 210, 212 to execute the user's application programs. The ellipsis between the virtual computer system instances 210 and 212 indicates that the virtual computer system service 202 may support any suitable number (e.g., thousands, millions, and more) of virtual computer system instances although, for clarity, only two are shown. The service administration interface 208 may further enable users and/or administrators to specify and/or re-specify virtualized computing service policies. Such policies may be maintained and enforced by a service policy enforcement component 214 of the service control plane 206. For example, a storage administration interface 216 portion of the service administration interface 208 may be utilized by users and/or administrators of the virtual data store service 204 to specify virtual data store service policies to be maintained and enforced by a storage policy enforcement component 218 of the service policy enforcement component 214. Various aspects and/or facilities of the virtual computer system service 202 and the virtual data store service 204 including the virtual computer system instances 210, 212, the low latency data store 220, the high durability data store 222, and/or the underlying computing resources may be controlled with interfaces such as application programming interfaces (APIs) and/or Web-based service interfaces. In at least one embodiment, the control plane 206 further includes a workflow component 246 configured at least to interact with and/or guide interaction with the interfaces of the various aspects and/or facilities of the virtual computer system service 202 and the virtual data store service 204 in accordance with one or more workflows. In at least one embodiment, service administration interface 208 and/or the service policy enforcement component 214 may create, and/or cause the workflow component 246 to create, one or more workflows that are then maintained by the workflow component 246. Workflows, such as provisioning workflows and policy enforcement workflows, may include one or more sequences of tasks to be executed to perform a job, such as provisioning or policy enforcement. A workflow, as the term is used herein, is not the tasks themselves, but a task control structure that may control flow of information to and from tasks, as well as the order of execution of the tasks it controls. For example, a workflow may be considered a state machine that can manage and return the state of a process at any time during execution. Workflows may be created from workflow templates. 
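The workflow notion above, a task control structure rather than the tasks themselves, can be sketched as a simple state machine; the linear task ordering and the names (Workflow, step, state) are illustrative assumptions:

    class Workflow:
        def __init__(self, tasks):
            self.tasks = tasks      # ordered list of (name, callable)
            self.index = 0          # state: next task to run

        def state(self):
            # Can report the state of the process at any time during execution.
            return ("complete" if self.index >= len(self.tasks)
                    else "pending:" + self.tasks[self.index][0])

        def step(self):
            name, fn = self.tasks[self.index]
            fn()                    # control flows into the task
            self.index += 1

    wf = Workflow([("allocate", lambda: None), ("configure", lambda: None)])
    while wf.state() != "complete":
        wf.step()
    print(wf.state())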
For example, a provisioning workflow may be created from a provisioning workflow template configured with parameters by the service administration interface 208. As another example, a policy enforcement workflow may be created from a policy enforcement workflow template configured with parameters by the service policy enforcement component 214. The workflow component 246 may modify, further specify and/or further configure established workflows. For example, the workflow component 246 may select particular computing resources of the distributed program execution service 200 to execute and/or be assigned to particular tasks. Such selection may be based at least in part on the computing resource needs of the particular task as assessed by the workflow component 246. As another example, the workflow component 246 may add additional and/or duplicate tasks to an established workflow and/or reconfigure information flow between tasks in the established workflow. Such modification of established workflows may be based at least in part on an execution efficiency analysis by the workflow component 246. For example, some tasks may be efficiently performed in parallel, while other tasks depend on the successful completion of previous tasks. The virtual data store service 204 may include multiple types of virtual data store such as a low latency data store 220 and a high durability data store 222. For example, the low latency data store 220 may maintain one or more data sets 224, 226 which may be read and/or written (collectively, “accessed”) by the virtual computer system instances 210, 212 with relatively low latency. The ellipsis between the data sets 224 and 226 indicates that the low latency data store 220 may support any suitable number (e.g., thousands, millions, and more) of data sets although, for clarity, only two are shown. In some embodiments, the high durability data store may maintain one or more data sets 228, 230, with the ellipsis between data sets 228 and 230, similarly to that described above, indicating support for any suitable number of data sets. In some embodiments, the data sets 228, 230 correspond to captures, for archival purposes, of data sets 224 and 226, respectively. In some embodiments, data sets 228, 230 in the high durability data store may be accessed directly by the virtual computer system instances 210, 212. Although each component of the distributed program execution service 200 may communicate utilizing the underlying network, data transfer 232 between the low latency data store 220 and the high durability data store 222 is highlighted in FIG. 2 because, in some embodiments, the contribution to utilization load on the underlying network by such data transfer can be significant. For example, the data sets 224, 226 of the low latency data store 220 may be virtual disk files (i.e., file(s) that can contain sequences of bytes that represent disk partitions and file systems) or other logical volumes. The low latency data store 220 may include a low overhead virtualization layer providing access to underlying data storage hardware. For example, the virtualization layer of the low latency data store 220 may be low overhead relative to an equivalent layer of the high durability data store 222. Systems and methods for establishing and maintaining low latency data stores and high durability data stores in accordance with at least one embodiment are known to those of skill in the art, so only some of their features are highlighted herein.
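The capture of a low latency data set into the high durability store, the data transfer highlighted above, might look like the following sketch; the snapshot format, compression choice, and names are assumptions, not the disclosure's mechanism:

    import time, zlib

    def capture(low_latency_store, high_durability_store, dataset_id):
        # Take an archival snapshot of a low latency data set and append it
        # to the high durability store's history for that data set.
        data = low_latency_store[dataset_id]
        snapshot = {
            "dataset": dataset_id,
            "taken_at": time.time(),
            "payload": zlib.compress(data),   # archival copy may trade CPU for size
        }
        high_durability_store.setdefault(dataset_id, []).append(snapshot)

    ll = {"ds224": b"virtual disk bytes"}
    hd = {}
    capture(ll, hd, "ds224")
    print(len(hd["ds224"]))   # 1 capture retained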
In at least one embodiment, the sets of underlying computing resources allocated to the low latency data store 220 and the high durability data store 222, respectively, are substantially disjoint. In a specific embodiment, the low latency data store 220 could be a Storage Area Network target or the like. In this exemplary embodiment, the physical computer system that hosts the virtual computer system instance 210, 212 can send read/write requests to the SAN target. The low latency data store 220 and/or the high durability data store 222 may be considered non-local and/or independent with respect to the virtual computer system instances 210, 212. For example, physical servers implementing the virtual computer system service 202 may include local storage facilities such as hard drives. Such local storage facilities may be relatively low latency but limited in other ways, for example, with respect to reliability, durability, size, throughput and/or availability. Furthermore, data in local storage allocated to particular virtual computer system instances 210, 212 may have a validity lifetime corresponding to the virtual computer system instance 210, 212, so that if the virtual computer system instance 210, 212 fails or is de-provisioned, the local data is lost and/or becomes invalid. In at least one embodiment, data sets 224, 226 in non-local storage may be efficiently shared by multiple virtual computer system instances 210, 212. For example, the data sets 224, 226 may be mounted by the virtual computer system instances 210, 212 as virtual storage volumes. As described at least in connection with FIGS. 1, 3 and 4, access by the virtual computer system instances or clients to such virtual storage volumes may also be dependent on the validity of the virtual computer system instances or clients. Data stores in the virtual data store service 204, including the low latency data store 220 and/or the high durability data store 222, may be facilitated by and/or implemented with a block data storage (BDS) service 236, at least in part. The BDS service 236 may facilitate the creation, reading, updating and/or deletion of one or more block data storage volumes, such as virtual storage volumes, with a set of allocated computing resources including multiple block data storage servers. A block data storage volume, and/or the data blocks thereof, may be distributed and/or replicated across multiple block data storage servers to enhance volume reliability, latency, durability and/or availability. As one example, the multiple server block data storage systems that store block data may in some embodiments be organized into one or more pools or other groups that each have multiple physical server storage systems co-located at a geographical location, such as in each of one or more geographically distributed data centers, and the program(s) that use a block data volume stored on a server block data storage system in a data center may execute on one or more other physical computing systems at that data center. The BDS service 236 may facilitate and/or implement local caching of data blocks as they are transferred through the underlying computing resources of the distributed program execution service 200 including local caching at data store servers implementing the low latency data store 220 and/or the high durability data store 222, and local caching at virtual computer system servers implementing the virtual computer system service 202. 
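Distributing and replicating a block data storage volume across multiple block data storage servers, as a BDS-style service might, can be sketched as follows; the round-robin placement and the replication factor are illustrative assumptions:

    def place_blocks(num_blocks, servers, replicas=2):
        # Assign each block of a volume to `replicas` servers; assignments are
        # distinct per block as long as replicas <= len(servers).
        placement = {}
        for b in range(num_blocks):
            placement[b] = [servers[(b + r) % len(servers)] for r in range(replicas)]
        return placement

    print(place_blocks(4, ["s1", "s2", "s3"]))
    # Replicating each block on multiple servers improves durability and
    # availability, at the cost of extra storage and replication traffic.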
In at least one embodiment, the high durability data store 222 is an archive quality data store implemented independent of the BDS service 236. The high durability data store 222 may work with sets of data that are large relative to the data blocks manipulated by the BDS service 236. The high durability data store 222 may be implemented independent of the BDS service 236, for example, with distinct interfaces, protocols and/or storage formats. In some embodiments, the functionality of the BDS service, or the BDS service itself, may comprise a subset of the resources of a virtual computer system instance 210, 212 or client 102, rather than reside in a part of the virtual data store service 204 or storage system 104. As may be contemplated, the functionality of the map authority, data mapping engine and placement engine may be expressed in different combinations and configurations of components of the distributed program execution service 200. FIG. 3 illustrates an environment 300 where, in accordance with some embodiments, the map authority 302 and the data mapping engine 304 are integrated. In this example, client 306 sends data requests over one or more networks 308 to the data mapping engine via the map authority. In some embodiments, the data mapping engine functions similarly to that which is described in connection with FIG. 1 by at least implementing an LBA map also similar to that which is previously described. The client and the integrated map authority may be implemented by virtual computer system instances as described in connection with FIG. 2. The data mapping engine translates the requests using the LBA map in a fashion also similar to that previously described, and sends the translated requests over one or more networks 310 to storage system 312, which may be implemented as at least a part of the virtual data store service described in connection with FIG. 2. In some embodiments, the data mapping engine, by virtue of being integrated with the map authority, persists the LBA map without the need for keeping a separate copy. Thus, in some embodiments, an ephemeral client may connect and disconnect without needing to synchronize two disparate LBA maps to retain access to data. Additionally, in some embodiments, any reactive or proactive updates may occur without needing to notify the client or synchronize LBA maps with the client. The placement engine 314, which may function similarly to the placement engine described above in connection with FIG. 1, may interact either with the client, the data mapping engine, or both. In some embodiments, the data mapping engine may, upon receiving a data request from the client, query the placement engine for an appropriate location within the storage system for retrieving or writing the data. In alternative embodiments, prior to submitting a data request to the map authority, the client may directly query the placement engine for such a storage location. The client may send the data request along with the placement engine's reply to the data mapping engine, which writes or reads data in accordance with both the data request and the metadata generated by the placement engine. FIG. 4 illustrates an environment 400 where each of the data mapping engine 402, the map authority 404, and the placement engine 406 is implemented as part of a client 408, in accordance with some embodiments.
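The FIG. 3 style integration described above, one persistent LBA map with no client-side copy to synchronize, can be sketched minimally; the class, map layout, and storage structure are hypothetical:

    class IntegratedMapAuthority:
        # Integrated map authority / data mapping engine: translates a client's
        # logical block address into a storage-node request using one map.
        def __init__(self, lba_map, storage):
            self.lba_map = lba_map            # lba -> (node, physical_block)
            self.storage = storage            # node -> {physical_block: bytes}

        def read(self, lba):
            node, pblock = self.lba_map[lba]  # translate using the LBA map
            return self.storage[node][pblock]

    engine = IntegratedMapAuthority({0: ("node-a", 7)},
                                    {"node-a": {7: b"payload"}})
    print(engine.read(0))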
The client may, in some embodiments, be a subset of resources, either virtual or physical, of a distributed computing system, as described at least in connection with FIG. 1. The data mapping engine, the map authority, and the placement engine function similarly to the data mapping engine, map authority, and placement engine described above in connection with at least FIGS. 1 and 3. Vesting the functionality of the data mapping engine, the map authority, and the placement engine in the client provides several advantages if the client is permanent or only needs the data it stores upon the storage system 410 while it exists. For example, as may be contemplated, the amount of resources of the implementing distributed system necessary to service or access the storage system is less than if the components are implemented separately from the client. In addition, in some embodiments, the overall latency in servicing a client data request is theoretically lower if all components are bundled with the client. However, as may be appreciated, if a client using a configuration similar to the instant example is destroyed, the mapping to the underlying data stored upon the storage server is also destroyed, and the data is orphaned. Various techniques to recover or discard such orphaned data, such as periodic garbage collection, may be employed by the storage system or by future clients so as to preserve the data or free the storage system resources for future clients. FIG. 5 illustrates an example environment 500 for implementing an LBA map 502 in accordance with some embodiments. As previously noted in connection with FIGS. 1-4, the LBA map is implemented by a data mapping engine that may or may not reside upon a client, but is otherwise implemented by a larger distributed system or distributed program execution service 200. The storage system 504, similarly to the storage systems described in connection with FIGS. 1, 3 and 4, may in some embodiments be implemented by a virtual data store service 204 as described in connection with FIG. 2. The storage nodes 506-518 may correspond to, or be implemented by, a low latency data store 220, a high durability data store 222, the BDS service 236, or any combination or comprising component thereof. The storage system and storage nodes may alternatively correspond to physical storage servers or other discrete storage systems. It is contemplated that at least in some embodiments, the storage nodes have differing characteristics and/or capabilities, and may incorporate aspects of both virtual data stores and physical storage systems. In the example given, a requestor 520 may make requests to the data mapping engine for data in mapped data blocks 522-528, marked A, B, C and D in the illustrated example. A data request may include data comprising multiple data blocks, a single data block, or a subset of data within a data block. The mapped data blocks are mapped to one or more storage nodes on which the corresponding data block is stored. In some embodiments, a mapped data block corresponds to a master storage node and associated slave storage node for the stored data block. In some embodiments, the storage nodes are delocalized or flexibly configurable such that any storage node may serve as the master or slave node for a given data block. Additionally, existing master nodes may become slave nodes for a given data block and vice versa, and any storage node may serve multiple roles.
In the example given, data block A uses storage node 506 as the master node and 508 as the slave node, while data block B uses storage node 508 as the master node and storage node 510 as the slave. Such flexibility is useful in certain triggering scenarios, such as a disruption in access of one or more storage nodes, and allows other storage nodes to replace failed or inaccessible nodes to retain uninterrupted access for the requestor. In the example given, if, for example, data storage node 508, acting as a master for data block B and a slave for block A, becomes inaccessible to the data mapping engine or map authority but retains connectivity with other nodes, a map authority or the storage system itself may reassign data storage node 510 to serve as a master node for data block B, and replicate both blocks A and B to another data storage node, such as node 518, to serve as a new slave node. In some embodiments, multiple storage nodes may serve as master nodes for a single data block. In the illustrated example, nodes 506, 512 and 514 serve as master nodes for data block C. As discussed in connection with FIG. 2, such a multiplexed configuration may be used to increase availability, reliability, data redundancy, performance, or some combination of these and other factors. Writes and reads relating to data block C may happen sequentially or in parallel among the assigned nodes, depending on the needs of the requestor and the characteristics of the data. It will be appreciated that updates to assigned nodes will result in the LBA map being updated. As previously mentioned in connection with FIG. 1, an LBA map may be updated by the map authority or the data mapping engine, and in some embodiments, such updating occurs in the background, e.g., by the map authority, and the updated map is pushed thereafter to an active LBA map instance, e.g., as implemented by a data mapping engine implemented by a client. FIG. 6 illustrates an example process 600 for developing a mitigation plan and updating an LBA map in response to data unavailability in accordance with various embodiments. As described in connection with the map authority and the data mapping engine of FIG. 1, the map authority or data mapping engine may devise such a mitigation plan to respond to a notification of data unavailability, which in some embodiments may incorporate some of the techniques discussed at least in FIGS. 1, 4 and 5. Some or all of the process 600 (or any other processes described herein, or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory. In the illustrated example, an entity receives notification that a requested range of data is unavailable 602. The receiving entity may, in an exemplary embodiment, be a map authority, although in other embodiments it may be a client, a placement engine, and/or a data mapping engine.
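The master/slave reassignment described in connection with FIG. 5 above can be sketched as a small failover routine; the map layout and helper name are assumptions for illustration:

    def fail_over(lba_map, block, failed_node, spare_node):
        # When a block's master becomes inaccessible, promote its first slave
        # to master and re-replicate to a spare node as the new slave, so the
        # requestor retains uninterrupted access.
        entry = lba_map[block]
        if entry["master"] == failed_node:
            entry["master"] = entry["slaves"].pop(0)   # promote existing slave
        else:
            entry["slaves"].remove(failed_node)
        entry["slaves"].append(spare_node)             # replicate to new slave
        return lba_map

    lba_map = {"B": {"master": "node-508", "slaves": ["node-510"]}}
    print(fail_over(lba_map, "B", "node-508", "node-518"))
    # {'B': {'master': 'node-510', 'slaves': ['node-518']}}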
The notifying entity is, in some embodiments, the client, the storage system, the map authority, or the placement engine, but may also be any other entity capable of detecting that a requested range of data cannot be accessed from that entity. For example, a client may, through its implemented data mapping engine, unsuccessfully attempt to access a certain range or extent of data upon a storage system, and may report to the map authority and/or the placement engine that the data range or extent is unavailable. It will be appreciated that the data mapping engine in this example may, in addition or in the alternative, report the missing data range to itself, or from a different perspective, receive notification from the storage server that the data range cannot be accessed. Such missing or inaccessible data may be missing or inaccessible as the result of, for example, a network partition wherein a constituent storage node within the storage system cannot be accessed by the data mapping engine or other entity. As another non-limiting example, such data unavailability may be a normal part of the functioning of the overall system in a “virtual memory”-type model, wherein the LBA map being actively accessed by the requestor, e.g., a client, is not updated until a requested data range or extent is reported back as inaccessible or unavailable. In some embodiments, the client or other notifying entity may notify the entity directly with an indication of one or more events that caused or would cause data to be unavailable, including but not limited to network partitions, a lack of an LBA map due to, for example, recent client creation, lost network connectivity to the storage server, or some other fault. Upon receiving a notification of unavailability, the entity receiving the notification determines what storage units correspond to the unavailable data range or extent 604 and determines certain characteristics and requirements of the data 606. The storage units of step 604 may, in some embodiments, be individual storage nodes within the storage system, but may correspond to larger or smaller divisions of such nodes and/or systems or some combination of the nodes and/or systems. Such a determination may be based on the information attained by the entity in step 606 about the system topology and/or the entity's information about and/or analysis of the data in question, but may also be based on further information received from the notifying entity (for example, a data mapping engine's or storage system's own determination or notification to the entity that the data range resides on specific storage nodes). The data characteristics and requirements determined in step 606 include, but are not limited to, customer usage patterns of the data, the size and/or fragmentation of the data extent or range, availability and persistence lifetime requirements of the data, and the like. The determination of step 606 may be executed entirely by the entity receiving the notification, such as the map authority, or in some embodiments may be conducted in at least partial reliance upon a secondary determination of another entity, such as a data mapping engine, a placement engine, a client, or a storage system.
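Steps 604 and 606 can be sketched as two small functions, one resolving an unavailable range to the storage nodes holding it and one gathering characteristics that guide mitigation; the extent layout and the characteristic fields are illustrative assumptions:

    def storage_units_for(extents, start, end):
        # extents: list of (ext_start, ext_end, node); return nodes whose
        # extents overlap the unavailable range [start, end).
        return sorted({node for s, e, node in extents if s < end and e > start})

    def characterize(range_bytes, access_log):
        # A few of the characteristics step 606 might collect.
        return {
            "size": range_bytes,
            "access_frequency": len(access_log),   # e.g., recent request count
        }

    extents = [(0, 100, "n1"), (100, 200, "n2"), (200, 300, "n1")]
    print(storage_units_for(extents, 50, 250))      # ['n1', 'n2']
    print(characterize(200, access_log=[1, 2, 3]))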
Using, in some embodiments, the determinations of steps 604 and 606 above, the entity receiving the notification restores access to the data range or extent 608, updates the LBA map to reflect the new data location 610, then optionally pushes the updated LBA map to the entity that owns the actively used copy of the LBA map 612. In some embodiments, the entity restores data access by designating an existing slave node for the data range in question as a new master node, and optionally replicating the data range to a new slave node, using techniques similar to those described in connection with FIG. 5. The entity may, in some embodiments, determine that replication or reassignment is unnecessary or impossible, for example because the inaccessible data range is not accessed frequently enough to necessitate action to restore access to it, because access to the data range cannot be restored, or because the loss of access was determined to be temporary and the time needed to restore access and/or replicate the data to a new storage node exceeded the expected time of the accessibility outage. In some embodiments, as previously noted, the update to the LBA map may occur synchronously with a read or write, upon a write or read fault as in a “virtual memory” model, or asynchronously, for example, as a batched or scheduled process. Likewise, dependent on the implementation, updates to the entity's LBA map may or may not trigger a copy of the LBA map to an entity bearing the active LBA map. In some embodiments, as previously mentioned, the entity utilizing process 600 may also bear the active LBA map, thus obviating the need for step 612. FIG. 7 illustrates an example process 700 for reassigning and requisitioning new data nodes in accordance with some embodiments. As described in connection with FIG. 6, when data is eligible to be reassigned from one storage node to another, the reassigning entity makes a determination as to whether to perform the reassignment and optional requisitioning of, e.g., new slave nodes. However, also as previously described, triggers of such reassignments may be temporary in nature, and in some cases, the responsible entity may receive notification that a previously unavailable storage node may again become available while a reassignment or replication to a new storage node is occurring 702. As will be appreciated, the entity performing example process 700 may be a map authority, a client, the data mapping engine, or any other appropriate entity with information about both the data in question and the storage system upon which it resides. The entity uses such information to determine whether replication of the data to the new storage node should continue or cease 704. For example, if the entity determines at the time of notification that the replication or reassignment of the data will soon be complete, the entity may decide to use the newly assigned node or continue replication of the data to the new node rather than revert to the old node. As another example, if during the replication, the entity determines that the as yet incompletely replicated or reassigned new node contains a substantially newer set of data as compared to the older node, the entity may decide to continue the replication, as the cost of reuniting the data upon the old storage node with the data on the new storage node is greater than merely allowing the replication to proceed.
By exemplary contrast, if the data set being reassigned is very large and the replication or reassignment will take substantially longer than reverting to the old storage node, or the entity determines that little or no data has been changed since the start of replication or reassignment, the entity may decide to unite the data upon the old storage node with that of the new storage node. These examples are not limiting in any way; an entity might consider any appropriate set of factors in making the determination of whether to continue or terminate a reassignment or replication of data to a new storage node, or whether to integrate or differentiate the data upon the respective storage nodes. In some embodiments, the entity also determines which data on the target storage node, i.e., the new node to which the data has been reassigned, has changed relative to the “clean” data residing upon the previously used storage node 706. Such a determination may occur by a variety of techniques, including but not limited to the use of a write journal on either the storage node or on the client requesting the data to track writes and data versions, post-processing the data using one of a number of appropriate data differencing algorithms, and/or the use of consistency tags on each chunk of stored data. For example, in an embodiment, stored data chunks bearing such consistency tags may be organized into a data structure such as a hash or Merkle tree. The data structure may then be traversed to determine, comparatively, which chunk's hash value indicates the newest version of the given data chunk. Such examples are exemplary and not intended to be limiting in any way. Upon determining the changed data and whether to integrate data from the old storage node, the entity merges the data upon the determined target storage node and optionally updates the LBA map to reflect the new organization 708. The data merge may take place using similar techniques as discussed in connection with step 706, such as the use of hash trees or data differencing algorithms. In addition, the LBA map update may occur using at least any technique or embodiment described in conjunction with FIG. 6, e.g., synchronously with the completion of a reassignment or replication, or not at all until a data requestor's data request results in a fault. In some embodiments, unused storage nodes resulting from the process 700 may be cleared or marked as available for subsequent data writes, reassignments, or replications. FIG. 8 illustrates an example process 800 for optimizing data access upon storage systems in accordance with some embodiments. At step 802, an entity detects, by various mechanisms, or is informed of, an inefficiency or suboptimal performance and/or availability when accessing a range of data from the storage system described in various contexts elsewhere in this disclosure. The entity implementing example process 800 is, in some embodiments, the map authority. In alternative embodiments, the functionality of process 800 may be implemented by any appropriate entity, including but not limited to a client, a data mapping engine, or a placement engine, as discussed at least in connection with FIG. 1 above. As may be contemplated, the inefficiency may manifest as data inaccessibility and thus may include and/or incorporate elements of process 700 as applicable.
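The per-chunk hash comparison mentioned above for step 706 can be sketched as follows; a full Merkle tree would additionally let matching subtrees be skipped, so this flat comparison is a simplified assumption:

    import hashlib

    def chunk_hashes(chunks):
        # One consistency tag (here, a SHA-256 digest) per stored chunk.
        return [hashlib.sha256(c).hexdigest() for c in chunks]

    def changed_chunks(old_chunks, new_chunks):
        # Indices whose hashes differ are the chunks that need merging.
        old_h, new_h = chunk_hashes(old_chunks), chunk_hashes(new_chunks)
        return [i for i, (a, b) in enumerate(zip(old_h, new_h)) if a != b]

    old = [b"aaaa", b"bbbb", b"cccc"]
    new = [b"aaaa", b"BBBB", b"cccc"]
    print(changed_chunks(old, new))   # [1] -- only chunk 1 changed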
In various embodiments, the detected inefficiency may relate to the data and includes but is not limited to suboptimal access latency, low data throughput, or low input/output operations per second (IOPS) relative to customer demand for the data. In some embodiments, the inefficiency may involve suboptimal system utilization, such as underutilized or overcommitted system resources, including those of the storage system, the client, or the overall distributed environment described in connection with FIG. 2. Various external entities may report the inefficiency to the entity implementing process 800, the implementing entity may make the determination of inefficiency on its own, or the detection may include both notifications from external sources as well as a determination by the implementing entity itself. In some embodiments, a requestor, such as a client, directly informs the entity, such as a data mapping engine, that, for example, higher IOPS for a given set or subset of data is required, or that a given set or subset of data is very frequently accessed relative to other stored data, and thus forms a “hot spot.” The entity determines data usage patterns relating to the data upon the system 804, determines, based at least in part on the data usage patterns, an alternative system configuration for addressing and at least partially correcting the inefficiency 806, then implements the configuration 808. The data usage pattern may, in some embodiments, be specific to a requesting client or set of clients, or may be determined in the context of the broader storage system or distributed computing system. For example, the entity may be informed that, or may alternatively determine, based on a history of a client's data access, that a certain chunk of data is accessed very frequently and thus constitutes a “hot spot.” The entity may then optimize access to that data by locating a storage node or nodes with high IOPS capability, such as a storage node with a high proportion of solid state drives, then moving the data to that storage node. As another example, an entity may determine that the data stored upon the storage system is disproportionately concentrated upon a small number of storage nodes and that some storage nodes are being unnecessarily underutilized. The entity may redistribute the data according to its determination of an optimal configuration for that data. The examples presented herein are not intended to be limiting in any way. It is contemplated that the implementation of the optimal system configuration in step 808 includes, in some embodiments, techniques for updating an LBA map as discussed at least in connection with FIG. 6. FIG. 9 illustrates an example process 900 for optimizing data access and storage based on information about a requestor's usage of the data. At step 902, an entity, in some embodiments the placement engine, is consulted by a data-requesting entity, which in some embodiments is the client, to optimize data access by, e.g., moving a volume upon which data resides. In alternative embodiments, the requestor may be any entity implementing an active LBA map, such as a data mapping engine implemented outside of the client. In some embodiments, the entity implementing process 900 may be any entity appropriate for implementing the functionality described herein. For example, a map authority may consult a placement engine at step 902 when the map authority determines that stored data should be moved.
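The hot-spot handling described above can be sketched as counting accesses per block and relocating any block whose access rate crosses a threshold; the threshold value and node naming are assumptions made for illustration:

    from collections import Counter

    def find_hot_spots(access_log, threshold):
        # Blocks accessed at least `threshold` times form "hot spots."
        counts = Counter(access_log)
        return [blk for blk, n in counts.items() if n >= threshold]

    def rebalance(lba_map, hot_blocks, high_iops_node):
        # Move hot blocks to a high-IOPS node, then reflect it in the LBA map.
        for blk in hot_blocks:
            lba_map[blk] = high_iops_node
        return lba_map

    log = ["A", "A", "A", "B"]
    print(rebalance({"A": "hdd-node", "B": "hdd-node"},
                    find_hot_spots(log, threshold=3), "ssd-node"))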
For clarity of example, an embodiment where the entity described is a placement engine and the requestor is a client implementing a data mapping engine will be described herein, although this is not intended to be limiting in any way. Upon receiving the request, which may be either a read or a write, the placement engine determines various characteristics of the stored data 904. Such a determination may, in some embodiments, be aided by information from the client regarding its intended use for the data, or any other information as appropriate. The characteristics determined may include, but are not limited to, data compressibility, data fragmentation, size of the data range, intended usage pattern, and intended frequency of data access. For example, the client may inform the placement engine, or the placement engine may determine based on a comparison with its determined information about the client's data usage patterns, that a certain range or extent of stored data will be a frequently accessed “hot spot,” similarly to that described in connection with FIG. 8. As may be appreciated, the placement engine may be able to predict, based on its tracking of a client's data access history as analyzed by an implemented set of heuristics, a client's future usage patterns for a given set of data. The placement engine may use either or both of these predictions and/or a client's notification to the placement engine of its intended access pattern for determining, at step 906, what storage node or set of storage nodes would best service requests for the data in question. In some embodiments, the placement engine has information about various operational parameters of the storage system, the larger distributed system, and/or the status of various components located therein, such as network topologies, storage node capabilities, system usage patterns and availability, and the like. The placement engine may use this information to determine an optimal location for the data. For example, the placement engine may be aware that two storage nodes capable of servicing data with a high expected IOPS requirement exist within the general system, and may select the node nearest to the client to further reduce latency. As another example, the placement engine may determine all of the storage nodes upon which the requested data exists, and select the storage node able to service data requests of a type determined to be executed at a high frequency (e.g., as predicted by the placement engine or informed by the client) in a fashion most closely matching the intended use, e.g., highest sustained throughput, lowest latency, or the like. Upon determining an optimal location for the data, the placement engine performs the optimization, e.g., by moving or copying a subset of the data to different storage nodes and/or devices, and updates or requests the update of an LBA map with the new configuration(s) or location(s) 908. In some embodiments, the updated LBA map is submitted to the client using the techniques described at least in conjunction with FIG. 6. In some embodiments, step 908 may constitute a request to update an LBA map residing upon the client and implemented by the data mapping engine. FIG. 10 illustrates an example of an environment 1000 for implementing aspects in accordance with various embodiments. As will be appreciated, although a Web-based environment is used for purposes of explanation, different environments may be used, as appropriate, to implement various embodiments.
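The step 906 selection described above, choosing the nearest node among those that can meet the expected IOPS requirement, can be sketched as follows; the node fields are assumptions for illustration:

    def select_node(candidates, required_iops):
        # Filter to nodes capable of the required IOPS, then pick the one
        # nearest the client to further reduce latency.
        capable = [n for n in candidates if n["iops"] >= required_iops]
        return min(capable, key=lambda n: n["hops_from_client"]) if capable else None

    nodes = [{"name": "n1", "iops": 50000, "hops_from_client": 4},
             {"name": "n2", "iops": 60000, "hops_from_client": 1}]
    print(select_node(nodes, required_iops=40000)["name"])   # n2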
The environment includes an electronic client device 1002, which can include any appropriate device operable to send and receive requests, messages, or information over an appropriate network 1004 and convey information back to a user of the device. Examples of such client devices include personal computers, cell phones, handheld messaging devices, laptop computers, set-top boxes, personal data assistants, electronic book readers, and the like. The network can include any appropriate network, including an intranet, the Internet, a cellular network, a local area network, or any other such network or combination thereof. Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed herein in detail. Communication over the network can be enabled by wired or wireless connections, and combinations thereof. In this example, the network includes the Internet, as the environment includes a Web server 1006 for receiving requests and serving content in response thereto, although for other networks an alternative device serving a similar purpose could be used as would be apparent to one of ordinary skill in the art. The illustrative environment includes at least one application server 1008 and a data store 1010. It should be understood that there can be several application servers, layers, or other elements, processes, or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. As used herein the term “data store” refers to any device or combination of devices capable of storing, accessing, and retrieving data, which may include any combination and number of data servers, databases, data storage devices, and data storage media, in any standard, distributed, or clustered environment. The application server can include any appropriate hardware and software for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling a majority of the data access and business logic for an application. The application server provides access control services in cooperation with the data store, and is able to generate content such as text, graphics, audio, and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML, or another appropriate structured language in this example. The handling of all requests and responses, as well as the delivery of content between the client device 1002 and the application server 1008, can be handled by the Web server. It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein. The data store 1010 can include several separate data tables, databases, or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store illustrated includes mechanisms for storing production data 1012 and user information 1016, which can be used to serve content for the production side. The data store also is shown to include a mechanism for storing log data 1014, which can be used for reporting, analysis, or other such purposes. 
It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store 1010. The data store 1010 is operable, through logic associated therewith, to receive instructions from the application server 1008 and obtain, update, or otherwise process data in response thereto. In one example, a user might submit a search request for a certain type of item. In this case, the data store might access the user information to verify the identity of the user, and can access the catalog detail information to obtain information about items of that type. The information then can be returned to the user, such as in a results listing on a Web page that the user is able to view via a browser on the user device 1002. Information for a particular item of interest can be viewed in a dedicated page or window of the browser. Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server, and typically will include a computer-readable medium storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available, and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein. The environment in one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated in FIG. 10. Thus, the depiction of the system 1000 in FIG. 10 should be taken as being illustrative in nature, and not limiting to the scope of the disclosure. The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems, and other devices capable of communicating via a network. Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, OSI, FTP, UPnP, NFS, CIFS, and AppleTalk.
The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, and any combination thereof. In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including HTTP servers, FTP servers, CGI servers, data servers, Java servers, and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python, or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM®. The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc. Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed.
Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention, as defined in the appended claims. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention. Preferred embodiments are described herein, including the best mode known to the inventors for carrying out various embodiments. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context. 14594965 amazon technologies, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
nasdaq:amzn Amazon Apr 26th, 2022 12:00AM Jun 23rd, 2017 12:00AM https://www.uspto.gov?id=US11314717-20220426 Scalable architecture for propagating updates to replicated data Scalable architecture for propagating updates may be implemented for data replicated from a data set. A node may receive updates to items in a data set that have been committed to the data set. The node may determine whether the update should be applied to a replicated portion of the data set. For updates that should be applied, the node may identify another node that hosts the replicated portion of the data set and send a request to the other node to perform a conditional atomic operation to apply the update to the item in the replicated portion of the data set. The condition may compare a version identifier associated with an update and a current version identifier for the item at the other node. If the condition evaluates true, then the update to the item in the replicated portion may be performed. 11314717 1. A system, comprising: a memory to store program instructions which, if performed by at least one processor, cause the at least one processor to perform a method to at least: receive, at a propagation mesh comprising a first tier of nodes and a second tier of nodes, one or more updates committed to a data set including an update to an item of the data set, wherein the one or more updates, including the update to the item, have been committed to the data set stored separate from the first tier of nodes, wherein the update to the item is associated with a first version identifier, wherein the first tier of nodes, including a first node of the first tier of nodes, is assigned to: identify, of the received updates, applicable ones of the one or more updates to be propagated for application to respective replicated portions of the data set; and identify a particular node of the second tier of nodes, the second tier of nodes assigned to propagate applicable ones of the updates; route applicable ones of the updates to the particular node of the second tier of nodes that are assigned to propagate updates, wherein the particular node is identified from the second tier of nodes as assigned to propagate the update to the item; determine, by the first node of the first tier that the update to the item should be applied to a respective replicated portion of the data set, the replicated portion including a replica of the item and stored at a respective storage node; in response to the determination by the first node of the first tier that the update to the item should be applied to the replicated portion of the data set: send, by the particular node of the second tier, a request to the respective storage node to perform a conditional atomic operation to apply the update to the item, wherein the request includes a condition that compares, by the respective storage node, the first version identifier to a second version identifier stored for the item at the respective storage node; and in response to an indication of satisfaction of the condition determined by the respective storage node, the indication received from the respective storage node at the particular node of the second tier, update, by the particular node of the second tier, a propagation state to identify the update as committed to the replicated portion of the data set. 2. 
The system of claim 1, wherein the method further includes: receive, at the first node, another update to another item that has been committed to the data set; determine that the other update to the other item should not be applied to the replicated portion of the data set; and in response to the determination that the other update to the other item should not be applied to the replicated portion of the data set, ignore the other update. 3. The system of claim 1, wherein the method further comprises: receiving, at a processing node for the data set, the update to the item; performing, by the processing node, the update to commit the update to the data set; identifying, by the processing node, the first node as a propagation node assigned to propagate the update to the data set; and sending, from the processing node, the update to the propagation node. 4. The system of claim 1, wherein the data set is a table maintained as part of a non-relational database service, wherein the replicated portion of a data set is a secondary index for the table, and wherein the request to perform the conditional atomic operation to apply the update to the item is formatted according to an application programming interface (API) for conditional atomic operations implemented by the non-relational database service. 5. A method, comprising: receiving, at a propagation mesh comprising a first tier of nodes and a second tier of nodes, an update to an item that has been committed to a data set, wherein the data set is stored separate from the first tier of nodes and separate from the second tier of nodes, wherein the update is associated with a first version identifier; receiving, at a particular node of the second tier of nodes from a node of the first tier of nodes, the update to the item, wherein the particular node is identified by the node of the first tier of nodes that receives updates committed to the data set, including the update to the item, and routes applicable ones of the updates to the second tier of nodes that propagate updates, wherein the particular node within the second tier of nodes is assigned to propagate the update to the item; identifying, by the node of the first tier of nodes, a storage node storing a replicated portion of the data set including the item that should apply the update; and sending, by the particular node of the second tier of nodes, a request to the storage node to perform a conditional atomic operation to apply the update to the item, wherein the request includes a condition that compares, by the storage node, the first version identifier to a second version identifier stored for the item at the storage node. 6. The method of claim 5, further comprising: receiving, at the node of the first tier of nodes, another update to another item that has been committed to the data set; evaluating a schema for the replicated portion of the data set to determine that the other update to the other item should not be applied to the replicated portion of the data set; and in response to the determination that the other update to the other item should not be applied to the replicated portion of the data set, ignoring the other update. 7.
The method of claim 5, further comprising: determining that the update is committed to the replicated portion of the data set based, at least in part, on a response to the request received from the storage node at the particular node of the second tier of nodes; and updating a propagation state to identify the update as committed to the replicated portion of the data set. 8. The method of claim 7, wherein the update to the propagation state identifies the committed update as a last committed version identifier for the replicated portion, and wherein the method further comprises: accessing, by another node, the propagation state to obtain the last committed version identifier for the replicated portion; evaluating, by the other node, one or more updates committed to the data set that occurred since the last committed version identifier to determine a recovery value for another item in the replicated portion of the data set; and sending, by the other node, a request to the storage node to update the other item in the replicated portion of the data set to the recovery value. 9. The method of claim 5, wherein the method further comprises: receiving, at a processing node for the data set, the update to the item; performing, by the processing node, the update to commit the update to the data set. 10. The method of claim 5, wherein the update is mapped to a clock value, wherein the update to the item is a request to delete the item, wherein the application of the update stores a tombstone marker as part of the item and the clock value for the item, and wherein the method further comprises: sending, by the particular node of the second tier of nodes, a committed clock value for the replicated portion to the storage node; and based, at least in part, on a comparison of the clock value for the item with the committed clock value for the replicated portion, deleting, by the storage node, the item from the replicated portion of the data set. 11. The method of claim 5, wherein the replicated portion of the data set is stored according to a partitioning scheme for storing replicated portions of the data set amongst multiple other nodes including the storage node, wherein the update modifies an attribute of the item that results in a change in location for the item according to the partitioning scheme, and wherein the method further comprises: identifying a node of the other nodes to store an updated version of the item; sending a request to insert the item as part of the replicated portion at the node of the other nodes; wherein the requested conditional atomic operation to apply the update to the item is a request to delete the item from the replicated portion at the storage node. 12. The method of claim 5, wherein the node of the first tier evaluates a schema for the replicated portion of the data set to determine that the update to the item should be applied to the replicated portion of the data set, and wherein, responsive to determining that the update to the item should be applied to the replicated portion of the data set, the node of the first tier sends the update to the particular node of the second tier. 13. The method of claim 5, wherein the data set is a database table and wherein the replicated portion of a data set is a secondary index for the table. 14.
One or more non-transitory, computer-readable storage media, storing program instructions that when executed on or across one or more processors cause the one or more processors to implement: receiving, at a propagation mesh comprising a first tier of nodes and a second tier of nodes, an update to an item that has been committed to a data set, wherein the data set is stored separate from the first tier of nodes and separate from the second tier of nodes, wherein the update is associated with a first version identifier; receiving, from a node of the first tier of nodes at a particular node of the second tier of nodes assigned to propagate updates, the update to the item, wherein the node of the first tier of nodes is assigned to route applicable ones of the updates to the second tier of nodes that propagate updates, wherein the particular node of the second tier of nodes is assigned to propagate the update to the item; determining that the update to the item should be applied to a replicated portion of the data set including the item; in response to determining that the update to the item should be applied to the replicated portion of the data set: identifying a storage node storing the replicated portion of the data set; and sending a request to the storage node to perform a conditional atomic operation to apply the update to the item, wherein the request includes a condition that compares, by the storage node, the first version identifier to a second version identifier stored for the item at the storage node. 15. The one or more non-transitory, computer-readable storage media of claim 14, further comprising program instructions that cause the one or more processors to further implement: determining that the update is committed to the replicated portion of the data set based, at least in part, on a response to the request received from the storage node at the particular node; and updating a propagation state to identify the update as committed to the replicated portion of the data set. 16. The one or more non-transitory, computer-readable storage media of claim 15, wherein the update to the propagation state identifies the committed update as a last committed version identifier for the replicated portion, wherein the update to the item is a request to delete the item, wherein the application of the update stores a tombstone marker as part of the item and the version identifier of the update as the version identifier for the item, and further comprising program instructions that cause the one or more processors to further implement: sending, by the particular node of the second tier of nodes to the storage node, a deletion point for the replicated portion determined based, at least in part, on the last committed version identifier for the replicated portion; and based, at least in part, on a comparison of the version identifier for the item with the deletion point for the replicated portion, deleting, by the storage node, the item from the replicated portion of the data set. 17.
The one or more non-transitory, computer-readable storage media of claim 14, wherein the update to the item is a request to delete the item, wherein the application of the update stores an entry for the data set identifying a last committed version identifier for updates from the data set as part of propagation state maintained at the second node, and further comprising program instructions that cause the one or more processors to further implement: ignoring one or more additional updates received from the particular node at the storage node based, at least in part, on a comparison between the last committed version identifier for updates from the data set and respective version identifiers for the one or more additional updates. 18. The one or more non-transitory, computer-readable storage media of claim 14, wherein the update is mapped to a clock value, wherein the update to the item is a request to delete the item, wherein the application of the update stores a tombstone marker as part of the item and the clock value for the item, and further comprising program instructions that cause the one or more processors to further implement: sending, by the particular node of the second tier of nodes, a committed clock value for the replicated portion to the storage node; and based, at least in part, on a comparison of the clock value for the item with the committed clock value for the replicated portion, deleting, by the storage node, the item from the replicated portion of the data set. 19. The one or more non-transitory, computer-readable storage media of claim 14, further comprising program instructions that cause the one or more processors to further implement: receiving, at the particular node of the second tier of nodes, another update to another item that has been committed to the data set; determining that the other update to the other item should not be applied to the replicated portion of the data set; and in response to determining that the other update to the other item should not be applied to the replicated portion of the data set, ignoring the other update. 20. The one or more non-transitory, computer-readable storage media of claim 14, wherein the data set is a table maintained as part of a database service, wherein the replicated portion of a data set is a secondary index for the table, and wherein the request to perform the conditional atomic operation to apply the update to the item is formatted according to an application programming interface (API) for conditional atomic operations implemented by the database service. 20 BACKGROUND Data is often distributed to scale the storage capacity or processing capacity of systems that provide access to the data. For example, database tables or other data objects can be divided into partitions in order to leverage the capacity of different hosts, such as different servers or other computing devices, to separately provide access to individual partitions. Replicating different portions of the partitioned data can further increase the complexity and costs of propagating changes to the data to other data replicas. For example, projections or views of a partitioned database table may be separately maintained. Propagating changes to the projection or views may increase the costs of processing updates at the original partitions of the database table, as the original partitions of the database table may need to ensure that the appropriate projections or views of the database table are updated.
Techniques that can provide scalable mechanisms for replicating updates to replicated data are thus highly desirable. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a logical block diagram illustrating a scalable architecture for propagating updates to replicated data, according to some embodiments. FIG. 2 is a logical block diagram illustrating a provider network that implements a database service and global clock service, according to some embodiments. FIG. 3 is a logical block diagram illustrating a database service that may implement a scalable architecture for propagating updates to replicated data, according to some embodiments. FIG. 4 is a logical block diagram illustrating a global clock service that provides global clock values to clients, according to some embodiments. FIG. 5 is a sequence diagram illustrating interactions between propagation nodes and processing nodes to propagate updates, according to some embodiments. FIG. 6 is a logical block diagram illustrating interactions between processing nodes storing a secondary index, a propagation node, and a propagation state that utilize tombstone markers to handle deletions of items at a secondary index, according to some embodiments. FIG. 7 is a logical block diagram illustrating interactions between processing nodes storing a secondary index, a propagation node, and a propagation state that utilize tombstone markers and global clock values to handle deletions of items at a secondary index, according to some embodiments. FIG. 8 is a logical block diagram illustrating interactions between processing nodes storing a secondary index and a propagation node that utilize partition state mapping to handle deletions of items at a secondary index, according to some embodiments. FIG. 9 is a high-level flowchart illustrating various methods and techniques to utilize a scalable architecture for propagating updates to replicated data, according to some embodiments. FIG. 10 is a high-level flowchart illustrating various methods and techniques to utilize a scalable architecture for propagating updates to a secondary index from a partition of a database table, according to some embodiments. FIG. 11 is a high-level flowchart illustrating various methods and techniques to send updates to propagation nodes, according to some embodiments. FIG. 12 is a high-level flowchart illustrating various methods and techniques to process a conditional atomic request to apply an update to a replicated portion of a data set, according to some embodiments. FIG. 13 is a high-level flowchart illustrating various methods and techniques to perform a failover operation to a new propagation node, according to some embodiments. FIG. 14 is a block diagram illustrating an example computing system, according to some embodiments. While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims.
As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to. DETAILED DESCRIPTION The systems and methods described herein may be employed in various combinations and in various embodiments to implement a scalable architecture for propagating updates to replicated data, according to some embodiments. Data sets may be distributed across one or more locations in a storage system, in some embodiments. In this way, clients can access and independently update different portions of the data set at the one or more locations in the storage system, in some embodiments. The arrangement of the data set may be optimal for some access requests (e.g., queries based on indexed fields or values in a table). However, to optimally process other access requests (e.g., queries based on non-indexed fields or values in a table), portions of the data set (or the entire data set) may be replicated in one or more other locations (e.g., different storage nodes, systems, or hosts) in a different arrangement, subset, or format that is more performant for performing the other type of access request, in some embodiments. To keep the replicated portions of the data set consistent with the data set, updates made to the data set may need to be replicated to the replicated portions of the data set, in some embodiments. For example, if a replicated portion of a data set maintains a list or view of items in the data set with a common attribute value (e.g., “gender=female”), then an update that adds an item to the data set with the common attribute value may also be propagated to the replicated portion of the data set, in one embodiment. As the number and size of replicated portions of a data set and the data set itself increase, the propagation of updates to replicated data sets may take away from the capacity of nodes, storage systems, or hosts that maintain the data set to both provide access to the data set and propagate updates to replicated portions of the data set, in some embodiments. Instead of burdening the nodes, storage systems, or other hosts that maintain the data set with the responsibility for ensuring that updates are appropriately propagated to the correct replicated portions of the data set, a scalable architecture for propagating updates to replicated data may be implemented that independently manages and performs the propagation of updates to the replicated portions of the data set, according to some embodiments. FIG. 1 is a logical block diagram illustrating a scalable architecture for propagating updates to replicated data, according to some embodiments. Data set 100 may be one or more data structures, objects or other groupings of data distributed amongst multiple locations for storage and access, in some embodiments. For example, as illustrated in FIG. 1, different nodes, such as nodes 110a, 110b, and 110c, may store data that is part of data set 100, such as data 112a, 112b, and 112c respectively, in one embodiment. Data 112 may be partitions or other distributions of data from data set 100, in some embodiments, such as table partitions or other data objects that make up data set 100 that is stored as part of a relational or non-relational (e.g., NoSQL) database.
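To make the filtering decision above concrete (whether a committed update belongs in a replicated view such as the “gender=female” example), here is a minimal Python sketch; the function and all names are illustrative assumptions, not part of the patented system:

```python
# Hypothetical sketch of deciding whether a committed update applies to a
# replicated view. The predicate stands in for the view's schema; all
# names here are illustrative assumptions.
from typing import Callable, Optional

ViewPredicate = Callable[[dict], bool]

def update_applies_to_view(new_item: dict,
                           predicate: ViewPredicate,
                           old_item: Optional[dict] = None) -> bool:
    """An update matters to the view if the new image of the item belongs
    in the view, or if the old image did (the item must then be removed)."""
    in_view_now = predicate(new_item)
    was_in_view = predicate(old_item) if old_item is not None else False
    return in_view_now or was_in_view

def female_view(item: dict) -> bool:
    # The common-attribute view from the example above.
    return item.get("gender") == "female"

assert update_applies_to_view({"id": 7, "gender": "female"}, female_view)
assert not update_applies_to_view({"id": 8, "gender": "male"}, female_view)
```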
In at least some embodiments, data 112 may be separately replicated amongst multiple nodes (not illustrated) in order to increase the durability and availability of data 112. For example, node 110a may be one of multiple nodes storing copies of data 112a, in some embodiments. However, in at least some embodiments, one of the multiple nodes (e.g., node 110a) may be responsible for sending updates to node 130 for separate replication. Replicated portion(s) of the data set 120 may also be maintained for access, in various embodiments. For example, nodes, such as nodes 130a, 130b, and 130c may respectively store data 132a, 132b, and 132c, which may be a portion of one or more different parts of data set 100, in one embodiment. As discussed below with regard to FIGS. 2-8 and 10-13, data 132 may be a secondary index, projection, or other view of data (or partitions thereof) that represents a subset of data set 100, in one embodiment, which may be stored according to a different format, schema, or other arrangement than data set 100. In at least some embodiments, data 132 may be stored, organized, or made accessible to access requests, such as quer(ies) 150a, 150b, and 150c, differently than data 112 of data set 100. If replicated portion of data set 120 were a secondary index, for instance, then data 132 may store entries, objects, or other information from data set 100 in a different order on storage devices, in one embodiment. Nodes, such as nodes 110a, 110b, 110c, 130a, 130b, and 130c may be one or more virtual or physical storage devices, processing devices, servers or other computing systems, such as computing system 2000 discussed below with regard to FIG. 14, that may store data for data set 100 and replicated portion of data set 120, in various embodiments. Nodes may provide access to data objects. For example, as illustrated in FIG. 1, updates, such as updates 102a, 102b, and 102c may be received and processed at nodes 110a, 110b, and 110c respectively and queries, such as quer(ies) 150a, 150b, and 150c, may be received and processed at nodes 130a, 130b, and 130c, in one embodiment. Each node may independently process access requests (e.g., updates or queries), in various embodiments. For example, node 110a may process different updates 102a (e.g., to change a record, object, or entry in data 112a), than node 110c, which may process updates 102c (e.g., to insert or delete a record, object, or entry in data 112c). Updates 102 may be any change, addition, deletion, or other modification of data set 100, in various embodiments. In at least some embodiments, nodes 130a, 130b, and 130c may provide access to data 132a, 132b, and 132c as part of replicated portion(s) of data set 120. For example, quer(ies) 150 may specify one or more predicates, search criteria, or other indication of desired data that may be applied to data 132 to be returned to a requestor, in one embodiment. A query may be specified according to a particular protocol or language (e.g., Structured Query Language (SQL)) or may be formatted according to a programmatic interface (e.g., Application Programming Interface (API)), in one embodiment. In at least some embodiments, quer(ies) 150 may be identified for a consistent view with respect to data set 100 (or may be marked or identified for performance without a consistency guarantee). Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments.
For example, propagation mesh 140 may implement a scalable architecture including one or more propagation nodes, such as propagation nodes 142a, 142b, 142c, 142d, and 142e, which may receive update(s) committed to data set 100, such as update(s) 104a, 104b, and 104c, from nodes 110, identify or determine which updates need to be replicated (as not all updates may be appropriate for or otherwise applicable to replicated portions), and send the identified updates, such as replicated update(s) 106a, 106b, 106c, to the appropriate nodes 130 of replicated portions of data set 120, in some embodiments. Propagation mesh 140 may implement one or more tiers or layers of propagation nodes 142 to collect, evaluate, and apply updates to replicated portion(s) of data set 120, in some embodiments. For example, a first tier of propagation nodes 142 may initially receive the updates 104 from nodes 110, filter out those updates that do not need to be replicated, and send the remaining updates on to another propagation node 142 that may be responsible for updating data 132 in node(s) 130. In this way, nodes 110 need not determine which nodes 130 should receive an update and track whether or not the update has been successfully performed at nodes 130, but instead may forward or otherwise send on all updates, which may prevent the imposition of additional latency into the update path for nodes 110. In order to prevent exposing the replicated updates at replicated portion of data set 120 out of order, propagation nodes 142 may perform conditional, atomic operations 108 to apply updates 106, in various embodiments, to enforce a logical ordering of updates as they are committed to data set 100. A conditional, atomic operation may atomically check a condition included in the request to apply the update and, if the condition evaluates true, apply the update, in some embodiments. In this way, no changes to an item or part of data 132 being considered for an update 106 may intervene between the evaluation of the condition and the application of the update (if the condition evaluates true). In various embodiments, the condition for replicated updates may include a version identifier for the replicated update 106 (e.g., a sequence number, timestamp, or other identifier that provides a logical ordering for updates to the data set as performed or otherwise committed at nodes 110). A current version identifier for the item or part of data 132 being considered for an update 106 may also be maintained at or accessible to node 130 (e.g., as a field, attribute, or metadata value), in some embodiments. The condition may compare the version identifier of the update with the current version identifier for the item or part of data 132 and evaluate true if the version identifier of the update is later than the current version of the item or part of data 132 (e.g., a newer timestamp value, higher sequence number, or other indication that the version identifier of the update occurs after the current version identifier in the logical ordering of updates to the data set), in some embodiments. In this way, an update may only be applied, and thus exposed to queries 150, if it moves the state of the item or other part of the data 132 forward in the logical ordering of updates to the data set.
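A minimal sketch of the conditional, atomic apply just described, assuming hypothetical names; a threading.Lock stands in for whatever atomicity the storage node actually provides, and a tombstone is retained on delete so a later comparison remains possible:

```python
# Minimal sketch of a conditional atomic apply keyed on version identifiers.
# Names, structure, and the tombstone detail are illustrative assumptions.
import threading

TOMBSTONE = object()  # marker retained after deletes (see discussion below)

class ReplicaStore:
    def __init__(self):
        self._lock = threading.Lock()
        self._items = {}     # key -> item value (or TOMBSTONE)
        self._versions = {}  # key -> current version identifier

    def conditional_apply(self, key, value, version, delete=False) -> bool:
        """Apply the update only if its version identifier is later than the
        current one; the check and the apply happen under one lock, so no
        other change can intervene. Returns True if the update applied."""
        with self._lock:
            current = self._versions.get(key)
            if current is not None and version <= current:
                return False  # stale or replayed update: condition fails
            self._items[key] = TOMBSTONE if delete else value
            self._versions[key] = version
            return True

store = ReplicaStore()
assert store.conditional_apply("item1", {"a": 1}, version=1)            # insert
assert store.conditional_apply("item1", None, version=2, delete=True)   # delete
assert not store.conditional_apply("item1", {"a": 1}, version=1)        # replayed insert rejected
```

Note how retaining a version identifier for the deleted item (the tombstone) is what lets the final, replayed insert fail the comparison; the next paragraphs and FIGS. 6-8 address exactly this replay problem.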
Because propagation nodes may fail, lose connections, or otherwise fail to complete replicated update(s) 106 (e.g., due to node 130 failures), conditional atomic operations to apply updates according to version identifiers may prevent replicated update(s) that are replayed or otherwise processed out of order from being applied at data 132 and moving the state of an item or other part of data 132 backward in the logical ordering of changes. For example, an update to insert an item that occurs before an update to delete the item in the logical ordering could be potentially replayed (e.g., due to a failure of a propagation node) and thus attempt to re-insert the item which had been deleted, making that deleted item visible at a replicated portion of data set 120. If the condition describing the version identifier is evaluated, however, before applying the update, then only those updates that do not represent a prior state in the logical ordering may be applied, in some embodiments. In at least some embodiments, the current version identifier may be stored as part of data 132 (e.g., as an attribute or field value of an item). In such scenarios, deletions of the item could present scenarios where a replay of an earlier insert may be mistaken as a “new” insert of the item, as there may be no existing version identifier to perform the comparison. Various techniques discussed below with regard to FIGS. 6-8 and 13 may be implemented to ensure that deleted items are not re-inserted into a replicated portion of data set 120 as a result of an out of order update. Please note that previous descriptions of implementing a scalable architecture for propagating updates to replicated data are not intended to be limiting, but are merely provided as logical examples. The number of nodes or partitions of data set 100 may be different, as may be the number of nodes storing replicated portion of data set 120 or propagation nodes 142, for example. This specification begins with a general description of a provider network that may implement a database service that may implement secondary indexes that are replicated portions of a database table stored as part of the database service. The database service may implement a scalable architecture for propagating updates to replicated data to propagate updates to the database table at the secondary indexes, in one embodiment. Then various examples of a database service and other services are discussed, including different components/modules, or arrangements of components/modules, that may be employed as part of implementing the database service, in one embodiment. A number of different methods and techniques to implement a scalable architecture for propagating updates to replicated data are then discussed, some of which are illustrated in accompanying flowcharts. Finally, a description of an example computing system upon which the various components, modules, systems, devices, and/or nodes may be implemented is provided. Various examples are provided throughout the specification. FIG. 2 is a logical block diagram illustrating a provider network that implements a database service and global clock service, according to some embodiments. Provider network 200 may be a private or closed system, in one embodiment, or may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based storage) accessible via the Internet and/or other networks to clients 250, in another embodiment.
In one embodiment, provider network 200 may be implemented in a single location or may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system 2000 described below with regard to FIG. 14), needed to implement and distribute the infrastructure and storage services offered by the provider network 200. In one embodiment, provider network 200 may implement various computing resources or services, such as database service(s) 210 (e.g., relational or non-relational (NoSQL) database query engines), other data processing services as part of other services 230 (such as map reduce processing, data warehouse, data flow processing, and/or other large scale data processing techniques), data storage services (e.g., an object storage service, block-based storage service, or data storage service that may store different types of data for centralized access), virtual compute services, and/or any other type of network based services (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services not illustrated). In at least some embodiments, provider network 200 may implement global clock service 220 to provide global clock values for propagating updates to replicated portions of data, such as secondary indexes, in database service(s) 210 or other services 230. In various embodiments, the components illustrated in FIG. 2 may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques. For example, the components of FIG. 2 may be implemented by a system that includes a number of computing nodes (or simply, nodes), in one embodiment, each of which may be similar to the computer system embodiment illustrated in FIG. 14 and described below. In one embodiment, the functionality of a given system or service component (e.g., a component of database service(s) 210) may be implemented by a particular node or may be distributed across several nodes. In some embodiments, a given node may implement the functionality of more than one service system component (e.g., more than one data store component). Database service(s) 210 may include various types of database services, in one embodiment (both relational and non-relational), for storing, querying, and updating data. Such services may be enterprise-class database systems that are highly scalable and extensible. In one embodiment, queries may be directed to a database in database service(s) 210 that is distributed across multiple physical resources, and the database system may be scaled up or down on an as needed basis. The database system may work effectively with database schemas of various types and/or organizations, in different embodiments. In one embodiment, clients/subscribers may submit queries in a number of ways, e.g., interactively via a SQL interface to the database system. In other embodiments, external applications and programs may submit queries using Open Database Connectivity (ODBC) and/or Java Database Connectivity (JDBC) driver interfaces to the database system. In one embodiment, database service(s) 210 may provide a RESTful programmatic interface in order to submit access requests (e.g., to get, insert, delete, or query data).
In one embodiment, database service(s) 210 may also be any of various types of data processing services to perform different functions (e.g., query or other processing engines to perform functions such as anomaly detection, machine learning, data lookup, or any other type of data processing operation). For example, in at least one embodiment, database service(s) 210 may include a map reduce service that creates clusters of processing nodes that implement map reduce functionality over data stored in one of the data storage services. Various other distributed processing architectures and techniques may be implemented by database service(s) 210 (e.g., grid computing, sharding, distributed hashing, etc.) in another embodiment. In one embodiment, clients 250 may encompass any type of client configurable to submit network-based requests to provider network 200 via network 260, including requests for database service(s) 210 (e.g., to perform a consistent query at a secondary index for a database hosted in database service 210) or data storage service(s) (e.g., a request to create, read, write, obtain, or modify data in data storage service(s), etc.). For example, in one embodiment a given client 250 may include a suitable version of a web browser, or may include a plug-in module or other type of code module that executes as an extension to or within an execution environment provided by a web browser. Alternatively, in a different embodiment, a client 250 may encompass an application such as a database application (or user interface thereof), a media application, an office application or any other application that may make use of storage resources in data storage service(s) to store and/or access the data to implement various applications. In one embodiment, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client 250 may be an application that interacts directly with provider network 200, in one embodiment. In one embodiment, client 250 may generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. In one embodiment, a client 250 may provide access to provider network 200 to other applications in a manner that is transparent to those applications. For example, client 250 may integrate with a database on database service(s) 210. In such an embodiment, applications may not need to be modified to make use of the storage system service model. Instead, the details of interfacing to the database service(s) 210 may be coordinated by client 250. Client(s) 250 may convey network-based services requests to and receive responses from provider network 200 via network 260, in one embodiment. In one embodiment, network 260 may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients 250 and provider network 200. For example, network 260 may encompass the various telecommunications networks and service providers that collectively implement the Internet.
In one embodiment, network 260 may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client 250 and provider network 200 may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network 260 may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client(s) 250 and the Internet as well as between the Internet and provider network 200. It is noted that in one embodiment, client(s) 250 may communicate with provider network 200 using a private network rather than the public Internet. FIG. 3 is a logical block diagram illustrating a database service that may implement a scalable architecture for propagating updates to replicated data, according to some embodiments. Database service 210 may implement request routing 310, in one embodiment. Request routing 310 may receive, authenticate, parse, throttle and/or dispatch service or other access requests, among other things, in one embodiment. In one embodiment, database service 210 may implement control plane 320 to implement one or more administrative components, such as automated admin instances, which may provide a variety of visibility and/or control functions. In one embodiment, database service 210 may also implement a plurality of processing nodes 330, each of which may manage one or more partitions 370 of a data set (e.g., a database) on behalf of clients/users or on behalf of the data storage service (and its underlying system), which may be stored in storage (on storage devices attached to processing nodes 330). In at least one embodiment, database service 210 may implement propagation nodes 380 which may propagate or otherwise replicate updates to partition(s) 370 from one processing node 330 to another processing node 330, which may store a replicated portion of a database table (e.g., a global secondary index). Control plane 320 may provide visibility and control to system administrators, detect split events for processing nodes, and/or provide anomaly control and resource allocation, in one embodiment. In one embodiment, control plane 320 may also include an admin console, through which system administrators may interact with database service 210 (and/or the underlying system). In one embodiment, the admin console may be the primary point of visibility and control for database service 210 (e.g., for configuration or reconfiguration by system administrators). For example, the admin console may be implemented as a relatively thin client that provides display and control functionality to system administrators and/or other privileged users, and through which system status indicators, metadata, and/or operating parameters may be observed and/or updated. Control plane 320 may provide an interface or access to information stored about one or more detected control plane events, such as split requests to be processed, at database service 210, in one embodiment. Control plane 320 may direct the performance of different types of control plane operations among the nodes, systems, or devices implementing database service 210, in one embodiment.
For instance, control plane 320 may communicate with processing nodes to initiate the performance of various control plane operations, such as moves, splits, update tables, delete tables, create indexes, etc. In one embodiment, control plane 320 may update a task registry (or some other table or data structure) with the status, state, or performance information of the control plane operations currently being performed. Control plane 320 may detect, direct, or otherwise instigate different partition movement operations. In at least some embodiments, control plane 320 may direct the creation, allocation, maintenance, and other management of secondary indexes for database tables, such as global secondary indexes. In one embodiment, request routing 310 may support handling requests formatted according to an interface to support different types of web services requests. For example, in one embodiment, database service 210 may implement a particular web services application programming interface (API) that supports a variety of operations on tables (or other data objects) that are maintained and managed on behalf of clients/users by the data storage service system (and/or data stored in those tables). In one embodiment, request routing 310 may perform parsing and/or throttling of service requests, authentication and/or metering of service requests, dispatching service requests, and/or maintaining partition assignments that map processing nodes to partitions. Processing nodes 330 may implement database management 340, in one embodiment. Database management 340 may create, update, define, query, and/or otherwise administer databases, in one embodiment. For instance, database management 340 may maintain a database according to a database model (e.g., a relational or non-relational database model). In one embodiment, database management 340 may allow a client to manage data definitions (e.g., Data Definition Language (DDL) requests to describe column definitions, requests to add item attributes, etc.). In one embodiment, database management 340 may handle requests to access the data (e.g., to insert, modify, add, or delete data as well as requests to query for data by generating query execution plans to determine which partitions of a database may need to be evaluated or searched in order to service the query). In one embodiment, database management 340 may also perform other management functions, such as enforcing access controls or permissions, concurrency control, or recovery operations. In one embodiment, database management 340 may send requests to storage engine 350 to access partitions 370 in order to process access requests (e.g., requests to read or scan different partitions or particular items or parts of data within a partition). Database management 340 may also obtain global clock value mappings, as discussed below with regard to FIG. 7, in order to determine the mapping of local time at the processing nodes 330 to global clock values, in some embodiments.
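A hedged sketch of such a local-to-global mapping, assuming (as described for FIG. 4 below) that a leader periodically publishes a window of global clock values at fixed intervals; the function name and the 20 ms / 1 s figures are illustrative assumptions:

```python
# Sketch of mapping a node-local timestamp onto a published window of
# global clock values. All names and the interval/span figures are
# assumptions for illustration.
def local_to_global(local_time: float,
                    window_start_local: float,
                    first_global_value: int,
                    interval: float = 0.020,  # seconds per global clock value
                    span: float = 1.0) -> int:  # window covers the next second
    """Return the global clock value whose interval contains local_time."""
    offset = local_time - window_start_local
    if not 0.0 <= offset < span:
        raise ValueError("local_time falls outside the published window")
    return first_global_value + int(offset // interval)

# e.g. an update stamped 130 ms into the current window:
assert local_to_global(1000.130, window_start_local=1000.0,
                       first_global_value=7000) == 7006
```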
Database management 340 may associate or otherwise identify access requests, such as queries or updates to a partition, based on the local time mapping (e.g., mapping a query or update received within a 10 millisecond local clock interval to a global clock value), in some embodiments. Database management 340 may send messages or other indications of updates to other processing nodes 330 or propagation nodes 380 in order to replicate changes, in some embodiments. The messages may include an indication of the global clock value associated with the updates, in some embodiments. In one embodiment, processing nodes 330 may implement storage engine 350 to access storage 360 (e.g., internal or external to processing nodes 330) that stores partitions 370. Storage engine 350 may perform requests on behalf of database management to create, read, update and delete (CRUD) data in a partition, in one embodiment. Storage engine 350 may implement buffers, caches, or other storage components to reduce the number of times storage is accessed, in one embodiment. Storage engine 350 may implement various storage interfaces to access storage 360. For example, in those embodiments where external storage is a network-based data storage service, like another data storage service in provider network 200 in FIG. 2, then storage engine 350 may establish a network connection with the service as part of obtaining access to a storage unit (e.g., by submitting requests formatted according to a protocol or API to establish the connection). In another embodiment, storage engine 350 may access internal storage using storage protocols (e.g., Small Computer Systems Interface (SCSI)) over a bus or other interconnect that directly connects a host implementing storage engine 350 with storage 360. In one embodiment, database service 210 may provide functionality for creating, accessing, and/or managing tables processed at nodes within a single-tenant environment separately from functionality for creating, accessing, and/or managing tables maintained in nodes within a multi-tenant environment. In another embodiment, functionality to support both multi-tenant and single-tenant environments may be included in any or all of the components illustrated in FIG. 3. Note also that in one embodiment, one or more processing nodes 330 process access requests on behalf of clients directed to tables. Some of these processing nodes may operate as if they were in a multi-tenant environment, and others may operate as if they were in a single-tenant environment. In one embodiment, processing nodes 330 that operate as if in a multi-tenant environment may be implemented on different hardware (or on different virtual machines executing on a single host) than processing nodes that operate as if in a single-tenant environment. In addition to dividing or otherwise distributing data sets across processing nodes 330 in separate partitions, processing nodes 330 may also be used to implement or maintain resources in multiple different arrangements, some of which may be part of larger collections or groups of resources. A replica group, for example, may be composed of a number of processing nodes maintaining a replica of a particular portion of data (e.g., a partition of a table) for the database service 210.
Moreover, different replica groups may utilize overlapping nodes, where a processing node 330 may be a member of multiple replica groups, maintaining replicas for each of those groups whose other processing node 330 members differ from the other replica groups. Thus if, for example, replica group 1 has processing nodes A, B, and C, replica group 2 may have processing nodes B, D, and E. Besides differing groups of processing nodes, in various embodiments, processing nodes may have different relationships to other processing nodes. Continuing with the above example, for replica group 1, processing node A may be a master node, performing special functions with regard to access requests directed toward the partition maintained by replica group 1. For replica group 2, however, processing node B may be the master node. Therefore, a processing node's relationship to other processing nodes may be different depending on the particular grouping evaluated. These various examples of different arrangements of resources among processing nodes highlight the various different ways that control plane operations may interact with resources that are not solely devoted to one particular function, data replica, etc. (though they may be). In at least some embodiments, the systems underlying the database service 210 described herein may store data on behalf of storage service clients (e.g., client applications, users, and/or subscribers) in tables containing items that have one or more attributes. In some embodiments, database service 210 may present clients/users with a data model in which each table maintained on behalf of a client/user contains one or more items, and each item includes a collection of attributes. The attributes of an item may be a collection of name-value pairs, in any order. In some embodiments, each attribute in an item may have a name, a type, and a value. Some attributes may be single valued, such that the attribute name is mapped to a single value, while others may be multi-value, such that the attribute name is mapped to two or more values. In some embodiments, the name of an attribute may always be a string, but its value may be a string, number, string set, or number set. The following are all examples of attributes: “ImageID”=1, “Title”=“flower”, “Tags”={“flower”, “jasmine”, “white” }, “Ratings”={3, 4, 2}. The items may be managed by assigning each item a primary key value (which may include one or more attribute values), and this primary key value may also be used to uniquely identify the item. In some embodiments, a large number of attributes may be defined across the items in a table, but each item may contain a sparse set of these attributes (with the particular attributes specified for one item being unrelated to the attributes of another item in the same table), and all of the attributes may be optional except for the primary key attribute(s). In other words, the tables maintained by the database service 210 (and the underlying storage system) may have no pre-defined schema other than their reliance on the primary key. Note that in some embodiments, if an attribute is included in an item, its value cannot be null or empty (e.g., attribute names and values cannot be empty strings), and, within a single item, the names of its attributes may be unique. However, in at least some other embodiments, pre-defined database schemas may be employed, such as the various types of relational databases implemented using Structured Query Language (SQL).
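A small sketch of the schemaless item model just described may help; validate_item is a hypothetical helper enforcing the constraints named above (primary key required, no empty names or values), not part of the service:

```python
# Hypothetical validation of the schemaless item model described above.
def validate_item(item: dict, primary_key: str) -> None:
    if primary_key not in item:
        raise ValueError("the primary key attribute is required")
    for name, value in item.items():
        if not isinstance(name, str) or name == "":
            raise ValueError("attribute names must be non-empty strings")
        if value is None or value == "":
            raise ValueError("attribute values cannot be null or empty")

# Attributes may be single- or multi-valued and may vary item to item:
validate_item({"ImageID": 1,
               "Title": "flower",
               "Tags": {"flower", "jasmine", "white"},
               "Ratings": {3, 4, 2}},
              primary_key="ImageID")
```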
In at least some embodiments, database service 210 may implement propagation nodes 380. Propagation nodes 380 may receive indications of updates (or the updated items) from processing nodes 330, in some embodiments. Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index. Partitions of a secondary index may be stored on a processing node 330 as discussed above, which may provide read-only access to the secondary index. Changes to the secondary index may be received as updates propagated to the secondary index partition(s) at a processing node 330 received from propagation nodes 380. Propagation nodes may be selected or assigned responsibility for propagating updates, as discussed below with regard to FIG. 13, in some embodiments. Propagation nodes 380 may access propagation state 382, which may be a data store separate from propagation nodes 380 (e.g., another data store system within database service 210 or implemented as part of another storage service in provider network 200). Propagation state 382 may include various information for tracking the state of operations to propagate updates, as discussed below with regard to FIGS. 6-8 and 13. Propagation management 390 may detect the failure of propagation nodes 380, provision additional propagation nodes 380, or recover the failed propagation nodes 380, in some embodiments. Propagation management 390 may assign propagation responsibility in response to receiving requests from processing nodes 330 for a propagation endpoint. In some embodiments, propagation management 390 may assign propagation responsibility for propagation node(s) 380 to propagate updates to secondary index(es) (or partitions thereof). Database service 210 may provide an application programming interface (API) for requesting various operations targeting tables, indexes, items, and/or attributes maintained on behalf of storage service clients. In some embodiments, the service (and/or the underlying system) may provide both control plane APIs and data plane APIs. The control plane APIs provided by database service 210 (and/or the underlying system) may be used to manipulate table-level entities, such as tables and indexes, and/or to re-configure various tables. These APIs may be called relatively infrequently (when compared to data plane APIs). In some embodiments, the control plane APIs provided by the service may be used to create tables or secondary indexes for tables at separate processing nodes, import tables, export tables, delete tables or secondary indexes, explore tables or secondary indexes (e.g., to generate various performance reports or skew reports), modify table configurations or operating parameters for tables or secondary indexes (e.g., by modifying the amount of throughput capacity, adding storage capacity for additional read replicas, splitting partitions or moving partitions), and/or describe tables or secondary indexes. In some embodiments, control plane APIs that perform updates to table-level entries may invoke asynchronous workflows to perform a requested operation.
Methods that request “description” information (e.g., via a describeTables API) may simply return the current known state of the tables or secondary indexes maintained by the service on behalf of a client/user. The data plane APIs provided by database service 210 (and/or the underlying system) may be used to perform item-level operations, such as storing, deleting, retrieving, and/or updating items and/or their attributes, or performing index-based search-type operations across multiple items in a table, such as queries and scans. The APIs provided by the service described herein may support request and response parameters encoded in one or more industry-standard or proprietary data exchange formats, in different embodiments. For example, in various embodiments, requests and responses may adhere to a human-readable (e.g., text-based) data interchange standard (e.g., JavaScript Object Notation, or JSON), or may be represented using a binary encoding (which, in some cases, may be more compact than a text-based representation). In various embodiments, the system may supply default values (e.g., system-wide, user-specific, or account-specific default values) for one or more of the input parameters of the APIs described herein. As noted above, the control plane APIs supported by the service may include APIs that perform updates on tables (e.g., a CreateTable API and/or a DeleteTable API) or secondary indexes (e.g., a CreateSecondaryIndex API and/or a DeleteSecondaryIndex API). In various embodiments, these APIs may invoke asynchronous workflows to perform the requested operation. In addition, the service may support methods that return the current known state (e.g., a DescribeTables API) or that return various skew metrics or reports (e.g., an ExploreTable API). In some embodiments, a common use model may be for a client to request an action (e.g., using a CreateTable API), and then to poll on its completion via the corresponding description API (e.g., DescribeTables). Other supported methods may be used to modify table configurations or parameters, e.g., an UpdateTable API (which may be used to increase provisioned throughput capacity for a given table or create a secondary index for the table), a PurchaseTableCapacity API (which may be used to increase the provisioned storage capacity for a given table), a SplitTable API (which may be used to explicitly invoke an operation to split a table or any of its partitions), or a MoveTable API (which may be used to explicitly invoke an operation to move one or more tables, partitions of a table, or partition replicas of a table). Database service 210 may include support for some or all of the following operations on data maintained in a table (or index) by the service on behalf of a storage service client: put (or store) an item, get (or retrieve) one or more items having a specified primary key, delete an item, update the attributes in a single item, query for items using an index, and scan (e.g., list items) over the whole table, optionally filtering the items returned, or conditional variations on the operations described above that are atomically performed (e.g., conditional put, conditional get, conditional delete, conditional update, etc.).
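The conditional variations just listed suggest a request shape along the following lines; this is an illustrative guess at what such a payload could look like, assembled by a propagation node, and is not the service's actual wire format or operation names:

```python
# Hypothetical request shape for a conditional update keyed on a version
# identifier; every name here is an assumption for illustration only.
import json

def make_conditional_update(table, key, new_item, update_version):
    return {
        "Operation": "ConditionalUpdateItem",  # hypothetical operation name
        "Table": table,
        "Key": key,
        "Item": new_item,
        # Apply only if the stored version is older than the update's
        # version (or the item does not exist yet, for inserts).
        "Condition": {
            "attribute": "version",
            "comparator": "LESS_THAN_OR_ABSENT",
            "value": update_version,
        },
    }

request = make_conditional_update(
    table="users-by-gender-age",
    key={"gender": "female", "age": 30},
    new_item={"gender": "female", "age": 30, "first": "Ada"},
    update_version=412,
)
print(json.dumps(request, indent=2))
```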
For example, the data storage service (and/or underlying system) described herein may provide various data plane APIs for performing item-level operations, such as a PutItem API, a GetItem (or GetItems) API, a DeleteItem API, and/or an UpdateItem API, as well as one or more index-based seek/traversal operations across multiple items in a table, such as a Query API and/or a Scan API. FIG. 4 is a logical block diagram illustrating a global clock service that provides global clock values to clients, according to some embodiments. Global clock service 220 may implement a plurality of different clock nodes, such as leader clock node 410 and clock nodes 420a, 420b, 420c, and 420n, to determine and provide global clock value mappings in response to requests. Clock nodes may elect a leader, such as leader clock node 410 (which may be selected according to various leader election schemes). Leader node 410 may, in some embodiments, publish clock values 412 (e.g., multiple times in 1 second) to transaction log 430. Clock nodes 420 may provide heartbeat(s) 422a, 422b, 422c, and 422n respectively to transaction log 430 (e.g., periodically). Based on the clock values (e.g., 424a, 424b, 424c, and 424n) and the heartbeat(s) in transaction log 430, clock nodes 420 can determine stop times for each global clock value in local clock terms, in various embodiments. For example, each clock value 412 published by leader clock node 410 may contain global clock values at 20 millisecond intervals for the next 1 second. Clock nodes 420 can individually determine a clock node's start and end times in the leader clock node's local clock time (e.g., Unix times), in various embodiments. Processing nodes, propagation nodes, or other clients of global clock service 220 can submit requests to clock nodes 420, such as global clock value requests 442a, 442b, 442c, and 442n, in some embodiments. In response, clock nodes 420 may send global clock value mappings, specific to each requesting client (e.g., specific to the requesting processing node or propagation node), such as global clock value mappings 444a, 444b, 444c, and 444n. Transaction log 430 may provide fault tolerant, high performance, durable log storage. Transaction log 430 may be used as a commit log underlying the global clock service. Transaction log 430 may provide strong consistency guarantees and support constraints between committed records, to enable features like deduplication, sequencing, and read-write conflict detection. Transactions in the transaction log (e.g., proposed clock values 412 and heartbeats 422) may be ordered according to transaction sequence numbers, which may be monotonically increasing to reference the state of the log. As noted above, database service 210 may, in some embodiments, store data sets as tables of items (which may include one or more data values or attributes) that may be stored, managed and accessed according to a key value pair which uniquely identifies an item in the table. Generally, this may allow for fast throughput to update data in items or store new items. However, in some scenarios, locating items that have particular attributes may be resource intensive, in some embodiments. For instance, identifying items with a particular attribute value that is within a range of values would likely instigate a scan of an entire table, even though the range may be only a small portion of possible values for the particular attribute, or the particular attribute may not exist for some items in the table, in one embodiment.
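Before the detailed example that follows, a toy sketch of the remedy it describes: an alternate arrangement that co-locates items by one attribute and orders them by another, so a range query need not scan the whole table. All names here are illustrative assumptions:

```python
# Toy sketch of a secondary-index-style arrangement: items grouped by
# gender and ordered by age, so an age-range query avoids a full scan.
from bisect import bisect_left, bisect_right

def build_index(items):
    """Group items by gender, each group sorted by age."""
    index = {}
    for item in items:
        index.setdefault(item["gender"], []).append(item)
    for group in index.values():
        group.sort(key=lambda it: it["age"])
    return index

def query_age_range(index, gender, lo, hi):
    """Return items of the given gender with lo <= age <= hi."""
    group = index.get(gender, [])
    ages = [it["age"] for it in group]
    return group[bisect_left(ages, lo):bisect_right(ages, hi)]

users = [
    {"user_id": "u1", "gender": "female", "age": 34, "first": "Ada"},
    {"user_id": "u2", "gender": "male", "age": 29, "first": "Alan"},
    {"user_id": "u3", "gender": "female", "age": 28, "first": "Grace"},
]
idx = build_index(users)
print(query_age_range(idx, "female", 25, 30))  # -> [Grace's item]
```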
As noted above, database service 210 may, in some embodiments, store data sets as tables of items (which may include one or more data values or attributes) that may be stored, managed and accessed according to a key value pair which uniquely identifies an item in the table. Generally, this may allow for fast throughput to update data in items or store new items. However, in some scenarios, locating items that have particular attributes may be resource intensive. For instance, identifying items with a particular attribute value that is within a range of values would likely instigate a scan of an entire table, even though the range may be only a small portion of possible values for the particular attribute, or the particular attribute may not exist for some items in the table, in one embodiment. One example of a replicated portion of data may be a secondary index, in some embodiments. Secondary indexes may be created for a table in order to provide an alternative access schema for items in addition to a unique key value pair, in some embodiments. For instance, a table that includes items for registered users may include a user identifier, which is a unique primary key for the item, along with a first name attribute, last name attribute, gender attribute, and age attribute. A secondary index may be generated for the table which can index items according to values other than the key value pair, such as gender and age. For example, the secondary index may be generated so that all items with male attribute values are stored together according to age attribute value order. Similarly, all items with female attribute values are stored together according to age attribute value order. In this way, an access request for data that specifies a particular range of ages for males or females may be quickly satisfied without performing a scan of the entire table of items, in some embodiments. Other attribute values may also be included in the secondary index, such as first and last name attribute values. In at least some embodiments, the key value pair that uniquely identifies the item may be included in the secondary index (e.g., the user identifier). Once created, a secondary index can be updated with any changes made to the original table. As secondary indexes may be updated according to changes made or performed at several different partitions, the changes of a partition may be propagated to appropriate secondary index(es) via propagation nodes. FIG. 5 is a sequence diagram illustrating interactions between propagation nodes and processing nodes to propagate updates, according to some embodiments. Table 510 may be stored in one or more partitions, such as partition 514, stored at processing node 512 (or in a replica group of processing nodes as discussed above), in one embodiment. Secondary index(es) 520 may be generated as different index(es) of some of the item attributes from table 510 and may be stored in one or more partitions, such as partition 524 at processing node 522 (which may be part of a replica group of processing nodes), in one embodiment. Propagation nodes, such as propagation nodes 530 and 540, may be assigned responsibility to propagate updates from table 510 to secondary index(es) 520, in various embodiments. Updates that are performed and committed with respect to items in partition 514 in table 510 (e.g., acknowledged to a client that submitted the update as successfully completed and/or otherwise durably persisted to table 510) may be sent to propagation node 530. The updates may include the updated version of the item corresponding to the update and a version identifier (e.g., a logical sequence number (LSN), timestamp, or other identifier of a logical ordering of updates to the table). In at least some embodiments, all updates may be sent to propagation node 530, without further determination on the part of processing node 512 as to whether the update needs to be propagated. Propagation node 530 may evaluate update(s) 550 to identify those updates that need to be propagated (e.g., make changes to items stored in or to be stored in a secondary index) based on secondary index schema information for secondary index(es) 520 (e.g., describing the arrangement of items in secondary indexes 520).
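Tying the registered-users example to the schema-based filtering just described, the following Python sketch shows one hypothetical way an index key could be derived and an update checked for relevance; the schema shape and attribute names are assumptions for illustration, not the service's actual format:

```python
# Hypothetical index schema for the registered-users example: partition by
# gender, order by age, and project the name attributes plus the unique key.
INDEX_SCHEMA = {
    "partition_attr": "gender",
    "sort_attr": "age",
    "projected": ["first_name", "last_name", "user_id"],
}

def index_key(item, schema=INDEX_SCHEMA):
    # Items with the same gender sort together by age; the table's unique
    # key (user_id) keeps index entries unique and maps back to the item.
    return (item[schema["partition_attr"]], item[schema["sort_attr"]], item["user_id"])

def needs_propagation(changed_attrs, schema=INDEX_SCHEMA):
    # A committed update matters to this index only if it touches an
    # attribute the index keys on or projects; all others are filtered out.
    watched = {schema["partition_attr"], schema["sort_attr"], *schema["projected"]}
    return bool(watched & set(changed_attrs))
```

A range query such as "females between ages 30 and 40" then becomes a bounded traversal within one index partition rather than a scan of the whole table.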
Those updates 550 which are not to be replicated may be dropped, ignored, or otherwise filtered out of the propagation mesh of propagation nodes. Propagation node 530 may identify propagation node 540 as the propagation node responsible for secondary index(es) 520 (e.g., according to mapping or routing information maintained for propagation nodes to secondary index(es)), in some embodiments, and send the identified updates for replication 552 to propagation node 540. Propagation node 540 may send one or more conditional update requests 554 to processing node 522 to apply the identified updates to the appropriate items in partition 524 of secondary index(es) 520. The conditional requests 554 may include the updated item and the version identifier as part of a condition that compares the version identifier to the current version identifier for the item in the secondary index. If the update is an insert, then the conditional request may indicate or otherwise specify that if no item exists, then the version comparison may not be performed, in some embodiments. In at least some embodiments, propagation node 540 may track the status of outstanding conditional update requests (e.g., what nodes have been sent a request, what response has been received, etc.). Processing node 522 may send acknowledgements of successful completion of the request or failures 556 to propagation node 540, in some embodiments. Based on the acknowledgments or failures 556, propagation node 540 may determine whether the update was successful. As discussed below with regard to FIG. 10, in scenarios where the same update needs to be applied to multiple secondary indexes, the update may not be considered successful unless all secondary indexes acknowledge the successful completion of the update. Propagation node 540 may periodically update 558 propagation state 382 with the last committed version identifier of an update that completed successfully across all secondary index(es) 520, in some embodiments, as discussed below. In this way, if another propagation node takes over replication for propagation node 540, propagation state 382 may be accessed to determine the progress of propagation as identified according to the last committed version identifier of an update that completed successfully across all secondary index(es), in some embodiments. Note that although FIG. 5 illustrates multiple tiers of propagation nodes (nodes 530 and 540) propagating updates, a single tier of propagation nodes may be implemented in one embodiment so that, for instance, propagation node 540 may perform the techniques described above with regard to propagation node 530 in addition to those described for propagation node 540. Thus, the previous discussion is not intended to be limiting as to the number of propagation nodes implemented for propagating updates. Updates may include or cause the deletion of items from a secondary index (or partition thereof). Deletion requests may, for instance, remove an attribute or item from a database table. Similarly, updates that change a value of an attribute of an item upon which a secondary index is indexed may change the location of the item (e.g., from one partition to another), which would result in the deletion of the item from one partition and the write of the item to another partition of the secondary index. To prevent out-of-order updates that occurred before a deletion from recreating a deleted item at a secondary index, different techniques may be employed.
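The conditional update requests 554 described above might look something like the following Python sketch; the request fields and condition syntax are hypothetical stand-ins for whatever wire format the service actually uses:

```python
def make_conditional_put(item, version_id):
    """Build a conditional atomic put: the receiving processing node applies
    it only if its stored version is older than version_id, or, for the
    insert case, if no item exists yet (then no version comparison is made)."""
    return {
        "op": "put",
        "item": item,
        "version": version_id,
        # Hypothetical condition syntax: skip the check for brand-new items,
        # otherwise require the stored version to be strictly older.
        "condition": "attribute_not_exists(item) OR stored_version < version",
    }
```

Packaging the version comparison into the request is what lets the processing node apply each update atomically and reject stale, out-of-order writes without any coordination back to the propagation node.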
FIG. 6 is a logical block diagram illustrating interactions between processing nodes storing a secondary index, a propagation node, and a propagation state that utilize tombstone markers to handle deletions of items at a secondary index, according to some embodiments. Propagation node 610 may send requests to apply updates, including delete(s) 640, to processing node(s) 620 for respective item(s) 622. While a version identifier may be maintained for each item, for deleted item(s) 622, a current version identifier 624 may be maintained along with a tombstone marker 626, which may indicate that the item has been deleted and should not be visible to queries to the processing node(s) 620. Processing node(s) 620 may acknowledge the deletion requests to propagation node 610 (as well as other updates). Propagation node 610 may maintain local state 612 which tracks the committed index partition version identifier(s) 614 for each partition of each secondary index to which the propagation node sends updates. For example, the committed index partition version identifier(s) may be the LSN or other version identifier of the latest update for an index partition up to which all prior updates have been applied (e.g., LSN 101,293, indicating that updates with LSNs less than 101,293 have all been committed to the index partition, consistent with the schema of the index). These version(s) 614 may be maintained for each index partition, and the minimum committed version identifier across all of the partitions 634 (e.g., the smallest LSN value of 614 for each partition of a secondary index) may be stored 646 as part of state 632 for propagation in propagation state 382. Propagation node 610 may then send a message 648 identifying a deletion point for item(s) with a tombstone marker. For example, processing node(s) 620 may receive an LSN value as the deletion point and may delete all items with tombstone markers 626 that have version identifiers 624 older than (e.g., less than) the deletion point LSN value. As the minimum committed version identifier 634 is saved in propagation state 382, even if propagation node 610 crashes or otherwise fails, a new propagation node that takes over will not send updates older than the deletion point, preventing, for instance, an older update that inserts the item from re-inserting the deleted item. If a processing node 620 fails and a new processing node (e.g., a new master node) assumes responsibility for the secondary index, then propagation node 610 may resend deletion point 648 to the new processing node so that it can delete items with tombstone markers 626 and older version identifiers 624 which the failed processing node did not complete. In at least some embodiments, processing nodes may maintain items with tombstone markers 626 (or pointers to them) in a B-Tree or other data structure for quickly identifying which items have tombstone markers. In some embodiments, if the number of tombstone markers exceeds a threshold amount, then processing node(s) may send an indication to the processing nodes of the database table to throttle updates.
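A sketch of what acting on the deletion point 648 could look like at a processing node, assuming an in-memory dict of records for illustration (a real node would walk the B-Tree of tombstones mentioned above):

```python
def prune_tombstones(records, deletion_point_lsn):
    """Delete items whose tombstone marker carries a version identifier older
    than the deletion point; updates below that LSN can no longer arrive,
    so the deleted item cannot be resurrected by an out-of-order insert."""
    return {
        key: rec
        for key, rec in records.items()
        if not (rec.get("tombstone") and rec["version"] < deletion_point_lsn)
    }
```

The safety argument is exactly the one in the text: because the deletion point is the minimum committed version identifier durably recorded in propagation state 382, no surviving or replacement propagation node will ever replay an update older than it.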
FIG. 7 is a logical block diagram illustrating interactions between processing nodes storing a secondary index, a propagation node, and a propagation state store that utilize tombstone markers and global clock values to handle deletions of items at a secondary index, according to some embodiments. Global clock service 220 may provide clock value mappings to a processing node 710. Based on the global clock value mappings 750, processing node 710 may determine global clock values for the updates and provide the updates 760 to propagation node 720 with mapped clock values (along with version identifiers). Propagation node 720 may send requests to apply the updates, including delete(s) 762, to processing node(s) 730 for respective item(s) 732. While a version identifier may be maintained for each item, for deleted item(s) 732, a current version identifier 734 may be maintained along with a tombstone marker 736 (which may indicate that the item has been deleted and should not be visible to queries to the processing node(s) 730) and the clock value 738 mapped to the update. Processing node(s) 730 may acknowledge the deletion requests 764 to propagation node 720 (as well as other updates). Propagation node 720 may maintain local state 722 which tracks the committed index partition version identifier(s) 724 for each partition of each secondary index to which the propagation node sends updates. As discussed above with regard to FIG. 6, the committed index partition version identifier(s) may be the LSN or other version identifier of the latest update for an index partition up to which all prior updates have been applied, in some embodiments. These version(s) 724 may be maintained for each index partition. Additionally, a clock value 726 for each partition of each secondary index to which the propagation node 720 sends updates may be maintained. Clock value 726 may be the clock value up to which all prior updates have been applied (e.g., based on acknowledgements from processing node(s) 730). Propagation node 720 may determine the minimum committed version identifier across all of the partitions 744, along with the corresponding clock value 746, to be stored 766 as part of state 742 for propagation in propagation state 382. Propagation node 720 may send a committed clock value 768 to processing nodes 730, in some embodiments. The committed clock value 768 may indicate that the propagation node 720 has sent all updates for the committed clock value (and has received acknowledgement of successful completion for them as well), in some embodiments. Processing node(s) 730 may compute the minimum clock value from all propagator nodes from which they receive updates and then delete all items with tombstone markers and clock values less than the minimum committed clock value. If processing node(s) 730 receive an update to insert an item mapped to a clock value less than the computed minimum clock value, the update can be identified as out-of-order and ignored or dropped, in some embodiments. In some embodiments, if a processing node for a database table is unavailable (e.g., failed or prevented from communicating due to another type of failure or outage, such as a network partition), or if the processing node for the database table is unable to get clock values, then the calculated minimum clock value might not move forward for a while. To avoid accumulating too many tombstones, an exception list for database tables (or partitions thereof) that have not been updated may be saved (which may allow some of those tombstones to be removed).
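A minimal sketch of the clock-value bookkeeping at a processing node hosting index partitions, under assumed data shapes (a dict of per-propagator committed clock values), could look like this in Python:

```python
def committed_clock_floor(committed_clock_by_propagator):
    """Each propagator reports a committed clock value; the minimum across
    all of them bounds which tombstoned items are safe to remove."""
    return min(committed_clock_by_propagator.values())

def accept_insert(update_clock, clock_floor):
    # An insert mapped to a clock value below the floor arrived out of
    # order (a newer delete has already been fully propagated): drop it.
    return update_clock >= clock_floor
```

The exception list described above exists precisely because this floor is a minimum: one silent propagator (or a source table that cannot obtain clock values) pins the floor in place and stalls tombstone cleanup for everyone else.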
FIG. 8 is a logical block diagram illustrating interactions between processing nodes storing a secondary index and a propagation node that utilizes partition state mapping to handle deletions of items at a secondary index, according to some embodiments. Propagation node 810 may send requests to apply updates, including delete(s) 840, to processing node(s) 820 for respective item(s) 822. While a version identifier may be maintained for each item, for deleted item(s) 822, a current version identifier 824 may be maintained. State information 826 that maps information for partitions of a database table that are a source of information for the secondary index partition may also be maintained. The state information 826 may be updated when delete(s) 840 are received (but not for other updates), in some embodiments, to include a last committed version identifier for the source table partition from which the delete is received. Processing node(s) 820 may acknowledge the deletion requests 842 to propagation node 810 (as well as other updates). Propagation node 810 may maintain local state 812 which tracks the committed index partition version identifier(s) 814 for each partition of each secondary index to which the propagation node sends updates. For example, the committed index partition version identifier(s) may be the LSN or other version identifier of the latest update for an index partition up to which all prior updates have been applied. These version(s) 814 may be maintained for each index partition, and the minimum committed version identifier across all of the partitions 834 (e.g., the smallest LSN value of 814 for each partition of a secondary index) may be stored 846 as part of state 832 for propagation in propagation state 382. Propagation node 810 may then send a message 848 identifying a deletion point for source table partition entries in state 826. Processing node(s) 820 can delete those entries for partitions from state 826 if the last committed version identifier is less than the deletion point. If processing node(s) 820 receive an update to insert an item from a partition of the database table with a version identifier less than the last committed version identifier 828 for the source table partition in state 826, the update can be identified as out-of-order and ignored or dropped, in some embodiments.
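The out-of-order check against the per-source-partition state 826 in FIG. 8 reduces to a small comparison; here is a Python sketch under assumed data shapes (a dict mapping source partition to its last committed delete version):

```python
def accept_insert_from_partition(partition_state, source_partition, update_version):
    """Reject inserts whose version identifier is older than the last
    committed delete recorded for their source table partition; such
    updates are out of order with respect to a propagated deletion."""
    last_committed = partition_state.get(source_partition)
    return last_committed is None or update_version >= last_committed
```

Unlike the tombstone schemes of FIGS. 6 and 7, this approach keeps no per-item markers at all; the per-partition watermark alone is enough to fence off stale inserts.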
Note that the techniques described above (and below) with respect to database tables (and partitions thereof) and secondary indexes (and partitions thereof) may be equally applicable to other data sets and replicated portions. The examples of a scalable architecture for propagating updates to replicated data as discussed in FIGS. 2-8 above have been given in regard to a database service (e.g., a non-relational or NoSQL database service or a relational database service). However, various other types of data access, management, or control systems or data processing systems may implement a scalable architecture for propagating updates to replicated data, in other embodiments. FIG. 9 is a high-level flowchart illustrating various methods and techniques to utilize a scalable architecture for propagating updates to replicated data, according to some embodiments. These techniques, as well as the techniques discussed with regard to FIGS. 10-13, may be implemented using components or systems as described above with regard to FIGS. 2-8, as well as other types of databases, storage engines, systems, or clients, and thus the following discussion is not intended to be limiting as to the other types of systems that may implement the described techniques. As indicated at 910, an update to an item that has been committed to a data set may be received at a first node, in various embodiments. For example, the updates may add items, entries, values, attributes or other information, or change, modify, or delete items, entries, values, attributes or other information in the data set. The update may be associated with an identifier, such as a timestamp, logical sequence number, or other identifier for a logical ordering of updates to the data set (which may be determined by processing nodes, like processing nodes 330 above in FIG. 3, which perform/commit the update to the data set). The update may be received from a node hosting the data set (or partition thereof, like processing nodes 330 in FIG. 3) or a propagation node (as discussed above with regard to FIG. 5). In at least some embodiments, the received update may include a version of the item that is updated according to the update (e.g., as opposed to a difference value or other description of the change made to the item, which would depend on knowledge of a prior value of the item). As indicated at 920, a second node storing a replicated portion of the data set including the item that should apply the update may be identified, in some embodiments. For example, data formats, schemas, or other mapping information that describes what portions of the data set should be replicated, such as which items and what attributes, values, or portions of the item, as well as to what node (as multiple nodes may host different replicated portions), may be evaluated to identify the second node. FIG. 10, discussed below, provides further examples of techniques for determining whether an update is applicable and thus should be replicated, in some embodiments. In some embodiments, multiple nodes may be identified for receiving the update. For example, if the update is a change to an attribute value for the item that would cause a change in location for the item to another node (e.g., according to a partitioning scheme), then the update may result in two nodes being identified, one for deleting the current item and one for inserting the updated item. As indicated at 930, a request to perform a conditional atomic operation to apply the update to the item may be sent to the second node. The request may include a condition that compares the version identifier associated with the update to a current version identifier for the item at the second node. For example, as discussed below with regard to FIG. 12, if the version identifier of the update is later than the current version, the update may be performed. In this way, updates that fail to perform are updates that include a version of the item that is “older” than the version of the item in the replicated portion of the data, preventing updates that arrive or are otherwise processed out of order from being made available for querying at the replicated portion of the data set at the second node.
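The node identification at 920, including the two-target case where a changed indexed attribute moves an item between partitions, might be sketched as follows; the hash-based partitioning scheme here is an assumption for illustration, not necessarily the scheme the service uses:

```python
import hashlib

def node_for_key(index_key, nodes):
    """Pick the node hosting the partition that covers this key
    (assumed hash-partitioning over a fixed node list)."""
    digest = hashlib.sha256(repr(index_key).encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

def targets_for_update(old_key, new_key, nodes):
    # If an indexed attribute changed, the item may move between partitions:
    # delete at the old location and insert at the new one (two targets).
    old_node = node_for_key(old_key, nodes)
    new_node = node_for_key(new_key, nodes)
    if old_node == new_node:
        return {new_node: "update"}
    return {old_node: "delete", new_node: "insert"}
```

Both resulting requests would still carry the same version identifier, so the conditional checks at each target node keep the delete/insert pair safe against reordering.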
FIG. 10 is a high-level flowchart illustrating various methods and techniques to utilize a scalable architecture for propagating updates to a secondary index from a partition of a database table, according to some embodiments. As indicated at 1010, an update to an item that has been committed to a partition of a table may be received. As discussed above, the updates may add items, entries, values, attributes or other information, or change, modify, or delete items, entries, values, attributes or other information in the partition of the table. The update may be associated with an identifier, such as a timestamp, logical sequence number (LSN), or other identifier for a logical ordering of updates to the table. The version identifier may be assigned or mapped to an update by the processing node (e.g., the master node for a replica group of processing nodes) that performed the update. The update may be received from a node hosting the partition (like processing nodes 330 in FIG. 3) or a propagation node (as discussed above with regard to FIG. 5). In at least some embodiments, the received update may include a version of the item that is updated according to the update (as opposed to the difference between versions of the item). As indicated at 1020, a determination may be made as to whether the update is applicable to a secondary index, in some embodiments. The secondary index schema, for instance, may be evaluated with respect to the update to the item. If the updated item has an attribute, value, or other information included by the secondary index schema to be stored as part of a secondary index, then the update may be applicable, in some embodiments. For example, item A may have multiple attributes (e.g., Attributes AA, BB, CC, DD, EE, FF, and so on). A secondary index may include items where the value of Attribute AA=“2017” and may also include the values of Attributes DD and EE. If the updated item has changed the value of AA, DD, or EE, then the update may be applicable (including updates that would result in the removal of an item from the secondary index). Updates not applicable to a secondary index may be dropped, filtered out, or otherwise ignored. Secondary index schema evaluations may be performed multiple times for each update, as multiple secondary indexes may be maintained for the same database table and the update could be applicable to some of the secondary indexes and not others. As indicated at 1030, node(s) storing partition(s) of the secondary index(es) of the table to apply the update to the item(s) in the partition(s) of the secondary index(es) may be identified, in some embodiments. Mapping information or a partitioning scheme (e.g., a hashing technique) may identify which nodes host the partitions including respective copies of the item, in some embodiments. As indicated at 1040, request(s) to perform conditional atomic operations to apply the update to the item may be sent to the identified node(s), in various embodiments. As noted above, the request may include a condition that compares the first version identifier associated with the update to respective second version identifier(s) for the item at the identified node(s). For example, as discussed below with regard to FIG. 12, if the version identifier of the update is later than the current version, the update may be performed. As indicated at 1050, response(s) may be received for the request(s), in some embodiments. The responses may acknowledge the success of the request or a failure (e.g., a condition check failure). Based on the response(s), a determination may be made as to whether the update succeeded, as indicated at 1060. For example, if all identified node(s) acknowledged the update as successful, then the update may be considered to have succeeded. If one or more nodes failed to perform the update, then the update may not have succeeded.
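The success determination at 1050/1060 can be expressed in a couple of lines; this Python sketch assumes hypothetical per-node response dicts with a status field, matching the rule in the text that every identified node must acknowledge the update:

```python
def update_succeeded(responses):
    """An update fanned out to multiple identified nodes (e.g., several
    secondary index partitions) succeeds only if every node acknowledged
    it; any condition check failure or missing response means it did not."""
    return len(responses) > 0 and all(r.get("status") == "ok" for r in responses)

# e.g. update_succeeded([{"status": "ok"}, {"status": "condition_check_failed"}])
# -> False: at least one index partition did not apply the update.
```

Only after this all-or-nothing check passes is the update recorded as committed in the propagation state, as described next.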
As indicated at 1070, the propagation state may be updated to identify the update as committed to the partition(s) of the secondary index(es), in some embodiments. The propagation state may be maintained on propagation nodes and/or in a separate propagation state store, as discussed above with regard to FIGS. 6-8, in some embodiments. FIG. 11 is a high-level flowchart illustrating various methods and techniques to send updates to propagation nodes, according to some embodiments. As indicated at 1110, a propagation node may be identified to receive updates for a partition of a source table, in some embodiments. For example, mapping information that maps secondary indexes (or other replicated data sets) that replicate from the source table partition to propagation nodes may be evaluated to determine which propagation node receives updates for the source table partition. In some embodiments, such as in a multi-tier propagation node architecture, random assignment or other techniques for assigning a first-tier node may be used to collect or aggregate requests from one or multiple nodes, regardless of whether those nodes store partitions of the same table, as first-tier propagation nodes may be capable of handling update requests from any processing node (first-tier propagation nodes can route the update to other propagation nodes that can direct applicable updates to the correct processing hosts for replicated data). As indicated at 1120, updates to the partition of the source table may be received. As discussed above with regard to FIG. 3, updates may be received from a client of a database table via an API or other interface, describing the changes to be performed as part of the update. As indicated at 1130, the updates to the partition of the source table may be performed, in some embodiments. For example, various commitment, synchronization, or other request processing techniques may be performed to durably perform and store the update to the partition of the database table (e.g., two-phase commit among processing nodes that form a replica group). As indicated at 1140, once performed or otherwise committed to the partition of the source table, updates may be sent to the identified propagation node. For example, a push-based propagation technique may be implemented that sends out updates to the propagation node periodically (or as they are committed). In some embodiments, the propagation node may send a request for updates, and the processing node for the source table partition may send the updates in response to the request according to a pull model. In at least some embodiments, a failure of the propagation node may be detected, as indicated at 1150. For example, the propagation node may fail to send a heartbeat or other acknowledgement to the source table partition node. In embodiments implementing pull-based updates, the propagation node may fail to request updates within a period of time. Once detected, a new propagation node may be identified, as indicated at 1110, in some embodiments.
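A sketch of the failure detection at 1150, assuming heartbeats (or pull requests) timestamped with a monotonic clock; the timeout value is an arbitrary assumption:

```python
import time

def propagation_node_failed(last_contact, timeout_s=10.0, now=None):
    """The propagation node is presumed failed if it has neither sent a
    heartbeat nor (in pull-based embodiments) requested updates within the
    timeout; the caller then re-identifies a replacement, as at 1110."""
    now = time.monotonic() if now is None else now
    return (now - last_contact) > timeout_s
```

Either delivery model reduces to the same signal: silence past the timeout, at which point the source partition node simply re-runs the identification step with a new propagation node.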
FIG. 12 is a high-level flowchart illustrating various methods and techniques to process a conditional atomic request to apply an update to a replicated portion of a data set, according to some embodiments. As indicated at 1210, a conditional, atomic update request for an item may be received from a propagation node, in some embodiments. The request may be formatted according to an API or other interface format (as discussed above with regard to FIG. 3) which may indicate that the update is conditional, in some embodiments. The condition to evaluate may be included in the request, in some embodiments. For example, comparison operators (<, >, =, !=, etc.) may be used to describe conditions. In at least some embodiments, the condition in the conditional, atomic update request for the item may describe a comparison between the version identifier and a current version identifier for the item. As indicated at 1220, a comparison of the version identifier of the request and the current version identifier may be performed, in some embodiments. As indicated by the negative exit from 1230, if the version identifier is not later than the current version identifier, then an error indication of a condition check failure may be sent to the propagation node, as indicated at 1260, in some embodiments. As indicated by the positive exit from 1230, if the version identifier is later than the current version identifier, then the update may be applied to the item (e.g., the item may be overwritten with the updated version of the item, the item may be inserted, or the item may be deleted or marked for deletion with a tombstone marker), as indicated at 1240, in some embodiments. An acknowledgment of the update to the item may be sent to the propagation node, as indicated at 1250, in some embodiments.
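The version comparison at 1220-1260 might be sketched like this in Python; an in-memory dict stands in for the node's item storage, and the insert case (no current item, so no comparison) is handled as described earlier:

```python
def handle_conditional_update(store, key, new_item, version_id):
    """Apply the update only if its version identifier is later than the
    stored one, or if no item exists yet; otherwise report a condition
    check failure back to the propagation node."""
    current = store.get(key)
    if current is not None and version_id <= current["version"]:
        return {"status": "condition_check_failed"}              # step 1260
    store[key] = {"item": new_item, "version": version_id}       # step 1240
    return {"status": "ok"}                                      # step 1250
```

Because the version check and the write happen as one atomic step at the storing node, two propagation nodes replaying overlapping update streams cannot interleave a stale write between the check and the apply.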
FIG. 13 is a high-level flowchart illustrating various methods and techniques to perform a failover operation to a new propagation node, according to some embodiments. As indicated at 1310, propagation responsibility for partition(s) of a table to update partition(s) of secondary index(es) may be assigned to a propagation node. For example, a control plane or other management function may track propagation assignments and determine that a failed or underperforming propagation node needs to be replaced. In some embodiments, propagation responsibility may be shifted or reassigned in order to balance propagation workload amongst propagation nodes. A processing node that hosts a source data set that is replicated may request or requisition a propagation node, in some embodiments. Similarly, a processing node hosting a replicated portion of the data may request or requisition a propagation node, in some embodiments. As indicated at 1320, a propagation state data store may be accessed to obtain a last committed version identifier for updates to the partition(s) of the table performed at the partition(s) of the secondary index(es), in some embodiments. As discussed above with regard to FIGS. 6-8, propagation nodes may track the last committed version identifier(s) for each secondary index (e.g., partition 1=LSN 11,201, partition 2=LSN 15,321, partition 3=LSN 12,704, and so on), which may be the LSN or other version identifier of the latest update for an index partition up to which all prior updates have been applied. The minimum committed version identifier across all of the partitions (e.g., the smallest LSN value out of all of the LSN values maintained for each partition of a secondary index) may be identified and stored in propagation state (e.g., propagation state store 328), which may be separate from the propagation node that identified the minimum committed version identifier. In this way, the minimum committed version identifier survives the failure of the propagation node that determined it. This minimum committed version identifier may be the obtained last committed version identifier, in various embodiments. As indicated at 1330, update(s) committed at the partitions of the table that have occurred since the last committed version identifier may be evaluated to determine recovery value(s) for item(s) in the partitions of the secondary index, in some embodiments. If, for instance, the last committed version identifier has an LSN value of 22,933, then updates that have been committed to the source data set (e.g., partition(s) of a database table) with greater LSN values (e.g., >22,933) may be evaluated. The update(s) may be obtained from processing nodes hosting the source partitions or other locations or data stores that log committed updates to the table partitions, in some embodiments. The recovery value(s) may be determined by searching the updates for the latest value of items in the committed updates (including deletions of items), in some embodiments. For example, as noted above, updates may include a version of the item to which the update is directed, so locating the latest value may include finding the last update directed to an item. In this way, there need not be a replay of the entire set of updates to an item to determine the item's value, as the updates do not merely describe differences or changes made by an update. For instance, updates to item A may be described as “LSN 11315 item A=12, LSN 11942 item A=15, LSN 12001 item A=22,” so that the last update to item A is found at LSN 12001. The value of item A, “22,” does not have to be calculated, whereas updates describing differences would have to be calculated dependent on prior values (LSN 11315 item A=12, LSN 11942 item A=+3, LSN 12001 item A=+7). The recovery values may be sent as part of requests to the node(s) hosting the partition(s) of the secondary index(es) to update the item(s) to the recovery value(s), as indicated at 1340. In some embodiments, the same techniques for conditional, atomic operations may be applied, as discussed above with regard to FIGS. 9 and 10, to update the items with recovery values. Then, the propagation node may begin performing additional update(s) committed to the partition(s) of the database table received from node(s) hosting the partition(s) of the table, in various embodiments, as indicated at 1350.
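Because each update carries the full item, the recovery-value determination at 1330 reduces to a last-writer-wins scan; a minimal Python sketch under assumed update shapes (dicts with lsn and key fields):

```python
def recovery_values(updates_since_checkpoint):
    """Keep the latest full item value (or deletion) seen per key among
    updates committed after the last committed version identifier; no
    replay of intermediate differences is needed."""
    latest = {}
    for update in sorted(updates_since_checkpoint, key=lambda u: u["lsn"]):
        latest[update["key"]] = update  # later LSNs overwrite earlier ones
    return latest

# From the text's example: LSN 11315 A=12, LSN 11942 A=15, LSN 12001 A=22
# leaves item A at value 22 without computing any intermediate differences.
```

The resulting per-item recovery values are then re-sent through the same conditional, atomic path, so replaying them is idempotent even if some had already reached the index before the failover.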
The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the methods may be implemented by a computer system (e.g., a computer system as in FIG. 14) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may implement the functionality described herein (e.g., the functionality of various servers and other components that implement the distributed systems described herein). The various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Embodiments to implement a scalable architecture for propagating updates to replicated data as described herein may be executed on one or more computer systems, which may interact with various other devices. One such computer system is illustrated by FIG. 14. In different embodiments, computer system 2000 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or compute node, computing device or electronic device. In the illustrated embodiment, computer system 2000 includes one or more processors 2010 coupled to a system memory 2020 via an input/output (I/O) interface 2030. Computer system 2000 further includes a network interface 2040 coupled to I/O interface 2030, and one or more input/output devices 2050, such as cursor control device, keyboard, and display(s). Display(s) may include standard computer monitor(s) and/or other display systems, technologies or devices, in one embodiment. In some embodiments, it is contemplated that embodiments may be implemented using a single instance of computer system 2000, while in other embodiments multiple such systems, or multiple nodes making up computer system 2000, may host different portions or instances of embodiments. For example, in one embodiment some elements may be implemented via one or more nodes of computer system 2000 that are distinct from those nodes implementing other elements. In various embodiments, computer system 2000 may be a uniprocessor system including one processor 2010, or a multiprocessor system including several processors 2010 (e.g., two, four, eight, or another suitable number). Processors 2010 may be any suitable processor capable of executing instructions, in one embodiment. For example, in various embodiments, processors 2010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 2010 may commonly, but not necessarily, implement the same ISA. In some embodiments, at least one processor 2010 may be a graphics processing unit. A graphics processing unit or GPU may be considered a dedicated graphics-rendering device for a personal computer, workstation, game console or other computing or electronic device, in one embodiment. Modern GPUs may be very efficient at manipulating and displaying computer graphics, and their highly parallel structure may make them more effective than typical CPUs for a range of complex graphical algorithms. For example, a graphics processor may implement a number of graphics primitive operations in a way that makes executing them much faster than drawing directly to the screen with a host central processing unit (CPU). In various embodiments, graphics rendering may, at least in part, be implemented by program instructions for execution on one of, or parallel execution on two or more of, such GPUs. The GPU(s) may implement one or more application programmer interfaces (APIs) that permit programmers to invoke the functionality of the GPU(s), in one embodiment. System memory 2020 may store program instructions 2025 and/or data accessible by processor 2010, in one embodiment.
In various embodiments, system memory 2020 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing desired functions, such as those described above, are shown stored within system memory 2020 as program instructions 2025 and data storage 2035, respectively. In other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 2020 or computer system 2000. A computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system 2000 via I/O interface 2030. Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2040, in one embodiment. In one embodiment, I/O interface 2030 may coordinate I/O traffic between processor 2010, system memory 2020, and any peripheral devices in the device, including network interface 2040 or other peripheral interfaces, such as input/output devices 2050. In some embodiments, I/O interface 2030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 2020) into a format suitable for use by another component (e.g., processor 2010). In some embodiments, I/O interface 2030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 2030 may be split into two or more separate components, such as a north bridge and a south bridge, for example. In addition, in some embodiments some or all of the functionality of I/O interface 2030, such as an interface to system memory 2020, may be incorporated directly into processor 2010. Network interface 2040 may allow data to be exchanged between computer system 2000 and other devices attached to a network, such as other computer systems, or between nodes of computer system 2000, in one embodiment. In various embodiments, network interface 2040 may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs; or via any other suitable type of network and/or protocol. Input/output devices 2050 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 2000, in one embodiment. Multiple input/output devices 2050 may be present in computer system 2000 or may be distributed on various nodes of computer system 2000, in one embodiment.
In some embodiments, similar input/output devices may be separate from computer system 2000 and may interact with one or more nodes of computer system 2000 through a wired or wireless connection, such as over network interface 2040. As shown in FIG. 14, memory 2020 may include program instructions 2025, that implement the various embodiments of the systems as described herein, and data store 2035, comprising various data accessible by program instructions 2025, in one embodiment. In one embodiment, program instructions 2025 may include software elements of embodiments as described herein and as illustrated in the Figures. Data storage 2035 may include data that may be used in embodiments. In other embodiments, other or different software elements and data may be included. Those skilled in the art will appreciate that computer system 2000 is merely illustrative and is not intended to limit the scope of the embodiments as described herein. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including a computer, personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, network device, internet appliance, PDA, wireless phones, pagers, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device. Computer system 2000 may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available. Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-readable medium separate from computer system 2000 may be transmitted to computer system 2000 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. This computer readable storage medium may be non-transitory. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present invention may be practiced with other computer system configurations. 
Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, non-volatile media such as RAM (e.g., SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The various methods as illustrated in the Figures and described herein represent example embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended that the invention embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense. 15632266 amazon technologies, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:31AM Apr 27th, 2022 08:31AM Amazon Consumer Services General Retailers
