FB

Facebook

- NASDAQ:FB
Last Updated 2024-04-26

Patent Grants Data

Patents granted to organizations.
Ticker Symbol Entity Name Publication Date Filing Date Patent ID Invention Title Abstract Patent Number Claims Number of Claims Description Application Number Assignee Country Kind Code Kind Code Description URL Classification Code Length of Grant Date Added Date Updated Company Name Sector Industry
nasdaq:fb Facebook Apr 26th, 2022 12:00AM Nov 9th, 2020 12:00AM https://www.uspto.gov?id=USD0949900-20220426 Display panel of a programmed computer system with a graphical user interface D949900 What is claimed is the ornamental design for a display panel of a programmed computer system with a graphical user interface, as shown and described. 1 FIG. 1 is a face view of a first state of a first embodiment of a display panel of a programmed computer system with a graphical user interface. FIG. 2 is a face view of a second state thereof. FIG. 3 is a face view of a first state of a second embodiment of a display panel of a programmed computer system with a graphical user interface; and, FIG. 4 is a face view of a second state thereof. The broken lines in the drawings are included for the purpose of illustrating environmental structure and form no part of the claimed design. The appearance of the transitional image sequentially transitions between the images shown in FIGS. 1-2 and between the images shown in FIGS. 3-4. The process or period in which one image transitions to another image forms no part of the claimed design. 29757700 meta platforms, inc. USA S1 Design Patent Open D14/486 15 Apr 27th, 2022 08:37AM Apr 27th, 2022 08:37AM Facebook Technology Software & Computer Services
nasdaq:fb Facebook Apr 26th, 2022 12:00AM Dec 23rd, 2019 12:00AM https://www.uspto.gov?id=US11313898-20220426 Quad small form-factor pluggable thermal test vehicle A mechanism for performing thermal testing is described. The system for performing thermal testing may include a housing, a heating element and a processor. The housing is configured to be compatible with a plurality of different types of transceiver form factors. The heating element is configured to be at a location within the housing to approximate an integrated circuit chip heat source of the plurality of different types of transceiver form factors. The processor is configured to automatically conduct a thermal test and provide thermal test results. 11313898 1. A system, comprising: a housing configured to be compatible with a plurality of different types of transceivers; a heating element configured to be at a location within the housing to approximate an integrated circuit chip heat source of the plurality of different types of transceivers; a thermal sensor; and a processor configured to automatically conduct a thermal test including by being configured to receive temperature readings from the thermal sensor and provide thermal test results, wherein the thermal test results include a temperature of a thermal mass in an apparatus coupled to the system. 2. The system of claim 1, wherein the processor is further configured to provide a test traffic load. 3. The system of claim 1, wherein the thermal test results further include a steady-state temperature of the thermal mass based on the temperature readings. 4. The system of claim 3, wherein the processor being configured to provide thermal test results includes the processor being configured to: curve fit the temperature readings to calculate the steady-state temperature of the thermal mass before an actual steady-state temperature is reached by the thermal mass. 5. The system of claim 1, wherein the housing is configured in a quad small form-factor pluggable configuration. 6. The system of claim 1, wherein the processor being configured to automatically conduct a thermal test further includes the processor being configured to control the heating element to provide a heat profile of the integrated circuit chip heat source. 7. The system of claim 1, wherein the processor being configured to automatically conduct a thermal test further includes the processor being configured to control the heating element to provide a plurality of amounts of energy per unit time. 8. The system of claim 1, further comprising: a light box for indicating status of the system. 9. The system of claim 1, further comprising: a pull handle coupled with the housing. 10. The system of claim 1, further comprising: a heat spreader coupled with the heating element. 11. 
A system, comprising: a housing configured to be compatible with a plurality of different types of transceiver form factors; a heating element configured to be at a location within the housing to approximate an integrated circuit chip heat source of the plurality of different types of transceiver form factors; a heat spreader coupled with the heating element; a thermal sensor; and a processor configured to automatically conduct a thermal test and provide thermal test results including a steady-state temperature of a thermal mass, the processor being configured to provide the thermal test results including the processor being configured to receive temperature readings from the thermal sensor; and curve fit the temperature readings to calculate the steady-state temperature of the thermal mass before an actual steady-state temperature is reached by the thermal mass. 12. A method, comprising: plugging a test vehicle into an apparatus having a thermal mass, the test vehicle including a housing, a heating element and a thermal sensor, the housing configured to be compatible with a plurality of different types of transceivers, the heating element configured to be at a location within the housing to approximate an integrated circuit chip heat source of the plurality of different types of transceivers; controlling the heating element to provide a heat profile corresponding to the integrated circuit chip heat source; receiving temperature readings from the thermal sensor; and providing thermal test results based on the temperature readings, the thermal test results including a temperature of the thermal mass in the apparatus coupled to the test vehicle. 13. The method of claim 12, wherein the test vehicle further includes a processor and wherein the controlling, receiving and providing steps are performed by the processor. 14. The method of claim 13, further comprising: utilizing the processor to provide a test traffic load to the apparatus corresponding to the thermal mass. 15. The method of claim 13, wherein the providing the thermal test results further includes: determining a steady-state temperature of the thermal mass based on the temperature readings received. 16. The method of claim 15, wherein the determining the steady-state temperature further includes: curve fitting the temperature readings to calculate the steady-state temperature of the thermal mass before an actual steady-state temperature is reached by the thermal mass. 17. The method of claim 13, wherein the controlling the heating element further includes: controlling the heating element to provide a plurality of amounts of energy per unit time. 18. The method of claim 13, further comprising: controlling the heating element based on the temperature readings. 19. The method of claim 18, wherein the heating element is further configured to be shut off based on the temperature readings reaching a threshold. 19 BACKGROUND OF THE INVENTION As part of design and testing of computer systems, power dissipation is desired to be accounted for. For example, when developing a computing device such as a server, the server is desired to be tested to be certain that the central processing unit (CPU), graphics processing unit (GPU) and/or analogous components can be adequately cooled during usage. Similarly, when developing chassis to hold computing equipment in a data center, the chassis is designed to ensure that all of the computing equipment retained therein can be sufficiently cooled.
Without such testing, the server, chassis or other computing apparatus may be unable to provide sufficient cooling for the corresponding components to stay within their temperature specifications. This may adversely affect performance of the deployed computing apparatus, for example reducing processor or communication speed, and may result in failure of the components. Accordingly, a mechanism for thermally testing computing apparatus is desired. BRIEF DESCRIPTION OF THE DRAWINGS Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings. FIGS. 1A-1B are diagrams depicting an embodiment of a computing apparatus under test and a thermal test vehicle used in the test. FIGS. 2A-2B are diagrams depicting an embodiment of a thermal test vehicle. FIG. 3 is a flow chart depicting an embodiment of a method for thermally testing a computing apparatus. FIG. 4 is a flow chart depicting an embodiment of a method for thermally testing a computing apparatus. FIG. 5 is a graph depicting an embodiment of a thermal property determined using a thermal test vehicle. FIGS. 6A-6B are diagrams depicting an embodiment of a thermal test vehicle. DETAILED DESCRIPTION The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured. Thermal management of computing equipment is an integral part of the development process. Managing heat dissipation may be the limiting factor in not only the speeds attainable, but also reliability of components. 
For example, a computing apparatus such as a server is desired to be tested to ensure that the central processing unit (CPU), graphics processing unit (GPU), application specific integrated circuit (ASIC) and/or analogous components can be adequately cooled during usage. If the CPU and GPU are not sufficiently cooled, processing speed may be throttled and, in extreme cases, the processors damaged. Similarly, in planning a data center, components (e.g. CPUs and/or GPUs) for multiple servers within a chassis are desired to have sufficient cooling to function as desired. Other computing apparatus are desired to be capable of handling the heat generated by components used therein. For example, switches are desired to be capable of adequately dissipating the heat generated by transceivers therein in order to maintain the desired speed of communications. If thermal management is not incorporated into the development, the processor, the server, chassis, switch, networking device and/or other computing apparatus may be unable to adequately dissipate the heat from their components or other computing devices. This may result in poor performance or failures of the equipment that is deployed. In order to thermally test power dissipation of components in a computing apparatus such as a server, switch or chassis, the components themselves may be used to provide the desired thermal profile for testing. For example, to determine whether a server can adequately cool a CPU or GPU, the server may be tested by running the CPU with a desired load. Similarly, a switch used in a data center may be tested utilizing quad small form-factor pluggable (QSFP) transceivers plugged into the appropriate sockets of the switch and various loads provided. Although such methods allow for thermal testing of computing apparatus, there may be drawbacks. For example, the QSFP transceiver, CPU, GPU or other component may fail during the test. Such components may be expensive, increasing development costs. In addition, for the components to be tested, the computing apparatus must be capable of running the components. Thus, both the hardware and software used to run the components must be present in the computing apparatus. Consequently, testing may take place after the computing apparatus is nearing the end of the development phase. This makes any changes to the design of the computing apparatus based on the results of thermal testing more challenging to implement. Further, such methods may not easily allow for testing next generation components with current generation computing apparatus. Accordingly, a mechanism for improving thermal testing of computing apparatuses is desired. A mechanism for thermal testing is described. A system for thermal testing (also termed a thermal test vehicle) includes a heating element, a thermal sensor and a processor. The processor is configured to control the heating element to output an amount of energy. The processor also receives temperature readings from the thermal sensor and determines a thermal property associated with a thermal mass based at least in part on the amount of energy output by the heating element and the temperature readings received. The thermal mass is part of a computing apparatus under test and may be utilized in managing temperature of the computing apparatus and attached devices. Thus, the system may be used to test a computing apparatus based on the thermal mass of the computing apparatus.
The computing apparatus can but need not have its functional components, such as processors and software, provided during testing. The processor may control the heating element to provide the amount of energy in a particular manner. For example, the power output by the heating element may be varied and/or otherwise controlled to provide a particular amount of energy over a particular time interval. In some embodiments, the heating element may be controlled based upon the temperature readings. For example, the heating element may be turned off if the temperature readings indicate the temperature meets and/or exceeds a threshold temperature. Thus, the heating element may be prevented from failing during the test. In some embodiments, the power provided by the heating element is reduced or otherwise limited based on increases in temperature. In some embodiments, the processor is further configured to provide a test processing load. For example, the processor may mimic expected traffic to the computing apparatus, which the computing apparatus may process if components are present in the computing apparatus. Thus, the thermal load from the system (the amount of energy provided by the heating element) and an additional thermal load for the computing apparatus can be simulated. Various thermal properties may be determined using the system and method described herein. In some embodiments, the mass multiplied by the thermal capacity of the thermal mass is determined based on the energy and change in temperature. In some embodiments, a steady-state temperature of the thermal mass is determined based on the energy and/or power (energy per unit time). In some embodiments, the steady-state temperature may be determined by fitting the temperature readings to a curve. Thus, the steady-state temperature of the thermal mass may be calculated before an actual steady-state temperature is reached by the thermal mass. Consequently, the method and system described herein may reduce the time required for testing the computing apparatus. In some embodiments, the system may include a housing configured to be compatible with a plurality of different types of form factors. Thus, the system may be able to fit and communicatively couple with the computing apparatus. In some embodiments, the housing may be considered to include a circuit board on which the processor and heating elements reside. The housing also includes connectors for at least one of the form factors. For example, the housing may be compatible with various connectors used for CPUs, GPUs, transceivers and/or other computing devices. In some embodiments, the heating element resides at a location in the housing corresponding to a power-consuming circuit element in a component for test. For example, the heating element may reside at or near the position of a CPU, GPU or ASIC on a circuit board. Further, in some embodiments, a heat spreader is thermally coupled to the heating element. Consequently, the desired thermal footprint may be provided by the system. In some embodiments, the system (i.e. the thermal test vehicle) is plugged into a computing apparatus having a thermal mass. The heating element is controlled to output an amount of energy. The thermal sensor provides temperature readings and a thermal property associated with the thermal mass is determined based at least in part on the amount of energy and the temperature readings.
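The threshold-guarded heater control described above can be sketched in a few lines of Python. This is a hedged illustration rather than the patent's implementation: read_temperature and set_heater_power are hypothetical driver callbacks, and the 85 °C shutoff threshold and 0.1 s sampling interval are assumed values.

    import time

    THRESHOLD_C = 85.0  # assumed protective shutoff temperature

    def run_heat_profile(profile, read_temperature, set_heater_power):
        # Drive the heating element through (duration_s, power_w) steps,
        # shutting it off early if the measured temperature reaches the
        # threshold, so the element is prevented from failing during the test.
        for duration_s, power_w in profile:
            set_heater_power(power_w)
            deadline = time.monotonic() + duration_s
            while time.monotonic() < deadline:
                if read_temperature() >= THRESHOLD_C:
                    set_heater_power(0.0)  # over-temperature: shut off
                    return False           # test aborted
                time.sleep(0.1)            # assumed sampling interval
        set_heater_power(0.0)
        return True

A real controller might instead reduce or limit power as the temperature rises, which the passage above also contemplates.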
In some embodiments, the processor controls the heating element, receives the temperature readings and determines the thermal property. In some embodiments, some or all of these functions may be provided by another device, such as a host computer system coupled to the test vehicle. As discussed above, this thermal testing of the computing apparatus may include varying the energy and/or energy per unit time output by the heating element, providing a test processing load to the computing apparatus and/or determining the steady-state temperature of the thermal mass. Thus, a computing apparatus may undergo thermal testing. For example, the system for performing thermal testing may include a housing, a heating element and a processor. The housing is configured to be compatible with a plurality of different types of transceiver form factors. For example, the housing may be configured in a quad small form-factor pluggable (QSFP) configuration. The heating element is configured to be at a location within the housing to approximate an integrated circuit chip heat source of the plurality of different types of transceiver form factors. The processor is configured to automatically conduct a thermal test and provide thermal test results. Thus, the processor may control the heating element to provide a heat profile of the integrated circuit chip heat source. In some embodiments, the processor may be further configured to provide a test traffic load. The system may also include a light box for indicating status of the system, a pull handle coupled with the housing and/or a heat spreader coupled with the heating element. FIGS. 1A-1B are diagrams depicting an embodiment of a thermal test vehicle 100 and a computing apparatus 130 under test. Although certain components are shown, in some embodiments, other and/or additional components may be present. Computing apparatus 130 includes thermal mass 132 and connector or socket 134. Thermal mass 132 may include a heat sink, the frame of computing apparatus 130 and/or other mechanism used in managing heat in computing apparatus 130. Socket/connector 134 is used to allow for connection to components. Socket/connector 134 may be a portion of a board to which a component such as a processor and/or other board may be attached or may be an external connector for additional devices. In some embodiments, other components 136 may be included in computing apparatus 130. For example, processor(s), memory, sensor(s) and/or other components that are desired to be part of computing apparatus 130 may be included. However, in some embodiments, some or all of such components may be omitted. Thus, computing apparatus 130 may be very early in the design and prototyping process. Thermal test vehicle 100 is an embodiment of a system for performing thermal testing of computing apparatus 130. Thermal test vehicle 100 includes one or more processors 110, connectors 114, one or more thermal sensors 116 and one or more heating elements 120. For simplicity, thermal test vehicle 100 is described in the context of a single processor 110, a single thermal sensor 116 and a single heating element 120. Also shown is memory 112 that may store instructions for processor 110. In some embodiments, additional components may be included. For example, in addition to thermal sensor 116, thermal test vehicle 100 may include current sensors, voltage sensors and/or other sensors. In some embodiments, thermal test vehicle 100 may include a connector for coupling to a host (not shown).
Heating element 120 is a ceramic heater in some embodiments. Thus, heating element 120 may effectively be a resistor that can output a desired amount of power. In some embodiments, heating element 120 is capable of outputting an amount of heat per unit time that simulates the actual component designed to be connected to socket/connector 134. For example, heating element 120 may be used to mimic the heat profile of a CPU, GPU, transceiver, and/or other component. The combination of one or more heating elements 120 may output an amount of energy that may vary widely. For example, if a particular heating element 120 may provide zero through five hundred watts, a combination of five heating elements 120 might output fifty watts through two thousand five hundred watts. In other embodiments, other powers may be possible utilizing various configurations of heating elements 120. In some embodiments, processor 110 controls heating element 120. Thus, the power provided by heating element 120 may change over time. For example, processor 110 may control heating element 120 to increase or decrease the power output over time. In some embodiments, limits may be placed on the power output or time over which the power is output. In addition, heating element 120 may be turned off or driven at a lower thermal output by processor 110, for example if a measure of the temperature of computing apparatus 130 reaches or exceeds a threshold temperature. Thermal sensor 116 measures temperature and provides the temperature readings to processor 110. Because thermal test vehicle 100 is plugged into socket/connector 134 and because of the presence of thermal mass 132, the output of thermal sensors 116 indicates not only the temperature of thermal test vehicle 100, but also the temperature of computing apparatus 130. Processor 110 may also determine thermal properties associated with thermal mass 132 based on temperature readings received from thermal sensors 116. For example, processor 110 may control heating element 120 to output a particular amount of energy. In some embodiments, this amount of energy may be output over a particular amount of time or may correspond to a known variation in power over time. Processor 110 may calculate the thermal capacity multiplied by the mass by calculating the integral of the power over time and dividing by the temperature difference. For a known mass of thermal mass 132, processor 110 can determine the thermal capacity. Processor 110 may determine the steady-state temperature of thermal mass 132 based on the power being output by heating element 120 and temperature readings from thermal sensor 116. For example, processor 110 may curve fit the readings received by thermal sensor 116 to a known curve. Other thermal properties of computing apparatus 130 may be determined by processor 110 in some embodiments. Thus, processor 110 may determine one or more thermal properties of computing apparatus 130. Using thermal test vehicle 100, thermal properties of computing apparatus 130 may be determined. Computing apparatus may include devices such as servers, switches, chassis, networking devices and/or analogous devices. Heating element 120 simulates the power dissipated by a CPU, GPU or other component to be connected to computing apparatus 130. Consequently, the actual CPU, GPU or other component need not be used. Therefore, thermal testing using thermal test vehicle 100 may be more cost effective and less wasteful.
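The mass-times-thermal-capacity calculation just described reduces to m·c ≈ (∫ P dt) / ΔT. A minimal sketch in Python, with illustrative numbers; it ignores heat lost to the surroundings, so it is only an approximation of the estimate the patent describes.

    import numpy as np

    def mass_times_capacity(times_s, powers_w, temp_start_c, temp_end_c):
        # m * c = (integral of heater power over time) / temperature rise
        energy_j = np.trapz(powers_w, times_s)
        return energy_j / (temp_end_c - temp_start_c)  # J/K

    # Example: 100 W held for 60 s raising the thermal mass by 4 degrees C
    times = np.linspace(0.0, 60.0, 61)
    powers = np.full_like(times, 100.0)
    print(mass_times_capacity(times, powers, 25.0, 29.0))  # 6000 J / 4 K = 1500.0

For a known mass, dividing the result by the mass yields the thermal capacity, as the passage notes.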
Because the actual CPU, GPU or other component is not required for thermal testing, software and other devices used to drive such components need not be included in computing apparatus 130. Thus, testing using thermal test vehicle 100 may occur earlier in the development process. Consequently, desired changes to computing apparatus 130 may be more easily incorporated. For example, additional thermal mass may be added, active cooling (e.g. more fans) might be integrated and/or components having reduced power generation may be selected for use. Because processor 110 can determine thermal properties such as steady-state temperature and/or thermal capacity, testing may be made more time efficient and simpler. Thermal test vehicle 100 may also simulate power generated by future generation components. For example, CPUs having higher speeds and requiring more power dissipation than are currently available may be mimicked using heating element 120. The ability of computing apparatus 130 to dissipate heat for future generation components may thus be determined. Thermal test vehicle 100 may also be modular. For example, in order to test computing apparatus 130 for a different component, additional heating element(s) may be added to thermal test vehicle 100 or the heating element(s) may be driven in a different manner. Consequently, thermal test vehicle 100 improves testing and development of computing apparatus 130. FIGS. 2A-2B are side and plan views, respectively, of an embodiment of thermal test vehicle 200. FIGS. 2A and 2B are not to scale. Thermal test vehicle 200 is analogous to thermal test vehicle 100. Thermal test vehicle 200 may be used in connection with testing power management capabilities of a computing apparatus such as computing apparatus 130. Although certain components are shown, in some embodiments, other and/or additional components may be present. Thermal test vehicle 200 includes circuit board 202, heating elements 210, 212, 214 and 216, processor(s) 220, insulator 230, heat spreader 240 and connectors 250. Circuit board 202 and connectors 250 may be considered part of a housing that is configured to fit the form factor of the connector/socket for the corresponding computing apparatus. For example, connectors 250 may be used to mount thermal test vehicle 200 to the baseboard of a computing apparatus, such as computing apparatus 130 to be tested. However, the computing apparatus being tested need not include software or all components typically present on the baseboard in order to use thermal test vehicle 200 in investigating the computing apparatus' thermal properties. In some embodiments, insulator 230, which surrounds at least a portion of the constituents of thermal test vehicle 200, may also be considered part of the housing. Heating elements 210, 212, 214 and 216 are selected to provide sufficient power to simulate the component(s) which are to be connected to the computing apparatus and for which heat is desired to be dissipated. In some embodiments, the location of heating elements 210, 212, 214 and 216 on circuit board 202 corresponds to the location of the component(s) to be connected to the computing apparatus. For example, heating elements 210, 212, 214 and 216 may be at the location on circuit board 202 of the CPU(s), GPU(s) and/or ASIC(s) desired to be connected to the computing apparatus. Heat spreader 240 is included in the embodiment shown. Although one heat spreader 240 is shown, in other embodiments multiple heat spreaders may be included.
For example, heat spreader 240 may be a copper plate, which may be anodized. In some embodiments, heat spreader 240 is attached to circuit board 202 via screws (not shown). However, other attachment mechanisms may be utilized. Heat spreader 240 may be used to more evenly distribute heat generated by heating elements 210, 212, 214 and 216. Thus, individual heating elements 210, 212, 214 and 216 may have a heat profile of a single component at the location of, and having substantially the same size as, heat spreader 240. Thus, the location and power provided by heating elements 210, 212, 214 and 216 and heat spreader 240 simulate the power-consuming circuit element desired to be coupled with the computing apparatus being tested. Insulator 230 thermally and electrically insulates heating elements 210, 212, 214 and 216. Insulator 230 may be subjected to variable heat output from heating elements 210, 212, 214 and 216. For example, during testing, the heat output may vary from 0 W to 2.5 kW or more. Insulator 230 is desired to undergo thermal cycling due to such high powers substantially without deforming or cracking. For example, insulator 230 may be garolite or an analogous material. Insulator 230 may also reduce or prevent the tendency of heat generated by heating elements 210, 212, 214 and 216 to travel toward circuit board 202. Consequently, heat generated by heating elements 210, 212, 214 and 216 may be more likely to move toward heat spreader 240. Thus, circuit board 202 may be less likely to be damaged and heat may be more efficiently driven toward heat spreader 240. Heat spreader 240 may be thermally coupled with the thermal mass (not shown in FIGS. 2A-2B) for the computing apparatus (not shown in FIGS. 2A-2B). Thus, heat generated by heating elements 210, 212, 214 and 216 may be managed using the thermal mass of the computing apparatus being tested. Sensor(s) 222 include thermal sensors that read the temperature of heating elements 210, 212, 214 and 216 and/or the temperature of the thermal mass to which heating elements 210, 212, 214 and 216 are thermally coupled. Sensor(s) 222 may also include current and/or voltage sensors that measure the signals driving heating element(s) 210, 212, 214 and/or 216. In some embodiments, other sensors may be included. Thus, various characteristics of thermal test vehicle 200 and the computing apparatus to which thermal test vehicle 200 is connected may be measured. Processor(s) 220 control the heat generated by heating elements 210, 212, 214 and 216. Thus, processor(s) 220 may control the current through and/or voltage across heating elements 210, 212, 214 and 216. For example, processor 220 may drive heating elements 210, 212, 214 and 216 such that they generate the same or different powers, for the same or different times, to vary in the same or different manners, to ramp up to maximum power at the same or different rates, or otherwise be managed to provide the desired temperature profile. Thus, heating elements 210, 212, 214 and 216 may simulate the power generated by component(s) desired to be connected to a computing apparatus. In some embodiments, processor(s) 220 control the total amount of energy (e.g. heat) generated by heating elements 210, 212, 214 and 216. In some embodiments, processor(s) 220 control the power (energy per unit time) generated by heating elements 210, 212, 214 and 216. In some embodiments, processor(s) 220 control both the power and the amount of energy generated by heating elements 210, 212, 214 and 216.
Processor(s) 220 receive readings from sensor(s) 222. In some embodiments, processor(s) 220 control heating elements 210, 212, 214 and/or 216 based upon the readings. For example, processor(s) 220 may reduce or terminate the current driving and/or voltage across one or more of heating elements 210, 212, 214 and 216 if thermal sensors indicate that the temperature has met or exceeded a threshold. Similarly, processor(s) 220 may increase or initiate the current and/or voltage driving one or more of heating elements 210, 212, 214 and 216 if thermal sensors indicate that the temperature has met or dropped below another threshold. Further, processor(s) 220 may also control heating elements 210, 212, 214 and/or 216 to simulate particular components. Utilizing the readings provided by sensor(s) 222, processor(s) 220 may also determine thermal properties of a thermal mass that is part of a computing system. For example, the mass multiplied by the thermal capacity, the thermal capacity and/or the steady-state temperature may be determined by processor(s) 220 based upon the readings from sensor(s) 222 received by processor(s) 220. Thus, thermal test vehicle 200 may be used to thermally test a computing apparatus. Thermal test vehicle 200 shares the benefits of thermal test vehicle 100. Thus, the thermal properties of the computing apparatus being investigated may be determined more cost effectively, earlier in the development process, in a more time efficient manner, and/or without wasting components. Further, the ability of the computing apparatus to be utilized with next generation components may be determined. Thus, thermal testing of a computing apparatus may be improved. FIG. 3 is a flow chart depicting an embodiment of method 300 for thermally testing a computing apparatus. Method 300 is described in the context of thermal test vehicle 100 and computing apparatus 130. However, method 300 may be utilized with other thermal test vehicles and/or other computing apparatus. The processes of method 300 may include substeps. The processes of method 300 are also shown in a particular order, but may be performed in another order, including in parallel. The thermal test vehicle is plugged into a computing apparatus having a thermal mass and for which thermal management is desired to be tested, at 302. Thus, the connectors and/or housing of the thermal test vehicle are attached to the appropriate portion of the computing apparatus. The thermal test vehicle includes heating element(s), thermal sensor(s), processor(s) and, optionally, other components such as current and/or voltage sensors. The heating element(s) are controlled to output an amount of energy, at 304. In some embodiments, the heating element(s) are controlled at 304 using the processor(s). Thus, at 304 the desired heat profile that simulates the component that is to be plugged into the connector of the computing apparatus is provided. Temperature readings are received, at 306. In some embodiments, the temperature readings from the thermal sensor(s) are received at the processor(s). Thus, the temperature of the thermal test vehicle and, therefore, the thermal mass to which the thermal test vehicle is thermally connected, is tracked. One or more thermal properties for the thermal mass are determined based upon the temperature readings received, at 308. In some embodiments, the thermal properties are also determined using the amount of energy provided by the heating element(s) and/or the amount of energy per unit time.
In some embodiments, the thermal properties are automatically determined at 308. For example, thermal test vehicle 100 may be plugged into socket/connector 134, at 302. Processor 110 can control heating element 120 to output energy, at 304. At least while heating element 120 is energized, thermal sensor 116 collects temperature data. In some embodiments, thermal sensor 116 collects temperature data before heating element 120 is energized and/or after heating element 120 has been shut down. Thermal sensor 116 provides these temperature readings to processor 110. In some embodiments, the temperature readings are provided to processor 110 substantially in real time. In other embodiments, temperature readings may be stored and received at processor 110 at a later time. Using these temperature readings, processor 110 may automatically calculate one or more thermal properties of the thermal mass. In some embodiments, another processor, for example on a host system (not shown) determines the thermal properties and/or controls heating element 120. Using method 300, the thermal properties of the thermal mass may be determined. Thus, the benefits of thermal test vehicle 100 may be realized. FIG. 4 is a flow chart depicting an embodiment of method 400 for thermally testing a computing apparatus. Method 400 is described in the context of thermal test vehicle 100 and computing apparatus 130. However, method 400 may be utilized with other thermal test vehicles and/or other computing apparatus. The processes of method 400 may include substeps. The processes of method 400 are also shown in a particular order, but may be performed in another order, including in parallel. Method 400 also commences after the thermal test vehicle is plugged into a computing apparatus having a thermal mass and for which thermal management is desired to be tested. The heating element(s) are controlled to output the desired heat profiles, at 402. In some embodiments, the heating element(s) are controlled at 402 using the processor(s). The desired heat profiles simulate the component that is to be plugged into the connector of the computing apparatus. The processing load or traffic is optionally provided, at 404. For example, the processor may send particular signals or instructions to a processor that has been incorporated into the computing apparatus being tested. Sensor readings are taken at least while the heating element(s) are energized. In some embodiments, the sensor readings are also collected before the heating element(s) are energized and/or after heating element(s) have been shut down. The sensor readings include temperature readings collected by thermal sensor(s). In some embodiments, other sensor readings may also be taken. For example, current and/or voltage to each of the heating element(s) may be read. Sensor readings are received, at 406. In some embodiments, the sensor reading(s) are received at the processor(s). The received sensor readings include at least temperature data. In some embodiments, current, voltage and/or other characteristics measured may also be received at 406. Thus, the temperature of the thermal test vehicle and, therefore, the thermal mass to which the thermal test vehicle is thermally connected, is tracked. The heating element(s) are optionally shut off if the temperature readings indicate that a threshold has been reached or exceeded, at 408. Thus, the heating element(s) may be prevented from being burned out.
The temperature readings are fit to curve(s) to determine one or more thermal properties of the thermal mass, at 410. For example, the steady-state temperature of the thermal mass may be determined. FIG. 5 depicts an embodiment of graph 500 indicating temperature versus time for the temperature readings. Temperature readings are indicated by black circles. These data have been fit to a curve, shown by the solid line. Although a certain number of temperature readings are shown, in other embodiments, another number of readings may be used. In some embodiments, only enough data to reliably (e.g. to within at least five percent or ten percent) fit the temperature readings to the curve is collected. As can be seen in FIG. 5, the curve approaches the steady-state temperature, SST, for the heat profiles provided using the heaters. However, as can be seen in graph 500, temperature data need not be collected throughout the times shown. Instead, via curve fitting, the steady-state temperature can be determined from fewer points and a shorter time interval. Thus, method 400 need not be continued until the thermal mass actually reaches (or closely approaches) the steady-state temperature. For example, thermal test vehicle 100 may be plugged into socket/connector 134. Processor 110 is used to control heating element 120 to output sufficient energy to provide the desired heat profile(s), at 402. At least while heating element 120 is energized, thermal sensor 116 collects temperature data. In some embodiments, the temperature readings are provided to processor 110 substantially in real time. In other embodiments, temperature readings may be stored and received at processor 110 at a later time. Processor 110 fits the received temperature readings to a curve, at 410. In some embodiments, another processor, for example on a host system (not shown) performs the curve fitting and/or controls heating element 120. Using method 400, the thermal properties of the thermal mass may be determined. Thus, the benefits of thermal test vehicle 100 may be realized. FIGS. 6A-6B are exploded and assembled views, respectively, of an embodiment of thermal test vehicle 600. FIGS. 6A and 6B are not to scale. Thermal test vehicle 600 is analogous to thermal test vehicles 100 and/or 200. Thermal test vehicle 600 may be used in connection with testing power management capabilities of a computing apparatus such as computing apparatus 130. More specifically, thermal test vehicle 600 may be utilized in testing computing apparatus, such as switches, that incorporate transceivers. Thus, thermal test vehicle 600 may be a quad small form-factor pluggable (QSFP) thermal test vehicle and is described in the context of developing a switch. Although certain components are shown, in some embodiments, other and/or additional components may be present. QSFP thermal test vehicle 600 includes circuit board 602, heating element(s) 610, processor(s) 620, and sensor(s) 622. Also shown are top housing 630 and bottom housing 632 (collectively housing 630/632). For ease of explanation, top housing 630 is shown as transparent in FIG. 6B. Circuit board 602 may be considered part of the housing 630/632 in that circuit board 602 retains and allows electrical connection to heating element(s) 610, processor(s) 620 and sensor(s) 622. QSFP thermal test vehicle 600 may be plugged into a socket of a switch (not shown in FIGS. 6A-6B). In some embodiments, housing 630/632 is configured to fit multiple QSFP form factors.
In some cases, multiple QSFPs may be plugged into a particular switch. Thus, in some cases multiple QSFP thermal test vehicles may be plugged into the switch under test. However, the switch being tested need not include software or all components typically present in a switch in order to use QSFP thermal test vehicle 600 in investigating the switch's thermal properties. QSFP thermal test vehicle 600 also includes pull handle 650 and light box 640 in the embodiment shown. Light box 640 indicates the status of the QSFP thermal test vehicle 600. For example, lights in light box 640 may be lit when QSFP thermal test vehicle 600 is properly connected and/or being used. For example, light box 640 may be energized when heating elements 610 are driven. Pull handle 650 aids in inserting QSFP thermal test vehicle 600 into or removing QSFP thermal test vehicle 600 from the socket of a switch being tested. In other embodiments, light box 640 and/or pull handle 650 may be configured differently or omitted. Heating element(s) 610 are selected to provide sufficient power to simulate the integrated circuits that are part of a corresponding QSFP module desired to be connected to the switch and for which heat is desired to be dissipated. In some embodiments, the location of heating element(s) 610 on circuit board 602 and within housing 630/632 corresponds to the location of the integrated circuit(s) to be connected to the switch. Heating element(s) 610 may be driven to simulate the power generated for current traffic carried by QSFP modules currently in use, such as 40-50G. Because heating element(s) 610 may be controlled, the power generated for other traffic speeds may be simulated. For example, in some embodiments, 100G, 150G, 200G and up to 400G may be simulated using QSFP thermal test vehicle 600. QSFP thermal test vehicle 600 may also include a heat spreader thermally coupled to the heating element(s) 610. The heat spreader is utilized to provide a thermal footprint consistent with integrated circuits used in QSFP modules. Thus, the location and power provided by heating element(s) 610 and any corresponding heat spreader simulate the integrated circuit element desired to be coupled with the switch being tested. In some embodiments, an insulator analogous to insulator 230 surrounds at least a portion of the constituents of QSFP thermal test vehicle 600, such as heating elements 610. Further, the insulator described above may be configured to undergo thermal cycling to high powers consistent with QSFP modules substantially without deforming or cracking. For example, the insulator may be garolite or an analogous material. Such an insulator may also aid in directing heat generated by heating elements 610 to travel away from circuit board 602 and toward the heat spreader (if any). Thus, circuit board 602 may be less likely to be damaged. Heating element(s) 610 and any heat spreader may be thermally coupled with the thermal mass (not shown in FIGS. 6A-6B) for the switch (not shown in FIGS. 6A-6B). Thus, heat generated by heating element(s) 610 may be managed using the thermal mass of the switch being tested. Sensor(s) 622 include thermal sensors that read the temperature of heating element(s) 610 and/or the temperature of the thermal mass of the switch being tested. Sensor(s) 622 may also include current and/or voltage sensors that measure the signals driving heating element(s) 610. In some embodiments, other sensors may be included.
Thus, various characteristics of QSFP thermal test vehicle 600 and the switch to which QSFP thermal test vehicle 600 is connected may be measured. Processor(s) 620 may be used to automatically conduct a thermal test and provide thermal test results. Thus, processor(s) 620 drive heating element(s) 610, receive temperature readings and/or other data from sensor(s) 622, and determine the thermal properties of the switch being tested. For example, processor(s) 620 may determine the thermal capacity and/or steady-state temperature of the switch and control the heat generated by heating element(s) 610. Thus, processor(s) 620 may control the current through and/or voltage across heating elements 610. For example, processor 620 may drive multiple heating element(s) 610 such that they generate the same or different powers, for the same or different times, to vary in the same or different manners, to ramp up to maximum power at the same or different rates, or otherwise be managed to provide the desired temperature profile. Thus, heating element(s) 610 may simulate the power generated by component(s) desired to be connected to a switch. In some embodiments, processor(s) 620 control the total amount of energy (e.g. heat) generated by heating elements 610. In some embodiments, processor(s) 620 control the power (energy per unit time) generated by heating element(s) 610. In some embodiments, processor(s) 620 control both the power and the amount of energy generated by heating element(s) 610. Processor(s) 620 receive readings from sensor(s) 622. In some embodiments, processor(s) 620 control heating elements 610 based upon the readings. For example, processor(s) 620 may reduce or terminate the current driving and/or voltage across one or more of heating elements 610 if thermal sensors indicate that the temperature has met or exceeded a threshold. Similarly, processor(s) 620 may increase or initiate the current and/or voltage driving one or more of heating element(s) 610 if thermal sensors indicate that the temperature has met or dropped below another threshold. Further, processor(s) 620 may also control heating element(s) 610 to simulate particular components. Utilizing the readings provided by sensor(s) 622, processor(s) 620 may also determine thermal properties of a thermal mass that is part of the switch. For example, the mass multiplied by the thermal capacity, the thermal capacity and/or the steady-state temperature may be determined by processor(s) 620 based upon the readings from sensor(s) 622 received by processor(s) 620. Thus, thermal test vehicle 600 may be used to thermally test a switch. Thermal test vehicle 600 shares the benefits of thermal test vehicle(s) 100 and/or 200. Thus, the thermal properties of the switch being investigated may be determined more cost effectively, earlier in the development process, in a more time efficient manner, and/or without wasting components. Further, the ability of the switch to be utilized with future generation components may be determined. Thus, thermal testing of a switch may be improved. Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive. 16726015 meta platforms, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001.
Open Apr 27th, 2022 08:37AM Apr 27th, 2022 08:37AM Facebook Technology Software & Computer Services
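The steady-state extrapolation described in the patent above (the curve fit of FIG. 5) can be sketched by fitting early temperature readings to a warm-up curve and reading off its asymptote. The first-order exponential model below is an assumption; the patent says only that readings are fit to a known curve.

    import numpy as np
    from scipy.optimize import curve_fit

    def warmup(t, sst, t0, tau):
        # Assumed model: T(t) = SST + (T0 - SST) * exp(-t / tau)
        return sst + (t0 - sst) * np.exp(-t / tau)

    def estimate_steady_state(times_s, temps_c):
        # Fit readings taken before the mass settles and return the
        # extrapolated steady-state temperature (the fit's asymptote).
        guess = (temps_c[-1] + 5.0, temps_c[0], max(times_s[-1], 1.0))
        (sst, _t0, _tau), _cov = curve_fit(warmup, times_s, temps_c, p0=guess)
        return sst

Because the asymptote is extrapolated, the test can stop long before the thermal mass actually reaches steady state, which is the time saving the patent describes.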
nasdaq:fb Facebook Apr 26th, 2022 12:00AM Sep 4th, 2020 12:00AM https://www.uspto.gov?id=US11315301-20220426 Rendering post-capture artificial-reality effects based on artificial-reality state information In one embodiment, a method includes retrieving a video stream that was recorded while a first artificial-reality effect was being displayed on the video stream, where each frame of the video stream comprises a real-world scene without the first artificial-reality effect, retrieving an artificial-reality state information stream corresponding to the video stream, where the artificial-reality state information stream comprises state information associated with the first artificial-reality effect, retrieving one or more contextual data streams corresponding to the video stream, where the first artificial-reality effect displayed on the video stream was rendered based on at least a portion of the one or more contextual data streams, rendering a second artificial-reality effect based on at least a portion of the artificial-reality state information stream and a portion of the one or more contextual data streams, and displaying the second artificial-reality effect on the video stream. 11315301 1. A method comprising, by a computing device: accessing a video stream that was recorded while a first artificial-reality effect was being displayed on the video stream, the first artificial-reality effect having one or more first non-deterministic features generated based on a randomness data stream generated using a randomness model; retrieving the randomness data stream used for generating the one or more first non-deterministic features of the first artificial-reality effect displayed on the video stream; rendering a second artificial-reality effect using the retrieved randomness data stream used for generating the one or more first non-deterministic features of the first artificial-reality effect, wherein the retrieved randomness data stream is used for generating one or more second non-deterministic features of the second artificial-reality effect; and displaying the second artificial-reality effect on the video stream. 2. The method of claim 1, further comprising: retrieving one or more contextual data streams corresponding to the video stream, wherein the first artificial-reality effect displayed on the video stream was rendered based on at least a portion of the one or more contextual data streams, wherein the one or more contextual data streams comprise a sensor data stream generated by one or more sensors while the video stream is being captured. 3. The method of claim 2, wherein the one or more sensors comprises one or more of: an inertial measurement unit (IMU); an accelerometer; a device orientation sensor; a motion sensor; a velocity sensor; a device position sensor; a microphone; a light sensor; a touch sensor; a stylus sensor; a depth sensor; a temperature sensor; a GPS sensor; or a user input sensor. 4. The method of claim 2, wherein the one or more contextual data streams comprise a computed data stream generated by an object tracking algorithm performed on content of the video stream. 5.
The method of claim 4, wherein the computed data comprises one or more of: face recognition data; face tracking points; person segmentation data; object recognition data; object tracking points; object segmentation data; body tracking points; world tracking points; a depth; a point in a three-dimensional space; a line in a three-dimensional space; a surface in a three-dimensional space; or a point cloud. 6. The method of claim 2, wherein the second artificial-reality effect is the first artificial-reality effect, wherein the computing device renders the second artificial-reality effect identical to the first artificial-reality effect based on at least the randomness data stream and a portion of the one or more contextual data streams, and wherein one or more non-deterministic features of the second artificial-reality effect are generated based at least on the randomness data stream. 7. The method of claim 2, further comprising: receiving an indication, from one or more input sensors associated with the computing device, that a user associated with the computing device wants to switch to a third artificial-reality effect in the middle of replaying the video stream; stopping rendering the second artificial-reality effect on the video stream; rendering the third artificial-reality effect based on at least the randomness data stream and a portion of the one or more contextual data streams; and displaying the third artificial-reality effect on the video stream. 8. The method of claim 1, wherein the second artificial-reality effect is determined based on an input of a user associated with the computing device. 9. The method of claim 8, wherein determining the second artificial-reality effect based on the input of the user comprises: presenting choices for the second artificial-reality effect to the user; receiving an indication of a user choice from one or more input sensors associated with the computing device; and determining the second artificial-reality effect based on the indication of the user choice. 10. The method of claim 9, wherein the choices for the second artificial-reality effect comprise no artificial-reality effect, the first artificial-reality effect, or one or more artificial-reality effects different from the first artificial-reality effect. 11. The method of claim 1, wherein the one or more non-deterministic features comprise: a size of a rain drop; a path of a rain drop; a timing of a rain drop; a size of a snowflake; a path of a snowflake; a timing of a snowflake; a direction of a flying arrow; a trajectory of a flying arrow; a timing of a flying arrow; a size of a bubble; a moving path of a bubble; or a moving speed of a bubble. 12. The method of claim 1, wherein the first artificial-reality effect comprises one or more of: a virtual object; a three-dimensional effect; an interaction effect; a displaying effect; a sound effect; a lighting effect; or a tag. 13. The method of claim 1, wherein each frame of the video stream comprises a real-world scene without the first artificial-reality effect. 14.
One or more computer-readable non-transitory storage media embodying software that is operable when executed to: access a video stream that was recorded while a first artificial-reality effect was being displayed on the video stream, the first artificial-reality effect having one or more first non-deterministic features generated based on a randomness data stream generated using a randomness model; retrieve the randomness data stream used for generating the one or more first non-deterministic features of the first artificial-reality effect while displayed on the video stream; render a second artificial-reality effect using the retrieved randomness data stream used for generating the one or more first non-deterministic features of the first artificial-reality effect, wherein the retrieved randomness data stream is used for generating one or more second non-deterministic features of the second artificial-reality effect; and display the second artificial-reality effect on the video stream. 15. The media of claim 14, wherein the software is further operable when executed to: retrieve one or more contextual data streams corresponding to the video stream, wherein the first artificial-reality effect displayed on the video stream was rendered based on at least a portion of the one or more contextual data streams, wherein the one or more contextual data streams comprise a sensor data stream generated by one or more sensors while the video stream is being captured. 16. The media of claim 15, wherein the one or more sensors comprises one or more of: an inertial measurement unit (IMU); an accelerometer; a device orientation sensor; a motion sensor; a velocity sensor; a device position sensor; a microphone; a light sensor; a touch sensor; a stylus sensor; a depth sensor; a temperature sensor; a GPS sensor; or a user input sensor. 17. The media of claim 15, wherein the one or more contextual data streams comprise a computed data stream generated by an object tracking algorithm performed on content of the video stream. 18. The media of claim 17, wherein the computed data comprises one or more of: face recognition data; face tracking points; person segmentation data; object recognition data; object tracking points; object segmentation data; body tracking points; world tracking points; a depth; a point in a three-dimensional space; a line in a three-dimensional space; a surface in a three-dimensional space; or a point cloud. 19. The media of claim 14, wherein each frame of the video stream comprises a real-world scene without the first artificial-reality effect. 20. 
A system comprising: one or more processors; and a non-transitory memory coupled to the processors comprising instructions executable by the processors, the processors operable when executing the instructions to: access a video stream that was recorded while a first artificial-reality effect was being displayed on the video stream, the first artificial-reality effect having one or more first non-deterministic features generated based on a randomness data stream generated using a randomness model; retrieve the randomness data stream used for generating the one or more first non-deterministic features of the first artificial-reality effect while displayed on the video stream; render a second artificial-reality effect using the retrieved randomness data stream used for generating the one or more first non-deterministic features of the first artificial-reality effect, wherein the retrieved randomness data stream is used for generating one or more second non-deterministic features of the second artificial-reality effect; and display the second artificial-reality effect on the video stream. 20

PRIORITY

This application is a continuation under 35 U.S.C. § 120 of U.S. patent application Ser. No. 16/216,217, filed 11 Dec. 2018.

TECHNICAL FIELD

This disclosure generally relates to artificial reality, and in particular, to rendering post-capture artificial-reality effects based on artificial-reality effect state information.

BACKGROUND

Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., to perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

SUMMARY OF PARTICULAR EMBODIMENTS

In particular embodiments, a first computing device may capture an artificial-reality state information stream when the first computing device records a video stream while a first artificial-reality effect is being displayed on the video stream. The first computing device may capture one or more contextual data streams corresponding to the video stream. With legacy solutions, a computing device may fuse an artificial-reality effect into the recorded video stream in order to record a video stream with the artificial-reality effect. In such cases, the artificial-reality effect cannot be changed after the video stream is recorded.
Furthermore, when an artificial-reality effect is added to a post-capture video stream with no artificial-reality effect based on post-capture image processing, certain artificial-reality effects may not be feasible due to lack of artificial-reality state information data associated with the video stream or due to lack of contextual data associated with the video stream. The first computing device may enable richer, more dynamic post-capture editing (e.g., removing, changing, creating) of artificial-reality effects by capturing the artificial-reality state information stream and one or more contextual data streams separate from the video stream. Artificial-reality state information data corresponding to a frame of the video stream may comprise state information associated with the first artificial-reality effect displayed on the frame of the video stream. Artificial-reality state information data may be captured for each frame of the video stream. The captured artificial-reality state information stream may be compressed and stored separately from the video stream data. Each artificial-reality state information data item in the artificial-reality state information stream may comprise a timestamp that may correlate the artificial-reality state information data with a corresponding frame of the video. The artificial-reality state information data may comprise randomness data used for generating one or more non-deterministic features (e.g., rain drop sizes, positions, and paths, timing and trajectories for shooting arrows, bubble sizes and moving paths, firework trajectories, etc.) of the artificial-reality effect on the video stream. The randomness data may be generated by a randomness model of the artificial-reality effect and may have different values each time the randomness model is re-run. The one or more contextual data streams may comprise a sensor data stream generated by one or more sensors while the video stream is being captured. The one or more sensors may comprise an accelerometer, a gyroscope, a motion sensor, a depth sensor, a temperature sensor, a microphone, or any suitable sensor. The one or more contextual data streams may comprise a computed data stream generated by an object tracking algorithm performed on content of the video stream. The computed data related to the video content may comprise face tracking data, person/object segmentation data, world tracking data, point cloud data, feature point data, or any suitable computed data generated by an object tracking algorithm. The one or more contextual data streams may be compressed and stored separately from the video stream data. Each contextual data item of the one or more contextual data streams may comprise a timestamp that may correlate the contextual data with a corresponding frame of the video.

In particular embodiments, a second computing device may replay the video stream with the second artificial-reality effect using the video stream, the artificial-reality state information stream, and one or more contextual data streams. In particular embodiments, the second artificial-reality effect may be identical to the first artificial-reality effect. In particular embodiments, the second computing device may be identical to the first computing device. The second computing device may render the second artificial-reality effect based on at least a portion of the artificial-reality state information stream and a portion of the one or more contextual data streams.
The second computing device may display the second artificial-reality effect on the video stream. In particular embodiments, the second computing device may remove the artificial-reality effect from the video stream. In particular embodiments, the second computing device may replace the second artificial-reality effect with a third artificial-reality effect by rendering the third artificial-reality effect based on at least a portion of the artificial-reality state information stream and a portion of the one or more contextual data streams and displaying the third artificial-reality effect on the video stream. The second computing device may render a new artificial-reality effect without regenerating or re-capturing the artificial-reality state information or the one or more contextual data streams. Therefore, the second computing device may reduce the power consumed in applying a new artificial-reality effect to the video stream.

A computing device may retrieve a video stream that was recorded while a first artificial-reality effect was being displayed on the video stream. Each frame of the video stream may comprise a real-world scene without the first artificial-reality effect. The computing device may retrieve an artificial-reality state information stream corresponding to the video stream. The artificial-reality state information stream may comprise state information associated with the first artificial-reality effect while it was being displayed on the video stream. The computing device may retrieve one or more contextual data streams corresponding to the video stream. The first artificial-reality effect displayed on the video stream may have been rendered based on at least a portion of the one or more contextual data streams. The computing device may render a second artificial-reality effect based on at least a portion of the artificial-reality state information stream and a portion of the one or more contextual data streams. The computing device may display the second artificial-reality effect on the video stream.

The embodiments disclosed herein are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed herein. Embodiments according to the invention are in particular disclosed in the attached claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, e.g. method, can be claimed in another claim category, e.g. system, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However, any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims.
Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example artificial-reality system.
FIG. 2 illustrates an example artificial-reality effect displayed on a live-captured video stream.
FIG. 3 illustrates an example framework for an artificial-reality effect based on an artificial-reality state information stream and one or more contextual data streams.
FIG. 4 illustrates an example artificial-reality effect displayed on a replayed video stream.
FIG. 5 illustrates an example method for rendering an artificial-reality effect on a post-capture video stream.
FIG. 6 illustrates an example network environment associated with a social-networking system.
FIG. 7 illustrates an example computer system.

DESCRIPTION OF EXAMPLE EMBODIMENTS

FIG. 1 illustrates an example artificial-reality system 100. In particular embodiments, the system 100 may include one or more computing devices (e.g., 110, 150, 152) and one or more servers 140. In particular embodiments, a computing device may be a desktop computer, a laptop computer, a tablet computer, a mobile phone, a camera, an artificial-reality headset, a wearable computing device, a portable computing device, a user terminal device, or any suitable computing device. The computing devices and the servers may be connected through a cloud 130. In particular embodiments, the computing device 110 may include one or more processors 126, a memory 122, a storage 124, a display 128, an input/output interface 120, a communication module 129, etc. In particular embodiments, the computing device 110 may include or be coupled to a number of sensors including, for example, but not limited to, an inertial measurement unit (IMU) 112 (which may include accelerometers, gyroscopes, motion sensors, velocity sensors, orientation sensors, etc.), one or more camera sensors 114, and other sensors 116 (e.g., microphones, GPS sensors, light sensors, infrared sensors, distance sensors, position sensors, touch sensors, stylus sensors, controller sensors, temperature sensors, gesture sensors, user input sensors, etc.). The computing devices (e.g., 110, 150, 152) may be connected to the cloud 130 through wired or wireless connections (e.g., 131, 151) and may be connected to the servers 140 through the cloud 130 and a wired or wireless connection 141.

With legacy artificial-reality effect recording solutions, a computing device may fuse an artificial-reality effect into the recorded video stream in order to record a video stream with the artificial-reality effect. In such cases, the artificial-reality effect cannot be changed after the video stream is recorded. Furthermore, when an artificial-reality effect is added to a post-capture video stream with no artificial-reality effect based on post-capture image processing, certain artificial-reality effects may not be feasible due to lack of artificial-reality state information data associated with the video stream or due to lack of contextual data associated with the video stream.
If a computing device captures contextual data separately from the video stream data in order to mitigate such problems, the replaying computing device may not be able to produce an exactly identical artificial-reality effect on the replayed video stream due to lack of artificial-reality state information comprising adjustable deterministic parameters for the artificial-reality effect, such as a position and a size of an added virtual object at a given time, or randomness data for non-deterministic parameters of the artificial-reality effect.

In particular embodiments, a first computing device 110 may capture an artificial-reality state information stream when the first computing device 110 records a video stream while a first artificial-reality effect is being displayed on the video stream. The first computing device 110 may capture one or more contextual data streams corresponding to the video stream. The first computing device 110 may enable richer, more dynamic post-capture editing (e.g., removing, changing, creating) of artificial-reality effects by capturing the artificial-reality state information stream and one or more contextual data streams separate from the video stream. Artificial-reality state information data corresponding to a frame of the video stream may comprise state information associated with the first artificial-reality effect displayed on the frame of the video stream. Artificial-reality state information data may be captured for each frame of the video stream. The captured artificial-reality state information stream may be compressed and stored separately from the video stream data. Each artificial-reality state information data item in the artificial-reality state information stream may comprise a timestamp that may correlate the artificial-reality state information data with a corresponding frame of the video. The artificial-reality state information data may comprise randomness data used for generating one or more non-deterministic features (e.g., rain drop sizes, positions, and paths, timing and trajectories for shooting arrows, bubble sizes and moving paths, firework trajectories, etc.) of the artificial-reality effect on the video stream. The randomness data may be generated by a randomness model of the artificial-reality effect and may have different values each time the randomness model is re-run. The one or more contextual data streams may comprise a sensor data stream generated by one or more sensors while the video stream is being captured. The one or more sensors may comprise an accelerometer, a gyroscope, a motion sensor, a depth sensor, a temperature sensor, a microphone, or any suitable sensor. The one or more contextual data streams may comprise a computed data stream generated by an object tracking algorithm performed on content of the video stream. The computed data related to the video content may comprise face tracking data, person/object segmentation data, world tracking data, point cloud data, feature point data, or any suitable computed data generated by an object tracking algorithm. The one or more contextual data streams may be compressed and stored separately from the video stream data. Each contextual data item of the one or more contextual data streams may comprise a timestamp that may correlate the contextual data with a corresponding frame of the video.
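For illustration only, the following minimal Python sketch shows one way the timestamped per-frame records described above might be represented and correlated with video frames. All class and field names here are hypothetical assumptions for illustration; this disclosure does not prescribe a concrete schema.

```python
# Hypothetical per-frame records: artificial-reality state information and
# contextual data each carry a timestamp so that a replaying device can
# correlate them with the corresponding video frame.
from dataclasses import dataclass
from typing import Any

@dataclass
class ARStateInfo:
    timestamp_ms: int              # correlates the record with a video frame
    effect_ids: list[str]          # identifications of the rendered effects
    parameters: dict[str, Any]     # deterministic parameters (e.g., size, position)
    randomness: list[float]        # randomness data for non-deterministic features

@dataclass
class ContextualData:
    timestamp_ms: int              # correlates the record with a video frame
    sensor_data: dict[str, Any]    # e.g., IMU readings, device orientation
    computed_data: dict[str, Any]  # e.g., face tracking points, segmentation

def correlate(frame_ts_ms: int, stream: list, tolerance_ms: int = 8):
    """Return the record closest in time to the frame, if within tolerance."""
    best = min(stream, key=lambda r: abs(r.timestamp_ms - frame_ts_ms), default=None)
    if best is not None and abs(best.timestamp_ms - frame_ts_ms) <= tolerance_ms:
        return best
    return None
```

With records of this shape, a replaying device can look up, for any frame timestamp, the state information and contextual data that were captured alongside that frame.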
In particular embodiments, a second computing device may replay the video stream with the second artificial-reality effect using the video stream, the artificial-reality state information stream, and one or more contextual data streams. In particular embodiments, the second artificial-reality effect may be identical to the first artificial-reality effect. In particular embodiments, the second computing device may be identical to the first computing device. The second computing device may render the second artificial-reality effect based on at least a portion of the artificial-reality state information stream and a portion of the one or more contextual data streams. The second computing device may display the second artificial-reality effect on the video stream. In particular embodiments, the second computing device may remove the artificial-reality effect from the video stream. In particular embodiments, the second computing device may replace the second artificial-reality effect with a third artificial-reality effect by rendering the third artificial-reality effect based on at least a portion of the artificial-reality state information stream and a portion of the one or more contextual data streams and displaying the third artificial-reality effect on the video stream. The second computing device may render a new artificial-reality effect without regenerating or re-capturing the artificial-reality state information or the one or more contextual data streams. Therefore, the second computing device may reduce the power consumed in applying a new artificial-reality effect to the video stream.

FIG. 2 illustrates an example artificial-reality effect displayed on a live-captured video stream. In particular embodiments, the first computing device 110 may capture a video data stream, an artificial-reality state information stream, and one or more contextual data streams (e.g., a sensor data stream, a computed data stream) of a scene. For example, the first computing device 110 may capture a video data stream of a scene that includes a table 202. In particular embodiments, the video data stream may be a raw video data stream or a video stream in any suitable format as captured by a camera sensor. In particular embodiments, the one or more contextual data streams may comprise a sensor data stream including sensor data from one or more sensors associated with the first computing device 110. The one or more sensors may comprise IMU sensors 112, camera sensors 114, and any other sensors 116. The first computing device 110 may also use one or more microphones to capture the audio data stream associated with the video data stream. In particular embodiments, the one or more contextual data streams may comprise one or more computed data streams (e.g., object recognition data, object feature recognition data, face recognition data, face tracking data, etc.) based on the captured video data stream. For example, the first computing device 110 may use an object recognition algorithm to recognize the table 202, the surface 203, and other object features, such as surfaces, corners, edges, lines, shapes, etc. In particular embodiments, the first computing device 110 may render an artificial-reality effect in the scene of the captured video stream. For example, the first computing device 110 may render a virtual object 204 on the surface 203 of the table 202. The first computing device 110 may also render a number of virtual bubbles 206A, 206B, 206C, and 206D floating in the air.
A size, a position, and a moving direction of each bubble at a given moment of time may be determined based on randomness data that the first computing device 110 may generate using a randomness model. The first computing device 110 may capture an artificial-reality state information stream that comprises artificial-reality state information data comprising identifications of rendered artificial-reality effects and parameters applied to the rendered artificial-reality effects, including the randomness data, for each frame of the video stream. The rendered virtual object 204 and the virtual bubbles 206A-206D may be displayed to a user on a display, such as a screen, a head-mounted display, etc. Although this disclosure describes rendering artificial-reality effects on a live video stream in a particular manner, this disclosure contemplates rendering artificial-reality effects on a live video stream in any suitable manner.

As an example and not by way of limitation, a user may use the first computing device 110 to capture and record a video of the scene which includes a table 202. The first computing device 110 may have a camera sensor 114. The user may move around the table 202 while recording the video. The camera sensor 114 may initially be at a first position and may move to a second position during the video recording process while the user walks around the table 202. During the video recording process, the first computing device 110 may display the captured video stream on a display 128 (e.g., a display screen, a head-mounted display (HMD)) in real-time to the user. At the same time, the first computing device 110 may render an artificial-reality effect for display with the captured video stream. For instance, the first computing device 110 may render a virtual object 204 on the table 202 in the scene displayed to the user. When the user looks at the scene displayed by the first computing device 110, the user may see both the images of the real-world objects (e.g., the table 202) and the artificial-reality effect (e.g., the virtual object 204) rendered by the first computing device 110.

In particular embodiments, the first computing device 110 may use a camera sensor 114 to capture a video stream of a scene. The captured video stream may be a raw video stream or in any suitable compressed or uncompressed video format. The video formats may include, for example, but are not limited to, Audio Video Interleave (AVI), Flash Video (FLV), Windows Media Video (WMV), QuickTime movie (MOV), Moving Picture Experts Group 4 (MP4), etc. In particular embodiments, the captured video data stream may be compressed by a live compression algorithm. The computing device may capture one or more contextual data streams associated with the video data stream including, for example, but not limited to, one or more sensor data streams (e.g., raw sensor data streams, IMU data, accelerometer data, gyroscope data, motion data, device orientation data), one or more computed data streams (e.g., face recognition data, face tracking points, person segmentation data, object recognition data, object tracking points, object segmentation data, body tracking points, world tracking points, optical flow data for motion, depth of scene, points in 3D space, lines in 3D space, surfaces in 3D space, point cloud data), or one or more sound data streams.
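A minimal capture-loop sketch follows, for illustration only. Here grab_frame, read_imu, and track_objects are stand-in stubs for platform camera, IMU, and tracking APIs; they are assumptions, not part of this disclosure.

```python
# Hypothetical capture loop: for each recorded frame, snapshot sensor readings
# and tracking results into separate, timestamped streams rather than fusing
# them into the video data.
def grab_frame():
    return b""  # stub: raw frame bytes from a camera sensor

def read_imu():
    return {"accel": (0.0, 0.0, 9.8), "gyro": (0.0, 0.0, 0.0)}  # stub reading

def track_objects(frame):
    return {"surfaces": [], "faces": []}  # stub tracking output

def capture(num_frames: int, fps: float = 30.0):
    video_stream, sensor_stream, computed_stream = [], [], []
    for i in range(num_frames):
        ts = int(i * 1000 / fps)  # per-frame timestamp in milliseconds
        frame = grab_frame()
        video_stream.append((ts, frame))
        sensor_stream.append((ts, read_imu()))              # sensor data stream
        computed_stream.append((ts, track_objects(frame)))  # computed data stream
    return video_stream, sensor_stream, computed_stream

video, sensors, computed = capture(num_frames=3)
```

Keeping the three streams separate, but keyed to the same timestamps, is what allows the later post-capture editing described in this disclosure.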
The captured video data stream and contextual data streams may be serialized and stored in a storage, which may be associated with the computing device, the cloud, the servers, or other computing devices, for post-capture editing or replaying. In particular embodiments, the serialized data stream may allow the recorded scene to be simulated or produced deterministically regardless of the type of computing devices that are used for replaying. In particular embodiments, the computing device may capture the video data stream and the contextual data streams of a scene only, without rendering artificial-reality effects in the scene while capturing the video and contextual data streams.

In particular embodiments, the artificial-reality effect may be rendered based on computed data generated by a tracking algorithm (e.g., object recognition algorithm, face recognition algorithm). The computed data may include, for example, but is not limited to, face recognition data, face tracking points, person segmentation data, object recognition data, object tracking points, object segmentation data, body tracking points, world tracking points, depth of scene, points in 3D space, surfaces in 3D space, point cloud data, optical flow data for motion, etc. For example, the first computing device 110 may use an object recognition algorithm to identify the surface 203 of the table 202 and may render the virtual object 204 on the surface 203 based on the object recognition data. As another example, the computing device may use a face recognition algorithm to identify and track a user face and render a virtual mask on the user face based on face recognition data. As another example, the computing device may use a tracking algorithm to track the relative position (e.g., distance, angle, orientation) of the surface 203 in the scene to the camera sensor 114 and may render the virtual object 204 on the surface 203 based on the relative position data (e.g., with different view angles).

In particular embodiments, the artificial-reality effect may be rendered based on the sensor data stream generated by one or more sensors associated with the first computing device 110. The sensor data stream may be generated by one or more sensors of the first computing device 110 when the video is being captured or when the artificial-reality effect is being rendered during a replaying process. In particular embodiments, the sensor data streams may be generated by one or more sensors associated with the first computing device 110 including, for example, but not limited to, an inertial measurement unit (IMU), an accelerometer, a device orientation sensor, a motion sensor, a rotation sensor, a velocity sensor, a device position sensor, a microphone, a light sensor, a touch sensor, a stylus sensor, a controller sensor, a depth sensor, a distance sensor, a temperature sensor, a GPS sensor, a camera sensor, a gesture sensor, a user input sensor, a point cloud sensor, etc. For example, the virtual object 204 may be rendered with different view angles to the user according to the camera sensor's position so that the virtual object 204 may appear to remain static on the table 202 as viewed by the user from the display when the user moves around the table 202. As another example, an interaction effect (e.g., rotating, moving, lifting up, putting down, hiding, etc.)
of the virtual object 204 may be rendered by the first computing device 110 based on the real-time user inputs from one or more user input sensors (e.g., a touch sensor, a controller sensor, a motion sensor, an accelerometer, a microphone, a camera sensor, a gesture sensor, or any suitable user input sensors). In particular embodiments, the sensor data stream may include information related to the camera sensor 114, for example, but not limited to, position, orientation, view angle, distance to the real-world object (e.g., the table 202), depth of view, moving speed, moving direction, acceleration, etc. The sensor data stream may further include information related to lighting condition, sound, user inputs (e.g., through touch sensors, stylus sensors, controller sensors, etc.), temperature, location (e.g., through a GPS sensor), etc.

FIG. 3 illustrates an example framework 300 for an artificial-reality effect based on an artificial-reality state information stream and one or more contextual data streams. In particular embodiments, the first computing device 110 may capture a video data stream 311, an artificial-reality state information stream 312, and one or more contextual data streams 313 of a scene 301. The first computing device 110 may capture a video data stream 311 of the scene 301 which includes a table 302. In particular embodiments, the video data stream 311 may be a raw video data stream or a video stream in any suitable format as captured by a camera sensor 114. In particular embodiments, the artificial-reality state information stream 312 may comprise one or more identifiers for rendered artificial-reality effects on the scene and parameters applied to the rendered artificial-reality effects. In the example illustrated in FIG. 3, the displayed scene 301 may comprise a virtual object 304 placed on top of the table 302 and a plurality of virtual bubbles 306. The artificial-reality state information data of the artificial-reality state information stream 312 may comprise identifiers for the virtual object 304 and the virtual bubbles 306. The artificial-reality state information data of the artificial-reality state information stream 312 may comprise parameters associated with the virtual object 304 including the size and the position of the virtual object 304. The artificial-reality state information data of the artificial-reality state information stream 312 may also comprise parameters associated with the virtual bubbles 306 including sizes of bubbles, locations of bubbles, and moving directions of bubbles. Because a size and a moving direction of a floating bubble may not be determined in advance when the bubble is rendered, the first computing device 110 may generate randomness data for those non-deterministic features by using a randomness model. The artificial-reality state information data of the artificial-reality state information stream 312 may also comprise the generated randomness data. In particular embodiments, the one or more contextual data streams 313 may comprise a sensor data stream including sensor data from one or more sensors associated with the computing device. The one or more sensors associated with the first computing device 110 may comprise IMU sensors, orientation sensors, motion sensors, velocity sensors, device position sensors, or any suitable sensors. The first computing device 110 may also use one or more microphones to capture the audio data stream associated with the video data stream 311.
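For illustration only, the following minimal Python sketch (names hypothetical, not from this disclosure) shows one way a randomness model might draw per-bubble values at capture time, and why recording those values matters: re-running the model yields different values, while the recorded values replay deterministically.

```python
# Hypothetical randomness model for the bubble effect. At capture time the
# model draws fresh random values; the drawn values are recorded in the
# artificial-reality state information stream so that a replay can reproduce
# the same non-deterministic features exactly.
import random

def bubble_randomness(num_bubbles: int, rng: random.Random) -> list:
    samples = []
    for _ in range(num_bubbles):
        samples.append({
            "size": rng.uniform(0.02, 0.10),       # bubble radius (scene units)
            "direction": rng.uniform(0.0, 360.0),  # moving direction (degrees)
            "speed": rng.uniform(0.1, 0.5),        # scene units per second
        })
    return samples

# Capture: the model is free-running, so each capture yields different values.
captured = bubble_randomness(4, random.Random())

# Re-running the model (as below) would generally NOT reproduce the captured
# effect; only the recorded values can, which is why they are stored.
rerun = bubble_randomness(4, random.Random())
print(captured == rerun)  # almost certainly False
```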
In particular embodiments, the one or more contextual data streams 313 may also comprise one or more computed data streams (e.g., object recognition data, object feature recognition data, face recognition data, face tracking data, etc.) based on the captured video data stream 311. The first computing device 110 may use an object recognition algorithm to recognize the table 302, the surface, and other object features, such as surfaces, corners, edges, lines, shapes, etc. Although this disclosure describes capturing an artificial-reality state information stream and one or more contextual data streams while rendering artificial-reality effects on a live video stream in a particular manner, this disclosure contemplates capturing an artificial-reality state information stream and one or more contextual data streams while rendering artificial-reality effects on a live video stream in any suitable manner.

In particular embodiments, the first computing device 110 may send the captured video data stream 311, the artificial-reality state information stream 312, and the contextual data streams 313 to a serializer 310. The serializer 310 may serialize the video data stream 311, the artificial-reality state information stream 312, and the contextual data streams 313 and store the streams into a storage 320 (e.g., a local storage of the computing system, a cloud, a server, an associated storage, a storage of another computing system, etc.). The serializer 310 may be a part of the first computing device 110. In particular embodiments, the computing system may compress the serialized data stream into a compressed format before storing it in the storage 320. Although this disclosure describes serializing and storing data streams for artificial-reality effects on a video stream in a particular manner, this disclosure contemplates serializing and storing data streams for artificial-reality effects on a video stream in any suitable manner.

In particular embodiments, a second computing device 110 may retrieve, from the storage 320, a video stream 331 that was recorded while a first artificial-reality effect was being displayed on the video stream. The second computing device 110 may extract a video data stream 331, an artificial-reality state information stream 332, and one or more contextual data streams 333 by using a de-serializer 330. The de-serializer 330 may de-serialize the retrieved serialized data stream into the video data stream 331, the artificial-reality state information stream 332, and the one or more contextual data streams 333. The de-serializer 330 may be a part of the second computing device 110. In particular embodiments, the second computing device 110 may be the first computing device 110. In particular embodiments, the second computing device 110 may be different from the first computing device 110. Each frame of the video stream 331 may comprise a real-world scene without the first artificial-reality effect. The first artificial-reality effect may comprise a virtual object, a three-dimensional effect, an interaction effect, a displaying effect, a sound effect, a lighting effect, or a tag. As an example and not by way of limitation, the video data stream 331 may comprise a scene with the table 302 without having any artificial-reality effect. Although this disclosure describes retrieving a video stream in a particular manner, this disclosure contemplates retrieving a video stream in any suitable manner.
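A minimal sketch of a serializer/de-serializer pair in the spirit of elements 310 and 330 follows. The JSON layout and field names are assumptions for illustration only, and the raw video data is elided to a file reference; the disclosure does not prescribe a storage format.

```python
# Hypothetical serialization: the artificial-reality state information stream
# and the contextual data streams are stored side by side with a reference to
# the separately stored video data, then recovered independently on replay.
import json

def serialize(video_ref: str, state_stream: list, contextual_streams: dict) -> str:
    return json.dumps({
        "video": video_ref,                # reference to the stored video stream
        "ar_state": state_stream,          # artificial-reality state information
        "contextual": contextual_streams,  # sensor and computed data streams
    })

def deserialize(blob: str):
    record = json.loads(blob)
    return record["video"], record["ar_state"], record["contextual"]

blob = serialize(
    "scene301.mp4",
    [{"ts": 0, "effects": ["bubbles"], "randomness": [0.42, 0.07]}],
    {"imu": [{"ts": 0, "accel": [0.0, 0.0, 9.8]}]},
)
video_ref, ar_state, contextual = deserialize(blob)
assert ar_state[0]["randomness"] == [0.42, 0.07]
```

Because the streams are recovered exactly as captured, a replaying device of any type can reproduce the recorded scene deterministically.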
In particular embodiments, the second computing device 110 may retrieve an artificial-reality state information stream 332 corresponding to the video stream 331. The artificial-reality state information stream 332 may comprise state information associated with the first artificial-reality effect while it was being displayed on the video stream. The artificial-reality state information may comprise an identifier for the rendered artificial-reality effect. The artificial-reality state information may comprise applied parameters associated with the rendered artificial-reality effect. The artificial-reality state information may comprise randomness data used for generating one or more non-deterministic features of the artificial-reality effect on the video stream 331. The one or more non-deterministic features may comprise a size of a rain drop, a path of a rain drop, a timing of a rain drop, a size of a snowflake, a path of a snowflake, a timing of a snowflake, a direction of a flying arrow, a trajectory of a flying arrow, a timing of a flying arrow, a size of a bubble, a moving path of a bubble, a moving speed of a bubble, or any suitable features that may not be determined in advance. As an example and not by way of limitation, the artificial-reality state information of the artificial-reality state information stream 332 may comprise an identifier for the virtual object 304 and an identifier for the virtual bubbles 306. The artificial-reality state information may comprise the size and the location of the virtual object 304. The artificial-reality state information may also comprise randomness data used for determining the size, location, and moving direction for each of the rendered virtual bubbles 306. Although this disclosure describes retrieving an artificial-reality state information stream in a particular manner, this disclosure contemplates retrieving an artificial-reality state information stream in any suitable manner.

In particular embodiments, the second computing device 110 may retrieve one or more contextual data streams 333 corresponding to the video stream 331. The first artificial-reality effect displayed on the video stream may have been rendered based on at least a portion of the one or more contextual data streams 333. In particular embodiments, the one or more contextual data streams 333 may comprise one or more sensor data streams generated by one or more sensors while the video stream is being captured. The one or more sensors may comprise an inertial measurement unit (IMU), an accelerometer, a device orientation sensor, a motion sensor, a velocity sensor, a device position sensor, a microphone, a light sensor, a touch sensor, a stylus sensor, a depth sensor, a temperature sensor, a GPS sensor, or a user input sensor. Although this disclosure describes retrieving a sensor data stream in a particular manner, this disclosure contemplates retrieving a sensor data stream in any suitable manner.

In particular embodiments, the one or more contextual data streams 333 may comprise a computed data stream generated by an object tracking algorithm performed on content of the video stream. The computed data may comprise face recognition data, face tracking points, person segmentation data, object recognition data, object tracking points, object segmentation data, body tracking points, world tracking points, a depth, a point in a three-dimensional space, a line in a three-dimensional space, a surface in a three-dimensional space, or a point cloud.
Although this disclosure describes retrieving a computed data stream in a particular manner, this disclosure contemplates retrieving a computed data stream in any suitable manner.

In particular embodiments, the second computing device 110 may determine the second artificial-reality effect to be displayed on the video stream while the video stream is replayed. In particular embodiments, the second computing device 110 may, by default, determine the second artificial-reality effect to be identical to the first artificial-reality effect. In particular embodiments, the second computing device 110 may determine the second artificial-reality effect based on an input of a user associated with the second computing device 110. The second computing device 110 may present choices for the second artificial-reality effect to the user in order to determine the second artificial-reality effect based on the input of the user. The choices for the second artificial-reality effect may comprise no artificial-reality effect, the first artificial-reality effect, or one or more artificial-reality effects different from the first artificial-reality effect. The second computing device 110 may receive an indication of a user choice from one or more input sensors associated with the second computing device 110. The second computing device 110 may determine the second artificial-reality effect based on the received indication of the user choice. As an example and not by way of limitation, the second computing device 110 may present choices to the user associated with the second computing device 110. The user may choose the first artificial-reality effect that was displayed on the video stream while the video stream was being captured. The second computing device 110 may determine that the second artificial-reality effect is identical to the first artificial-reality effect. As another example and not by way of limitation, the second computing device 110 may present choices to the user. The user may choose that no artificial-reality effect should be displayed on the video stream. As yet another example and not by way of limitation, the second computing device 110 may present choices to the user. The user may choose a new artificial-reality effect for the second artificial-reality effect. Although this disclosure describes determining a second artificial-reality effect to be displayed on the replayed video stream in a particular manner, this disclosure contemplates determining a second artificial-reality effect to be displayed on the replayed video stream in any suitable manner.

In particular embodiments, the artificial-reality effect rendering module 340 of the second computing device 110 may render a second artificial-reality effect based on at least a portion of the artificial-reality state information stream 332 and a portion of the one or more contextual data streams 333. The second artificial-reality effect may be identical to the first artificial-reality effect. In such cases, the artificial-reality effect rendering module 340 may render the second artificial-reality effect identical to the first artificial-reality effect based on at least a portion of the artificial-reality state information stream 332 and a portion of the one or more contextual data streams 333. The artificial-reality effect rendering module 340 may generate one or more non-deterministic features of the second artificial-reality effect based at least on the randomness data in the artificial-reality state information stream 332.
The second computing device 110 may display the second artificial-reality effect on the video stream. As an example and not by way of limitation, if the second computing device 110 determines that the second artificial-reality effect is identical to the first artificial-reality effect, the artificial-reality effect rendering module 340 of the second computing device 110 may identify the virtual object 304 based on the artificial-reality effect state information in the artificial-reality state information stream 332. The artificial-reality effect rendering module 340 may identify the surface of the table 302 based on computed data of the one or more contextual data streams 333. The artificial-reality effect rendering module 340 may also determine an orientation of the surface based on sensor data of the one or more contextual data streams 333. The artificial-reality effect rendering module 340 may determine the size and the location of the virtual object 304 based on the artificial-reality state information of the artificial-reality state information stream 332, and render the virtual object 304 of the determined size to the determined location. The artificial-reality effect rendering module 340 of the second computing device 110 may identify virtual bubbles 306 based on the artificial-reality effect state information in the artificial-reality state information stream 332. The artificial-reality effect rendering module 340 may determine the size, location, and moving direction of each of the one or more virtual bubbles 306 based on randomness data from the artificial-reality state information stream 332. The artificial-reality effect rendering module 340 may render the one or more bubbles 306 based on the determined size, location, and moving direction of each of the one or more bubbles. As the same randomness data may be used, the second artificial-reality effect may be exactly identical to the first artificial-reality effect. Although this disclosure describes rendering an artificial-reality effect on the replayed video stream identical to the artificial-reality effect rendered to the live-captured video stream in a particular manner, this disclosure contemplates rendering an artificial-reality effect on the replayed video stream identical to the artificial-reality effect rendered to the live-captured video stream in any suitable manner.

In particular embodiments, the second computing device 110 may replay the video stream on the screen associated with the second computing device 110 without rendering any artificial-reality effect if the second artificial-reality effect is determined to be null. Although this disclosure describes replaying the video stream without artificial-reality effect in a particular manner, this disclosure contemplates replaying the video stream without artificial-reality effect in any suitable manner.

In particular embodiments, the artificial-reality effect rendering module 340 of the second computing device 110 may render the second artificial-reality effect based on at least a portion of the artificial-reality state information stream 332 and a portion of the one or more contextual data streams 333 if the second artificial-reality effect is different from the first artificial-reality effect. The artificial-reality effect rendering module 340 of the second computing device 110 may utilize the randomness data in the artificial-reality state information stream for one or more non-deterministic features of the second artificial-reality effect 307.
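The deterministic replay path described above can be pictured as in the following sketch, which is illustrative only and uses hypothetical names: the rendering module consumes recorded randomness data rather than invoking a random number generator, so every replay produces the same bubbles.

```python
# Hypothetical replay of the bubble effect: per-bubble parameters come from
# the recorded randomness data, so no random number generator is invoked and
# the replayed effect matches the live-rendered one exactly.
def render_bubbles(frame_state: dict) -> list:
    rendered = []
    for bubble in frame_state["randomness"]:
        rendered.append(
            f"bubble size={bubble['size']:.3f} dir={bubble['direction']:.1f}"
        )
    return rendered

frame_state = {"randomness": [{"size": 0.05, "direction": 120.0},
                              {"size": 0.08, "direction": 300.0}]}
print(render_bubbles(frame_state))  # identical output on every replay
```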
The second computing device 110 may display the second artificial-reality effect 305, 307 on the video stream to construct a displayed scene 303.

FIG. 4 illustrates an example artificial-reality effect displayed on a replayed video stream. As an example and not by way of limitation, as illustrated in FIG. 4, the second computing device 110 may determine that the user wants to render a new artificial-reality effect on the replayed video stream. Based on the user input, the second computing device 110 may determine that a new virtual object 404 needs to be placed on the surface 203 of the table 202. The artificial-reality effect rendering module 340 may identify the surface 203 based on the computed data of the computed data stream. The artificial-reality effect rendering module 340 may determine the orientation of the surface 203 based on sensor data of the sensor data stream. The artificial-reality effect rendering module 340 may render the new virtual object 404 on the surface of the table 202 accordingly. In particular embodiments, the artificial-reality effect rendering module 340 may utilize the artificial-reality state information associated with the previously rendered virtual object 204 to render the new virtual object 404. Based on the user input, the second computing device 110 may determine that a plurality of virtual balloons 406A, 406B, 406C, and 406D needs to be rendered. The artificial-reality effect rendering module 340 may use randomness data of the artificial-reality state information stream 332 to determine a size, location, and moving direction of each of the plurality of virtual balloons 406A-406D. The artificial-reality effect rendering module 340 may render the plurality of virtual balloons 406A-406D based on the determined size, location, and moving direction for each of the plurality of balloons 406A-406D. The second computing device 110 may display the virtual object 404 and virtual balloons 406A-406D on the video stream. Although this disclosure describes rendering a new artificial-reality effect on the replayed video stream in a particular manner, this disclosure contemplates rendering a new artificial-reality effect on the replayed video stream in any suitable manner.

In particular embodiments, the second computing device 110 may receive, from one or more input sensors associated with the second computing device 110, an indication that a user associated with the second computing device 110 wants to switch to a third artificial-reality effect in the middle of replaying the video stream. The second computing device 110 may stop rendering the second artificial-reality effect on the video stream. The second computing device 110 may render the third artificial-reality effect based on at least a portion of the artificial-reality state information stream 332 and a portion of the one or more contextual data streams 333. The second computing device 110 may display the third artificial-reality effect on the video stream. The second computing device 110 may utilize the randomness data in the artificial-reality state information stream 332 for one or more non-deterministic features of the third artificial-reality effect. Although this disclosure describes switching an artificial-reality effect in the middle of replaying a video stream in a particular manner, this disclosure contemplates switching an artificial-reality effect in the middle of replaying a video stream in any suitable manner.
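For illustration, a sketch of such a mid-replay switch follows. The renderers and the replay driver are hypothetical stand-ins, not from this disclosure; the point is that the same recorded randomness data drives whichever effect is active, so nothing needs to be re-captured.

```python
# Hypothetical mid-replay effect switch: the recorded randomness samples are
# reused by a different effect renderer from the switch frame onward.
def render_bubble(sample: dict) -> str:
    return f"bubble size={sample['size']:.3f} dir={sample['direction']:.1f}"

def render_balloon(sample: dict) -> str:
    return f"balloon size={sample['size']:.3f} dir={sample['direction']:.1f}"

def replay(state_stream: list, effect, new_effect, switch_at: int) -> list:
    frames = []
    for i, frame_state in enumerate(state_stream):
        active = new_effect if i >= switch_at else effect  # user switch point
        frames.append([active(s) for s in frame_state["randomness"]])
    return frames

stream = [{"randomness": [{"size": 0.05, "direction": 120.0}]} for _ in range(4)]
for frame in replay(stream, render_bubble, render_balloon, switch_at=2):
    print(frame)  # bubbles for frames 0-1, balloons for frames 2-3
```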
FIG. 5 illustrates an example method 500 for rendering an artificial-reality effect on a post-capture video stream. The method may start at step 510, wherein a computing device may retrieve a video stream that was recorded while a first artificial-reality effect was being displayed on the video stream. Each frame of the video stream may comprise a real-world scene without the first artificial-reality effect. At step 520, the computing device may retrieve an artificial-reality state information stream corresponding to the video stream. The artificial-reality state information stream may comprise state information associated with the first artificial-reality effect while it was being displayed on the video stream. At step 530, the computing device may retrieve one or more contextual data streams corresponding to the video stream. The first artificial-reality effect displayed on the video stream may have been rendered based on at least a portion of the one or more contextual data streams. At step 540, the computing device may render a second artificial-reality effect based on at least a portion of the artificial-reality state information stream and a portion of the one or more contextual data streams. At step 550, the computing device may display the second artificial-reality effect on the video stream. Particular embodiments may repeat one or more steps of the method of FIG. 5, where appropriate. Although this disclosure describes and illustrates particular steps of the method of FIG. 5 as occurring in a particular order, this disclosure contemplates any suitable steps of the method of FIG. 5 occurring in any suitable order. Moreover, although this disclosure describes and illustrates an example method for rendering an artificial-reality effect on a post-capture video stream including the particular steps of the method of FIG. 5, this disclosure contemplates any suitable method for rendering an artificial-reality effect on a post-capture video stream including any suitable steps, which may include all, some, or none of the steps of the method of FIG. 5, where appropriate. Furthermore, although this disclosure describes and illustrates particular components, devices, or systems carrying out particular steps of the method of FIG. 5, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable steps of the method of FIG. 5.

Network Environment

FIG. 6 illustrates an example network environment 600 associated with a social-networking system. Network environment 600 includes a client system 630, a social-networking system 660, and a third-party system 670 connected to each other by a network 610. Although FIG. 6 illustrates a particular arrangement of client system 630, social-networking system 660, third-party system 670, and network 610, this disclosure contemplates any suitable arrangement of client system 630, social-networking system 660, third-party system 670, and network 610. As an example and not by way of limitation, two or more of client system 630, social-networking system 660, and third-party system 670 may be connected to each other directly, bypassing network 610. As another example, two or more of client system 630, social-networking system 660, and third-party system 670 may be physically or logically co-located with each other in whole or in part.
Moreover, although FIG. 6 illustrates a particular number of client systems 630, social-networking systems 660, third-party systems 670, and networks 610, this disclosure contemplates any suitable number of client systems 630, social-networking systems 660, third-party systems 670, and networks 610. As an example and not by way of limitation, network environment 600 may include multiple client systems 630, social-networking systems 660, third-party systems 670, and networks 610.

This disclosure contemplates any suitable network 610. As an example and not by way of limitation, one or more portions of network 610 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. Network 610 may include one or more networks 610.

Links 650 may connect client system 630, social-networking system 660, and third-party system 670 to communication network 610 or to each other. This disclosure contemplates any suitable links 650. In particular embodiments, one or more links 650 include one or more wireline (such as, for example, Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as, for example, Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as, for example, Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links 650 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link 650, or a combination of two or more such links 650. Links 650 need not necessarily be the same throughout network environment 600. One or more first links 650 may differ in one or more respects from one or more second links 650.

In particular embodiments, client system 630 may be an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of carrying out the appropriate functionalities implemented or supported by client system 630. As an example and not by way of limitation, a client system 630 may include a computer system such as a desktop computer, notebook or laptop computer, netbook, tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, augmented/virtual reality device, other suitable electronic device, or any suitable combination thereof. This disclosure contemplates any suitable client systems 630. A client system 630 may enable a network user at client system 630 to access network 610. A client system 630 may enable its user to communicate with other users at other client systems 630. In particular embodiments, client system 630 may include a web browser 632, such as MICROSOFT INTERNET EXPLORER, GOOGLE CHROME or MOZILLA FIREFOX, and may have one or more add-ons, plug-ins, or other extensions, such as TOOLBAR or YAHOO TOOLBAR.
A user at client system 630 may enter a Uniform Resource Locator (URL) or other address directing the web browser 632 to a particular server (such as server 662, or a server associated with a third-party system 670), and the web browser 632 may generate a Hyper Text Transfer Protocol (HTTP) request and communicate the HTTP request to the server. The server may accept the HTTP request and communicate to client system 630 one or more Hyper Text Markup Language (HTML) files responsive to the HTTP request. Client system 630 may render a webpage based on the HTML files from the server for presentation to the user. This disclosure contemplates any suitable webpage files. As an example and not by way of limitation, webpages may render from HTML files, Extensible Hyper Text Markup Language (XHTML) files, or Extensible Markup Language (XML) files, according to particular needs. Such pages may also execute scripts such as, for example and without limitation, those written in JAVASCRIPT, JAVA, MICROSOFT SILVERLIGHT, combinations of markup language and scripts such as AJAX (Asynchronous JAVASCRIPT and XML), and the like. Herein, reference to a webpage encompasses one or more corresponding webpage files (which a browser may use to render the webpage) and vice versa, where appropriate.

In particular embodiments, social-networking system 660 may be a network-addressable computing system that can host an online social network. Social-networking system 660 may generate, store, receive, and send social-networking data, such as, for example, user-profile data, concept-profile data, social-graph information, or other suitable data related to the online social network. Social-networking system 660 may be accessed by the other components of network environment 600 either directly or via network 610. As an example and not by way of limitation, client system 630 may access social-networking system 660 using a web browser 632, or a native application associated with social-networking system 660 (e.g., a mobile social-networking application, a messaging application, another suitable application, or any combination thereof) either directly or via network 610.

In particular embodiments, social-networking system 660 may include one or more servers 662. Each server 662 may be a unitary server or a distributed server spanning multiple computers or multiple datacenters. Servers 662 may be of various types, such as, for example and without limitation, web server, news server, mail server, message server, advertising server, file server, application server, exchange server, database server, proxy server, another server suitable for performing functions or processes described herein, or any combination thereof. In particular embodiments, each server 662 may include hardware, software, or embedded logic components or a combination of two or more such components for carrying out the appropriate functionalities implemented or supported by server 662.

In particular embodiments, social-networking system 660 may include one or more data stores 664. Data stores 664 may be used to store various types of information. In particular embodiments, the information stored in data stores 664 may be organized according to specific data structures. In particular embodiments, each data store 664 may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases.
Particular embodiments may provide interfaces that enable a client system 630, a social-networking system 660, or a third-party system 670 to manage, retrieve, modify, add, or delete the information stored in data store 664. In particular embodiments, social-networking system 660 may store one or more social graphs in one or more data stores 664. In particular embodiments, a social graph may include multiple nodes—which may include multiple user nodes (each corresponding to a particular user) or multiple concept nodes (each corresponding to a particular concept)—and multiple edges connecting the nodes. Social-networking system 660 may provide users of the online social network the ability to communicate and interact with other users. In particular embodiments, users may join the online social network via social-networking system 660 and then add connections (e.g., relationships) to a number of other users of social-networking system 660 to whom they want to be connected. Herein, the term “friend” may refer to any other user of social-networking system 660 with whom a user has formed a connection, association, or relationship via social-networking system 660. In particular embodiments, social-networking system 660 may provide users with the ability to take actions on various types of items or objects, supported by social-networking system 660. As an example and not by way of limitation, the items and objects may include groups or social networks to which users of social-networking system 660 may belong, events or calendar entries in which a user might be interested, computer-based applications that a user may use, transactions that allow users to buy or sell items via the service, interactions with advertisements that a user may perform, or other suitable items or objects. A user may interact with anything that is capable of being represented in social-networking system 660 or by an external system of third-party system 670, which is separate from social-networking system 660 and coupled to social-networking system 660 via a network 610. In particular embodiments, social-networking system 660 may be capable of linking a variety of entities. As an example and not by way of limitation, social-networking system 660 may enable users to interact with each other as well as receive content from third-party systems 670 or other entities, or to allow users to interact with these entities through an application programming interface (API) or other communication channels. In particular embodiments, a third-party system 670 may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A third-party system 670 may be operated by a different entity from an entity operating social-networking system 660. In particular embodiments, however, social-networking system 660 and third-party systems 670 may operate in conjunction with each other to provide social-networking services to users of social-networking system 660 or third-party systems 670. In this sense, social-networking system 660 may provide a platform, or backbone, which other systems, such as third-party systems 670, may use to provide social-networking services and functionality to users across the Internet. In particular embodiments, a third-party system 670 may include a third-party content object provider.
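The social-graph structure described above (user nodes, concept nodes, and edges connecting them) can be made concrete with a small sketch. The class and field names below are illustrative assumptions, not the patent's data model.

```python
# Illustrative sketch of a social graph with user nodes, concept nodes,
# and undirected edges; a production store may type and direct its edges.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str                      # "user" or "concept"
    edges: set = field(default_factory=set)

class SocialGraph:
    def __init__(self):
        self.nodes = {}

    def add_node(self, node_id, kind):
        self.nodes[node_id] = Node(node_id, kind)

    def add_edge(self, a, b):
        # Record the connection on both endpoints.
        self.nodes[a].edges.add(b)
        self.nodes[b].edges.add(a)

graph = SocialGraph()
graph.add_node("alice", "user")
graph.add_node("bob", "user")
graph.add_node("acme-shoes", "concept")
graph.add_edge("alice", "bob")          # a "friend" connection
graph.add_edge("alice", "acme-shoes")   # a "like" of a concept
```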
A third-party content object provider may include one or more sources of content objects, which may be communicated to a client system 630. As an example and not by way of limitation, content objects may include information regarding things or activities of interest to the user, such as, for example, movie show times, movie reviews, restaurant reviews, restaurant menus, product information and reviews, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects. In particular embodiments, social-networking system 660 also includes user-generated content objects, which may enhance a user's interactions with social-networking system 660. User-generated content may include anything a user can add, upload, send, or “post” to social-networking system 660. As an example and not by way of limitation, a user communicates posts to social-networking system 660 from a client system 630. Posts may include data such as status updates or other textual data, location information, photos, videos, links, music or other similar data or media. Content may also be added to social-networking system 660 by a third-party through a “communication channel,” such as a newsfeed or stream. In particular embodiments, social-networking system 660 may include a variety of servers, sub-systems, programs, modules, logs, and data stores. In particular embodiments, social-networking system 660 may include one or more of the following: a web server, action logger, API-request server, relevance-and-ranking engine, content-object classifier, notification controller, action log, third-party-content-object-exposure log, inference module, authorization/privacy server, search module, advertisement-targeting module, user-interface module, user-profile store, connection store, third-party content store, or location store. Social-networking system 660 may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof. In particular embodiments, social-networking system 660 may include one or more user-profile stores for storing user profiles. A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. As an example and not by way of limitation, if a user “likes” an article about a brand of shoes, the category may be the brand, or the general category of “shoes” or “clothing.” A connection store may be used for storing connection information about users. The connection information may indicate users who have similar or common work experience, group memberships, hobbies, educational history, or are in any way related or share common attributes. The connection information may also include user-defined connections between different users and content (both internal and external). A web server may be used for linking social-networking system 660 to one or more client systems 630 or one or more third-party systems 670 via network 610.
The web server may include a mail server or other messaging functionality for receiving and routing messages between social-networking system 660 and one or more client systems 630. An API-request server may allow a third-party system 670 to access information from social-networking system 660 by calling one or more APIs. An action logger may be used to receive communications from a web server about a user's actions on or off social-networking system 660. In conjunction with the action log, a third-party-content-object log may be maintained of user exposures to third-party-content objects. A notification controller may provide information regarding content objects to a client system 630. Information may be pushed to a client system 630 as notifications, or information may be pulled from client system 630 responsive to a request received from client system 630. Authorization servers may be used to enforce one or more privacy settings of the users of social-networking system 660. A privacy setting of a user determines how particular information associated with a user can be shared. The authorization server may allow users to opt in to or opt out of having their actions logged by social-networking system 660 or shared with other systems (e.g., third-party system 670), such as, for example, by setting appropriate privacy settings. Third-party-content-object stores may be used to store content objects received from third parties, such as a third-party system 670. Location stores may be used for storing location information received from client systems 630 associated with users. Advertisement-pricing modules may combine social information, the current time, location information, or other suitable information to provide relevant advertisements, in the form of notifications, to a user. Computer System FIG. 7 illustrates an example computer system 700. In particular embodiments, one or more computer systems 700 perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems 700 provide functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems 700 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 700. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate. This disclosure contemplates any suitable number of computer systems 700. This disclosure contemplates computer system 700 taking any suitable physical form. As an example and not by way of limitation, computer system 700 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these.
Where appropriate, computer system 700 may include one or more computer systems 700; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 700 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 700 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 700 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system 700 includes a processor 702, memory 704, storage 706, an input/output (I/O) interface 708, a communication interface 710, and a bus 712. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, processor 702 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 702 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 704, or storage 706; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 704, or storage 706. In particular embodiments, processor 702 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 702 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 702 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 704 or storage 706, and the instruction caches may speed up retrieval of those instructions by processor 702. Data in the data caches may be copies of data in memory 704 or storage 706 for instructions executing at processor 702 to operate on; the results of previous instructions executed at processor 702 for access by subsequent instructions executing at processor 702 or for writing to memory 704 or storage 706; or other suitable data. The data caches may speed up read or write operations by processor 702. The TLBs may speed up virtual-address translation for processor 702. In particular embodiments, processor 702 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 702 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 702 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 702. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. 
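The fetch-decode-execute cycle summarized above can be mimicked in software with a toy sketch; real processors implement this in hardware with registers and caches, and the miniature instruction set below is invented purely for illustration.

```python
# Toy sketch of the fetch-decode-execute cycle: fetch an instruction,
# decode its opcode, execute it, and write results back to memory.
memory = {0: ("LOAD", 7), 1: ("ADD", 3), 2: ("STORE", 9), 3: ("HALT", None)}
data = {7: 5, 9: 0}
accumulator, pc = 0, 0

while True:
    opcode, operand = memory[pc]        # fetch the instruction
    pc += 1
    if opcode == "LOAD":                # decode and execute
        accumulator = data[operand]
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":             # write the result back
        data[operand] = accumulator
    elif opcode == "HALT":
        break

print(data[9])  # prints 8 (5 loaded, 3 added, result stored)
```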
In particular embodiments, memory 704 includes main memory for storing instructions for processor 702 to execute or data for processor 702 to operate on. As an example and not by way of limitation, computer system 700 may load instructions from storage 706 or another source (such as, for example, another computer system 700) to memory 704. Processor 702 may then load the instructions from memory 704 to an internal register or internal cache. To execute the instructions, processor 702 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 702 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 702 may then write one or more of those results to memory 704. In particular embodiments, processor 702 executes only instructions in one or more internal registers or internal caches or in memory 704 (as opposed to storage 706 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 704 (as opposed to storage 706 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 702 to memory 704. Bus 712 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 702 and memory 704 and facilitate accesses to memory 704 requested by processor 702. In particular embodiments, memory 704 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 704 may include one or more memories 704, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. In particular embodiments, storage 706 includes mass storage for data or instructions. As an example and not by way of limitation, storage 706 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 706 may include removable or non-removable (or fixed) media, where appropriate. Storage 706 may be internal or external to computer system 700, where appropriate. In particular embodiments, storage 706 is non-volatile, solid-state memory. In particular embodiments, storage 706 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 706 taking any suitable physical form. Storage 706 may include one or more storage control units facilitating communication between processor 702 and storage 706, where appropriate. Where appropriate, storage 706 may include one or more storages 706. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, I/O interface 708 includes hardware, software, or both, providing one or more interfaces for communication between computer system 700 and one or more I/O devices. 
Computer system 700 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 700. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 708 for them. Where appropriate, I/O interface 708 may include one or more device or software drivers enabling processor 702 to drive one or more of these I/O devices. I/O interface 708 may include one or more I/O interfaces 708, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface. In particular embodiments, communication interface 710 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 700 and one or more other computer systems 700 or one or more networks. As an example and not by way of limitation, communication interface 710 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 710 for it. As an example and not by way of limitation, computer system 700 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 700 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 700 may include any suitable communication interface 710 for any of these networks, where appropriate. Communication interface 710 may include one or more communication interfaces 710, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. In particular embodiments, bus 712 includes hardware, software, or both coupling components of computer system 700 to each other. 
As an example and not by way of limitation, bus 712 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 712 may include one or more buses 712, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.
Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages. 17012856 meta platforms, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:37AM Apr 27th, 2022 08:37AM Facebook Technology Software & Computer Services
nasdaq:fb Facebook Apr 26th, 2022 12:00AM Mar 22nd, 2019 12:00AM https://www.uspto.gov?id=USD0949907-20220426 Display screen with an animated graphical user interface D949907 The ornamental design for a display screen with an animated graphical user interface, as shown and described. 1 FIG. 1 is a front view of a first image in a sequence for a display screen with an animated graphical user interface showing the claimed design; and, FIG. 2 is a front view of a second image thereof. The appearance of the transitional image sequentially transitions between the images shown in FIGS. 1-2. The process or period in which one image transitions to another image forms no part of the claimed design. The broken lines in the figures showing a display screen, electronic device, and portions of the graphical user interface are for illustrative purposes only and form no part of the claimed design. 29684610 meta platforms, inc. USA S1 Design Patent Open D14/487 15 Apr 27th, 2022 08:37AM Apr 27th, 2022 08:37AM Facebook Technology Software & Computer Services
nasdaq:fb Facebook Apr 26th, 2022 12:00AM Aug 24th, 2018 12:00AM https://www.uspto.gov?id=USD0949909-20220426 Display screen with animated graphical user interface D949909 The ornamental design for a display screen with animated graphical user interface, as shown and described. 1 FIG. 1 is a front view of a first image in the sequence for a display screen with animated graphical user interface, showing our new design; FIG. 2 is a front view of a second image thereof; FIG. 3 is a front view of a third image thereof; FIG. 4 is a front view of a fourth image thereof; FIG. 5 is a front view of a fifth image thereof; and, FIG. 6 is a front view of a sixth image thereof. The broken lines illustrate the display screen and portions of the graphical user interface and form no part of the claimed design. The appearance of the transitional graphical user interface sequentially transitions between images shown in FIGS. 1 through 6. The process or period in which one image transitions to another image forms no part of the claimed design. 29661131 meta platforms, inc. USA S1 Design Patent Open D14/488 15 Apr 27th, 2022 08:37AM Apr 27th, 2022 08:37AM Facebook Technology Software & Computer Services
nasdaq:fb Facebook Apr 26th, 2022 12:00AM Jan 27th, 2020 12:00AM https://www.uspto.gov?id=US11314329-20220426 Neural decoding with co-learning for brain computer interfaces Embodiments relate to decoding architecture that rapidly decodes light-derived signals to extract predicted user actions or intents (e.g., commands) in relation to interactions with objects (e.g., virtual objects, physical objects), such that the user can manipulate the objects or otherwise receive assistance without manually interacting with an input device (e.g., touch input device, audio input device, etc.). The decoding architecture thus enables a neural decoding process with a neural signal stream as an input, and provides feedback to the user, where the feedback is used to train the neural decoding algorithm and user behavior. The neural signals can be blood oxygenation level dependent (BOLD) signals associated with activation of different articulators of the motor cortex, and signals can characterize both actual and imagined motor cortex-related behaviors. With training of the decoding algorithm, rapid calibration of the BCI for new users can be achieved. 11314329 1. A method comprising: responsive to detecting a set of light signals from a head region of a user as a user interacts with an object in an environment, generating a neural data stream capturing brain activity of the user; extracting a predicted user action upon processing the neural data stream with a neural decoding model, wherein the predicted user action comprises at least one of an actual action and an imagined action performed in relation to the object, wherein the neural decoding model extracts the predicted user action based on a combination of an empirical probability of performance of the predicted user action by the user, a light signal-decoded probability of performance of the predicted user action by the user, and a confidence value associated with the light signal-decoded probability; providing a feedback stimulus to the user based upon the predicted user action; and generating an updated neural decoding model based upon a response of the user to the feedback stimulus. 2. The method of claim 1, wherein the set of light signals comprises diffuse optical tomography (DOT) signals. 3. The method of claim 1, wherein detecting the set of light signals comprises transforming light from the head region into a set of electrical signals. 4. The method of claim 1, wherein the predicted user action comprises a command intended to manipulate a state of the object. 5. The method of claim 4, wherein extracting the predicted user action comprises transforming the neural data stream into a set of speech components mapped to a set of motor cortex articulators associated with the head region. 6. The method of claim 5, wherein extracting the predicted user action comprises transforming a sequence of activated motor cortex articulators, captured in the set of light signals, into a phoneme chain representative of the command, wherein the phoneme chain is a trained representation of a spoken version of the command. 7. The method of claim 1, wherein the object is a digital object in an electronic content environment, and wherein providing the feedback stimulus to the user comprises modulating a state of the digital object. 8. 
The method of claim 1, wherein the object is a physical object in a physical environment of the user, and wherein providing the feedback stimulus to the user comprises generating control instructions in a computer-readable medium for modulating a state of the physical object. 9. The method of claim 1, further comprising: responsive to detecting a second set of light signals from a head region of a user as a user interacts with the object in response to the feedback stimulus, generating a second neural data stream capturing brain activity of the user. 10. The method of claim 9, further comprising extracting a second predicted user action upon processing the second neural data stream with the updated neural decoding model. 11. The method of claim 1, wherein the neural decoding model extracts the predicted user action by: determining the empirical probability and a light signal-decoded probability of performance of the predicted user action by the user, determining the confidence value associated with the light signal-decoded probability, and upon determining satisfaction of a threshold condition by the confidence value, preferentially extracting the predicted user action based on the light signal-decoded probability. 12. The method of claim 1, wherein the neural decoding model extracts the predicted user action by: determining the empirical probability and a light signal-decoded probability of performance of the predicted user action by the user, determining the confidence value associated with the light signal-decoded probability, and upon determining dissatisfaction of a threshold condition by the confidence value, preferentially extracting the predicted user action based on the empirical probability. 13. The method of claim 1, wherein the extracting of the predicted user action is performed within a time threshold that is referenced to a time point associated with a change in the environment. 14. A system comprising: a light detector coupled to an interface configured to be worn at a head region of a user; an electronics subsystem coupled to the light detector; and a computing subsystem in communication with the electronics subsystem and comprising a non-transitory computer-readable storage medium containing computer program code for operating in: a neural data stream-generating mode that outputs a neural data stream in response to generation of a set of electrical signals by the light detector and to conditioning of the set of electrical signals by the electronics subsystem, the neural data stream associated with an interaction between the user and an object; an action prediction mode that outputs a predicted user action upon processing the neural data stream with a neural decoding model, wherein the neural decoding model extracts the predicted user action based on a combination of an empirical probability of performance of the predicted user action by the user, a light signal-decoded probability of performance of the predicted user action by the user, and a confidence value associated with the light signal-decoded probability; a feedback mode that outputs a feedback stimulus for the user based upon the predicted user action; and a model updating mode that generates an updated neural decoding model based upon a response of the user to the feedback stimulus. 15. The system of claim 14, wherein the light detector comprises an array of complementary metal oxide semiconductor (CMOS) pixels optically coupled to the interface by an array of optical fibers. 16.
The system of claim 14, wherein, in the action prediction mode, the computing subsystem comprises architecture for transforming a sequence of activated motor cortex articulators, captured in the set of electrical signals, into a phoneme chain representative of the command, where the phoneme chain is a trained representation of a spoken version of the command. 17. The system of claim 16, wherein the object is a digital object in an electronic content environment, and wherein, in relation to the feedback mode, the computing subsystem comprises architecture for generating control instructions for modulating a state of the digital object. 18. The system of claim 14, wherein, in the model updating mode, the computing subsystem comprises architecture for generating a second neural data stream capturing brain activity of the user, in response to generation of a second set of electrical signals as a user interacts with the object in response to the feedback stimulus, and extracting a second predicted user action upon processing the second neural data stream with the updated neural decoding model. 19. The system of claim 14, wherein the computing subsystem comprises architecture for determining the empirical probability and the light signal-decoded probability of performance of the predicted user action by the user, determining the confidence value associated with the light signal-decoded probability, and upon determining satisfaction of a threshold condition by the confidence value, preferentially extracting the predicted user action based on the light signal-decoded probability. 20. The system of claim 14, further comprising an output device in communication with the computing subsystem and operable, in the feedback mode, to render the feedback stimulus to the user. 20 CROSS REFERENCE TO RELATED APPLICATIONS This application claims the benefit of U.S. Provisional Application No. 62/797,578, filed Jan. 28, 2019, which is incorporated by reference in its entirety. BACKGROUND This disclosure relates generally to brain computer interface systems, and specifically to a wearable brain computer interface system with an increased dynamic range sensor. Communication via physical actions, such as textual entry or manipulation of a user interface on a mobile or other device is a key form of interaction amongst individuals today. Additionally, certain online systems, such as online social networks, thrive on the network of users that frequent the online social network on a consistent basis. One component of online social networks is the ability of a user to interact with objects (e.g., electronically provided content) in an online or virtual setting. In many scenarios, detection of interactions requires the user to type or enter words and phrases through a physical means (e.g., a keyboard or clicking on a virtual keyboard) and/or to audibly provide commands. Physically entering words and phrases or providing audible commands may be cumbersome or impossible for certain individuals. Additionally, and more generally, physical entry of words and phrases for all individuals is often an inefficient way to communicate, as typing or otherwise manipulating various user interfaces can be cumbersome. Brain computer interface (BCI) systems are being explored in relation to some of these problems. However, traditional brain computer interface (BCI) systems typically implement electrical signal detection methods to characterize brain activity. 
Such systems are typically used in clinical or academic settings, and often are not designed for use by users during their normal daily lives. In relation to user factors, such systems often lack features that allow users to properly position sensing components in a repeatable and reliable manner, as well as to maintain contacts between sensing components and desired body regions as a user moves throughout his or her daily life. Miniaturization of such BCI systems also provides challenges. Additionally, fields exploring other sensing regimes for detection and decoding of brain activity are nascent, and traditional sensors used for other sensing regimes have insufficient dynamic range and often provide limitations in readout speed, thereby limiting their use in applications where rapid decoding of brain activity is important. SUMMARY Disclosed herein are systems and methods for enabling a user to communicate using a brain computer interface (BCI) system through unspoken communications. As used hereafter, unspoken methods and/or unspoken communications refer to communications that can be performed by an individual through non-verbal (e.g., without verbal sounds), non-physical (e.g., not inputted by an individual through a physical means such as a keyboard, mouse, touchscreen, and the like), and/or non-expressive (e.g., not expressed through facial features, body language, and the like) means. Generally, a BCI system interprets an individual's brain activity to characterize intentions of the individual in interacting with content in the environment of the user. In particular embodiments, the BCI system includes a light source subsystem, a detector subsystem, and an interface including optical fibers coupled to the light source subsystem and/or detector subsystem, and to a body region of a user. The light source subsystem, the interface, and the detector subsystem are coupled to other electronics providing power and/or computing functionality. The BCI system components are also configured in a wearable form factor that allows a user to repeatably and reliably position light transmitting and light sensing components at the body region. As such, the system can include components appropriate for a small form factor that is portable and worn discreetly at a head region of the user. Embodiments also relate to a sensor system for a brain computer interface (BCI) that enables detection and decoding of brain activity by optical tomography. The sensor system includes an array of pixels arranged as grouped pixel units to provide increased dynamic range. One or more of the grouped pixel units can operate in a saturated mode while providing information useful for decoding brain activity. Furthermore, the grouped pixel units are arranged to enable fast readout by a pixel scanner, thereby increasing detection and decoding ability by systems implementing the sensor design. The grouped pixel units of the sensor system are aligned with optical fibers of an interface to a body region of a user, where the optical fibers can be retained in position relative to the grouped pixel units by an optically transparent substrate that provides mechanical support while minimizing factors associated with divergence of light transmitted through optical fibers. Embodiments also relate to a brain computer interface system that includes a retainer and cap assembly for transmitting light to a user's head region and transmitting optical signals from the user's head region to a detector subsystem. 
The retainer is configured to secure the cap assembly to a head region of a user. The cap assembly includes an array of ports that retain an array of ferrules. A first ferrule in the array of ferrules can include a channel that extends at least partially through the body of the ferrule. The channel retains a fiber optic cable such that the fiber optic cable is in communication with a head region of a user during a mode of operation. The cap includes an elastic portion such that, in a mode of operation, the cap and array of ferrules are biased towards the head region of a user. Embodiments also relate to decoding architecture that rapidly (e.g., in real time or near real time) decodes light-derived signals to extract predicted user actions or intents (e.g., commands) in relation to interactions with objects (e.g., virtual objects, physical objects), such that the user can manipulate the objects or otherwise receive assistance without manually interacting with an input device (e.g., touch input device, audio input device, etc.). The decoding architecture thus enables a neural decoding process with a neural signal stream as an input, and provides feedback to the user, where the feedback is used to train the neural decoding algorithm and user behavior. The neural signals can be blood oxygenation level dependent (BOLD) signals associated with activation of different articulators of the motor cortex, and signals can characterize both actual and imagined motor cortex-related behaviors. With training of the decoding algorithm, rapid calibration of the BCI for new users can additionally be achieved. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1A is a block diagram of a BCI system for detecting and decoding brain activity of a user, in accordance with one or more embodiments. FIG. 1B is a schematic of an embodiment of the BCI system shown in FIG. 1A. FIG. 2 depicts schematics of portions of a light source subsystem, in accordance with one or more embodiments. FIG. 3 depicts an example of light emission operation in relation to signal detection, in accordance with one or more embodiments. FIG. 4 depicts a schematic of an interface to a head region of a user, in accordance with one or more embodiments. FIG. 5A depicts a schematic and cross sectional view of an embodiment of an interface to a head region of a user, according to an embodiment. FIG. 5B depicts a schematic of a first mode of operation and a second mode of operation of an interface to a head region of a user, according to the embodiment of FIG. 5A. FIG. 5C depicts a schematic and cross sectional view of an embodiment of an interface to a head region of a user, according to an embodiment. FIG. 6A depicts an isometric view of a cap, with integrated ferrules for interfacing with a head region of a user according to an embodiment. FIG. 6B depicts a top, side, cross sectional, and front view of the embodiment of a cap shown in FIG. 6A. FIG. 6C depicts a top view of a cap with an array of ports, and a top view and side view of a port, according to the embodiment of FIG. 6A. FIG. 6D depicts side and cross sectional views of a ferrule, according to the embodiment of FIG. 6A. FIG. 6E depicts isometric, side, and cross sectional views of a ferrule with an integrated fiber optic cable, according to the embodiment of FIG. 6A. FIG. 7A depicts a schematic and cross sectional view of an embodiment of an interface to a head region of a user, according to an embodiment. FIG. 
7B depicts a schematic and cross sectional view of a variation of the embodiment of an interface to a head region of a user shown in FIG. 7A. FIG. 7C depicts a first mode of operation and a second mode of operation of an interface to a head region of a user, according to the embodiment of FIG. 7B. FIG. 8A depicts schematics of portions of a detector subsystem, in accordance with one or more embodiments. FIG. 8B depicts a schematic of portions of a detector subsystem, including multiple rows of grouped pixel units, in accordance with one or more embodiments. FIG. 9 depicts a flow chart of a method for generating and processing optical signals, in accordance with one or more embodiments. FIG. 10A depicts unsaturated and saturated profiles of grouped pixel unit outputs, in accordance with one or more embodiments. FIG. 10B depicts overlap in outputs associated with different grouped pixel unit, in accordance with one or more embodiments. FIG. 11A depicts scanning operation with different exposure settings, in accordance with one or more embodiments. FIG. 11B depicts scanning operation with different exposure settings, different power settings, and different frame settings, in accordance with one or more embodiments. FIG. 12A depicts a schematic of a camera subsystem, in accordance with one or more embodiments. FIG. 12B depicts a schematic of the camera subsystem shown in FIG. 12A. FIG. 13 depicts a schematic of an embodiment of a system, with computing components, for implementing a neural decoding process. FIG. 14A depicts a flow chart of a method for neural decoding, in accordance with one or more embodiments. FIG. 14B depicts a flow diagram of an embodiment of the method for neural decoding shown in FIG. 14A. FIG. 14C depicts a schematic of a neural data stream capturing information associated with different articulators, in relation to an embodiment of the method shown in FIG. 14A. FIG. 14D depicts a process flow of an embodiment of the method shown in FIG. 14A. FIG. 14E depicts an expanded view of a portion of the process flow shown in FIG. 14D. FIG. 14F depicts an expanded view of a portion of the process flow shown in FIG. 14D. The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. For example, a letter after a reference numeral, such as “150a,” indicates that the text refers specifically to the element having that particular reference numeral. A reference numeral in the text without a following letter, such as “150,” refers to any or all of the elements in the figures bearing that reference numeral (e.g. “computing component 150” in the text refers to reference numerals “computing component 150a” and/or “computing component 150b” in the figures). DETAILED DESCRIPTION 1. Overview Embodiments relate to a brain computer interface (BCI) including a light source subsystem, an interface transmitting light from the light source subsystem to a body region of a user, and a detector subsystem coupled to the interface and configured to receive light signals from the body region of the user. 
The light source subsystem, the interface, and the detector subsystem are coupled to other electronics providing power and/or computing functionality. The BCI is designed to be worn at a head region of a user and to generate optical signals that can be used to characterize brain activity of the user, where decoded brain activity can be used as inputs to control other systems and/or electronic content provided to the user. In relation to brain activity sensing, embodiments also relate to a light source subsystem of the BCI, where the light source subsystem is provided in a miniaturized form factor that outputs light with appropriate characteristics to enable measurement of blood oxygenation by the detector subsystem, where oxygenation levels can be determined relative to a reference state. In some embodiments, the detector subsystem is configured to measure other types of optical brain signals. The light source subsystem also includes individually addressable light emitters that cooperate with readout operations of a pixel scanner associated with the detector subsystem. In relation to wearability, embodiments also relate to a wearable component that interfaces the light source subsystem and other system components to the head region of the user during use, in order to assess brain activity in a portable manner. The wearable component includes aspects that reliably bias optodes coupled to the light source subsystem and/or the detector subsystem to the user's head as the user moves about in his or her daily life. In relation to brain activity sensing and generation of outputs for optical tomography, embodiments also relate to a detector subsystem that can be included with the BCI, where the sensor system enables detection and decoding of brain activity by optical tomography. In some embodiments, detection methodologies other than optical tomography may be used by the detector subsystem. The sensor system includes an array of pixels arranged as grouped pixel units to provide increased dynamic range. One or more of the grouped pixel units can operate in a saturated mode while providing information useful for decoding brain activity. Furthermore, the grouped pixel units are arranged to enable fast readout by a pixel scanner (e.g., line scanner), thereby increasing detection and decoding ability by systems implementing the sensor design. The grouped pixel units of the sensor system are aligned with optical fibers of an interface to a body region of a user, where the optical fibers can be retained in position relative to the grouped pixel units by an optically transparent substrate that provides mechanical support while minimizing factors associated with divergence of light transmitted through optical fibers. 2. System Environment FIG. 1A is a block diagram of a system 100a (e.g., a BCI system) for detecting and decoding brain activity of a user, in accordance with one or more embodiments. FIG. 1B is a schematic of an embodiment of the BCI system 100b shown in FIG. 1A. The system 100 includes a light source subsystem 110, an interface 120 transmitting light from the light source subsystem 110 to a head region of a user, and a detector subsystem 130 coupled to the interface 120 and configured to receive light signals from the head region of the user. 
The light source subsystem 110, the detector subsystem 130, and/or additional sensors 140 can be coupled to a power component 150 and a computing component 160, which processes and decodes neural stream signals for delivery of feedback to a user device 180 through a network 170. As described in relation to other system components above and below, the housing 105a can house one or more of: the light source subsystem 110a, the detector subsystem 132a, the power component 150a, the computing component 160a, a data link 162a, and additional sensors 140a. The housing 105a can also house at least a portion of the interface 120a that is head-mountable for positioning signal transmission components at the head region of a user. The housing 105a can be head-mounted or can be coupled to the user in another manner. The housing can be composed of a polymer material and/or any other suitable materials. The system 100 is thus designed to be worn by a user during use. Emitters 112 of the light source subsystem 110 and sensors 132 of the detector subsystem 130 can be positioned and retained at the head region of the user through the interface 120, through use of optical fibers 121 and 128 and optodes 123 that maintain positions of terminal regions of the optical fibers at the head region of the user. The interface 120, with optodes 123, is configured to enable characterization of brain activity from one or more regions of the user's brain through a non-invasive method. Specifically, the one or more emitters 112 emit light and the one or more sensors 132 capture signals from the head region of the user, based on the emitted light. In some embodiments, the interface 120 is designed to fully cover the head of the user. In other embodiments, the interface 120 is designed to cover a portion of the head, depending on the region(s) of interest associated with applications for decoding brain activity. In various embodiments, the emitters 112 and sensors 132 enable optical tomography methods for receiving neural signals from the user, where the signals can be subsequently decoded and used for other applications (e.g., as control inputs that allow the user to control behavior of devices in his or her environment). The emitters 112 can emit a signal that is absorbed and/or attenuated by neurons or networks of neurons in the region of the brain, and/or cause a physiological response that can be measured. The sensors 132 detect a signal (e.g., backscattered light) from the same region of the brain. In one embodiment, the signal emitted by the emitters 112 and captured by the sensors 132 is light in the visible spectrum. Additionally or alternatively, in other embodiments, the signal emitted by the emitters 112 and captured by the sensors 132 is light in the non-visible spectrum. The light source subsystem 110 is in communication with a power component 150 that enables the emitters 112 to transmit light. The light source subsystem 110 can also be in communication with a computing component 160 of system electronics. For example, the light source subsystem 110 can receive inputs from the computing component 160 and can provide inputs to the emitters 112 to coordinate light transmission from the emitters 112 (e.g., in relation to operation of the detector subsystem 130 in coordination with the light source subsystem 110).
More specifically, the light source subsystem 110 receives instructions for transitioning the emitters between operation states (e.g., on states, off states) and/or within variations of operation states (e.g., a high power mode in the on state, a low power mode in the on state, etc.). The light source subsystem and the emitters 112 are described in more detail below. The detector subsystem 130 receives the detected signals from the sensors 132, through coupling of the sensors 132 to the interface 120 to the user. The detector subsystem 130 can also be in communication with the power component to enable sensing, signal pre-processing, and/or signal transmission functions of the detector subsystem 130. The detector subsystem 130 can also be in communication with the computing component 160 of system electronics, in order to support detection operation modes (e.g., sensor scanning modes) and/or other operation modes (e.g., signal transmission modes) of the detector subsystem 130. In relation to the sensors 132 of the detector subsystem 130, the sensors 132 can include complementary metal oxide semiconductor (CMOS) architecture and/or another architecture, as described in more detail below. The system 100 can additionally include other sensors 140 for detecting user behavior. The additional sensors 140 can also be coupled to the power component 150 and/or the computing component 160 and provide signals useful for decoding brain activity of the user, as described in more detail below. While the computing component 160 of the system can be implemented onboard the wearable components of the system 100, the computing component 160 can additionally or alternatively be supported by or in communication with other computing devices 165 and/or a user device 180, for instance, through the network 170. Examples of computing devices 165 and/or user devices 180 include a personal computer (PC), a desktop computer, a laptop computer, a notebook, a tablet PC executing an operating system, for example, a Microsoft Windows-compatible operating system (OS), Apple OS X, and/or a Linux distribution. In other embodiments, the computing devices and/or user devices can be any device having computer functionality, such as a personal digital assistant (PDA), mobile telephone, smartphone, wearable computing device, or any other suitable computing device. The computing component 160 and/or other computing devices can execute instructions (e.g., computer code) stored on a computer-readable storage medium in order to perform the steps and processes described herein for enabling unspoken communications for control of other systems by a user. Collectively, the computing component 160 and any other computing devices, with the network 170, can operate as a computing system for implementation of methods according to specific applications of use of the system 100. Generally, the computing system determines intentions of the user from signals provided by the detector subsystem 130, where the intentions describe user wishes in relation to interacting with electronic content or a virtual assistant. The computing system can determine the intentions that correspond to the neural signals that were gathered by the detector subsystem 130 by applying a predictive model that is trained to predict intentions from neural activity. The computing system can train the predictive model using training data including gathered experimental datasets corresponding to neural activity of previously observed individuals.
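Claims 11 and 12 of this patent recite how the neural decoding model chooses between the empirical probability and the light signal-decoded probability using a confidence threshold. The sketch below illustrates that selection rule under stated assumptions; the function name, data shapes, and threshold value are hypothetical, not taken from the patent.

```python
# Hedged sketch of the selection rule recited in claims 11-12: prefer the
# light signal-decoded probabilities when their confidence clears a
# threshold; otherwise fall back on the empirical (historical) probabilities.
def extract_predicted_action(empirical_probs, decoded_probs, confidence,
                             threshold=0.8):
    """Each *_probs argument maps candidate actions to probabilities."""
    if confidence >= threshold:
        source = decoded_probs      # trust the neural-signal decoder
    else:
        source = empirical_probs    # fall back on the user's history
    return max(source, key=source.get)

action = extract_predicted_action(
    empirical_probs={"select": 0.6, "scroll": 0.4},
    decoded_probs={"select": 0.2, "scroll": 0.8},
    confidence=0.9,
)
print(action)  # "scroll": confidence 0.9 clears the threshold
```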
Intentions can be decoded into communication-related components (e.g., phonemes, words, phrases, sentences, etc.). In some related embodiments, the computing system can enable a user to access an online social networking system, and therefore allows users to communicate with one another through the online social networking system. As such, the computing system may communicate on behalf of the individual through the network 170 with other computing devices (e.g., computing device 165, user device 180) of the social networking system. In some embodiments, the computing system can communicate on behalf of the individual to other computing devices using the predicted phonemes, words, phrases, and/or sentences. The network 170 facilitates communications between the one or more computing devices. The network 170 may be any wired or wireless local area network (LAN) and/or wide area network (WAN), such as an intranet, an extranet, or the Internet. In various embodiments, the network 170 uses standard communication technologies and/or protocols. Examples of technologies used by the network 170 include Ethernet, 802.11, 3G, 4G, 802.16, or any other suitable communication technology. The network 170 may use wireless, wired, or a combination of wireless and wired communication technologies. Examples of protocols used by the network 170 include transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), file transfer protocol (FTP), or any other suitable communication protocol. 3. System—Neuroimaging Modalities The system 100 described above operates to enable optical tomography or optical topography-associated modalities for decoding neural activity. The system 100 can characterize blood/tissue characteristics of the user through diffuse optical tomography/topography (DOT) modalities, in relation to characterizing cerebral blood flow, cerebral blood oxygenation, and/or other features indicative of brain activity. The system 100 can additionally or alternatively support other optical tomography or near-infrared spectroscopy approaches, including one or more of: functional near-infrared spectroscopy (fNIRS), functional time-domain near-infrared spectroscopy (TD-fNIRS), diffuse correlation spectroscopy (DCS), speckle contrast optical tomography (SCOT), time-domain interferometric near-infrared spectroscopy (TD-iNIRS), hyperspectral imaging, polarization-sensitive speckle tomography (PSST), spectral decorrelation, auto-fluorescence tomography, and photoacoustic imaging. 4. System Components 4.1 System—Light Sources FIG. 2 depicts a schematic of a light source subsystem 210, in accordance with one or more embodiments. The light source subsystem 210 includes one or more emitters 212. The emitter(s) 212 function to emit light having suitable parameters, where the light interacts with a region of interest (e.g., the head region of the user), and is subsequently transmitted to a detector subsystem (described below) for characterization of the region of interest. The light source subsystem 210 includes laser light emission elements but can include light emitting diode (LED) elements or other types of light emitters in alternative embodiments. In relation to laser light emission elements, the light source subsystem 210 can include vertical cavity surface emitting laser (VCSEL) elements with semiconductor architecture for perpendicular beam emission.
Use of VCSEL elements contributes to a compact light emission configuration that provides suitable power for wearable applications requiring high power light output for optical detection of characteristics of a multidimensional region of interest (e.g., a head region of a user). In relation to laser light emission elements, the light source subsystem 210 can alternatively include emitters with conventional semiconductor architecture for edge beam emission from surfaces cleaved from a semiconductor wafer. Emitters 212 of the light source subsystem 210 include emitters configured to emit light in the visible spectrum and emitters configured to emit light in the non-visible spectrum. For brain-computer interface (BCI) applications involving characterization of brain activity, the emitters include emitters configured to emit red wavelength light and near-infrared light. However, in alternative variations, the emitters 212 can be configured to emit only a single wavelength of light, or other wavelengths of light. Each emitter of the light source subsystem 210 has its own die physically coupled to an electronics substrate 223 (e.g., printed circuit board) of system electronics, such that each emitter of the light source subsystem 210 is individually addressable in a compact format. In some embodiments, the emitters are separated from each other by at least the diameter of the optical fibers. In relation to addressability, each emitter is transitionable between an activated state for emitting light (with different output settings) and a deactivated state. However, in alternative variations of the light source subsystem 210, multiple emitters can be associated with a single die physically coupled to an electronics substrate 223 to enable addressability of emitters in groups, or all emitters of the light source subsystem 210 can be associated with a single die physically coupled to the electronics substrate 223. In alternative embodiments where each emitter does not have its own die, the emitters can, however, still be individually addressable using other wiring architecture. The emitters 212 of the light source subsystem 210 are arranged in a 2D array. The 2D array can be a square array, where the square array can have equal numbers of emitters along its width and height. The size of the array of emitters, in terms of number of emitters, distribution of emitters in space, and spacing between emitters, can be configured based on the size of each individual emitter (in relation to size constraints of the wearable system), as well as morphological factors of the set of optical fibers 221 optically coupling the emitters to other system components, as described in further detail below. In alternative embodiments, however, the emitters 212 can be arranged in a polygonal array, ellipsoidal array, or in any other suitable manner (e.g., an amorphous array). In an example, the emitters 212 are arranged in an 8×8 square array, spaced with a pitch of 1 mm, and collectively have a footprint of 1 cm2. The emitters 212 can operate in a continuous emission mode for continuous transmission of light. The emitters 212 can also operate in a pulsed mode, where periods of light emission are interspersed with periods of non-emission at a desired frequency. Pulses can thus be associated with one or more of: a pulse profile having width characteristics and other pulse shape aspects (e.g., peaks, troughs, etc.); power draw (e.g., in relation to power amplitude); temporal features (e.g., periodicity, frequency of pulses, etc.); and any other suitable pulse features.
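The individually addressable arrangement lends itself to a simple control abstraction. The sketch below is hypothetical: the driver interface, array dimensions, and power values are assumptions for illustration, since this description does not specify a software interface to the emitter dies.

    # Hypothetical sketch of individually addressable control of an 8x8
    # emitter array, as described above. Hardware I/O is abstracted away;
    # set_state() stands in for whatever driver the electronics substrate uses.
    import itertools
    import time

    class EmitterArray:
        def __init__(self, rows=8, cols=8):
            # One entry per emitter die: None (deactivated) or output power in mW.
            self.states = {addr: None
                           for addr in itertools.product(range(rows), range(cols))}

        def set_state(self, row, col, power_mw):
            """Address a single emitter; power_mw=None deactivates it."""
            self.states[(row, col)] = power_mw

        def pulse(self, addresses, power_mw, width_s):
            """Drive a subset of emitters for one pulse, then deactivate them."""
            for addr in addresses:
                self.set_state(*addr, power_mw)
            time.sleep(width_s)  # pulse width of the emission period
            for addr in addresses:
                self.set_state(*addr, None)

    array = EmitterArray()
    # For example, pulse one column of emitters at 100 mW for 1 ms.
    array.pulse([(r, 0) for r in range(8)], power_mw=100, width_s=0.001)

Group addressing (multiple emitters per die) would simply map several addresses to one driver channel.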
In a specific example, the emitters 212 operate in a pulsed mode with pulse width modulation (PWM), with a power draw corresponding to 27% efficiency at 100 mW of power output. In relation to non-continuous transmission modes, light emission by the emitters 212 of the light source subsystem 210 can be coordinated with detection by the detector subsystem, where emission of light (e.g., by a subset of emitters) is timed with light detection in phases. In one example, as shown in FIG. 3, the emitters of the light source subsystem emit a first light profile (e.g., a pulse of red light provided within a first time window), followed by a second light profile (e.g., a pulse of infrared light provided within a second time window), followed by a third light profile (e.g., a dark period provided within a third time window), where each pulse and dark period is detected sequentially by an embodiment of the detector subsystem described below. As such, scanning by the detector subsystem to generate and decode light signal data can be carefully timed with operation of the light source subsystem 210. The emitters 212 of the light source subsystem 210 can further be configured to transmit light through optical elements that manipulate light along a path of transmission to a surface of the region of interest. Optical elements can include one or more of: filters, lenses, mirrors, collimation elements, waveguides, other beam shaping elements, and any other suitable optics. As shown in FIG. 2, the emitters 212 are coupled with first ends of a set of optical fibers 221 for transmission of light to a region of interest, through the wearable interface 230 described below. Each fiber of the set of optical fibers 221 includes an optically transparent glass core. Each fiber can also include sheathing layers including one or more of: a reflective layer (e.g., to provide total internal reflection of light from a portion of the region of interest to the corresponding grouped pixel unit), a buffer layer (e.g., to protect the fiber), and any other suitable layer. Additionally or alternatively, each fiber can be separated from adjacent fibers by a material (e.g., optically opaque medium, epoxy) that prevents cross-transmission of light between fibers, and also promotes coupling between first ends of the fibers and the emitters 212 of the light source subsystem 210. As such, each fiber can be isolated (e.g., optically, thermally, etc.) from other fibers. In morphology, each fiber 221 has a length that minimizes distance-related signal loss factors between the emitter and the region of interest. Each fiber can have a rectangular/square cross section to enable compact bundling of fibers. In a specific example, each fiber has a cross sectional width of 400 μm, thereby providing complete coverage of the apertures of the emitters, where, in this example, the VCSEL emitters each have 3 apertures collectively having a footprint of 50 μm in diameter; however, in alternative embodiments, the fibers can have a circular cross section or any other suitable cross section, with any other suitable dimensions. As described below in relation to the wearable interface 230, second ends of the set of optical fibers 221 are coupled to the wearable interface in a manner that provides controlled light transmission to the region of interest associated with the wearable interface. 4.2 System—Wearable Interface
FIG. 4 depicts a head-mountable interface in accordance with one or more embodiments, where the interface 420 functions to interface light emitters of a light source subsystem 410 (such as an embodiment of the light source subsystem described above) to the head region of the user during use, and to interface the head region with a detector subsystem 430 (described in more detail below), in order to assess brain activity in a portable manner. The interface 420 is configured to be worn by a user and includes a cap 422 and an array of ferrules 472 supported by the cap 422. The cap 422 can also include a retainer 428 configured to secure the cap 422 to the head of the user (or another body region of the user, in relation to signal detection from other body regions). As described below, a ferrule 424 of the array of ferrules 472 can include channels or other positioning features for retaining optical fibers associated with the light source subsystem 410 and/or the detector subsystem 430 in position. As such, the array of ferrules can include ferrules supporting optical fibers for transmission of light toward and/or away from the target region. The cap 422 and array of ferrules 472 also function to bias the optical fiber ends against the target regions in a manner that is comfortable to the user during use. The biasing force can be created by elastic forces provided by deformation of the cap or elastic elements coupled to or otherwise in communication with the ferrules, as described in more detail below. FIG. 5A shows a first embodiment of the interface shown in FIG. 4. A cap 522a can be configured to couple to a user by a retainer 528a. The cap 522a includes a first broad surface 551a, a second broad surface 552a opposing the first broad surface, and an array of ports retaining an array of ferrules, including ferrule 524a. A cross sectional view (right) shows a ferrule 524a passing through the first broad surface 551a and the second broad surface 552a of the cap 522a. The ferrule 524a is retained in a port 553a of the array of ports. The ferrule 524a includes a channel 554a configured to retain a fiber optic cable 555a in position. The fiber optic cable 555a is positioned such that minimal bending or movement of the fiber optic cable 555a can occur within the channel 554a. The fiber optic cable 555a is configured to interact with the head region of a user and can be coupled to a light source subsystem and/or a detector subsystem as described in greater detail below. The channel 554a passes through the body 514a of the ferrule 524a and terminates at an opening at an end of the ferrule 524a configured to interface with the head region of the user. In other embodiments, the channel 554a can terminate within the body of the ferrule 524a, such that the channel 554a does not have an opening at an end of the ferrule 524a configured to interface with the head region of the user. In one embodiment, a cover 556a can be coupled to an end region of the ferrule 524a, the cover configured to seal the channel 554a from the external environment in order to protect internal components (e.g., the fiber optic cable 555a). The cover 556a can also be composed of a material that provides light manipulation functions. For instance, the cover 556a can be composed of an optically transparent material that allows light transmission without significant loss. In another embodiment, the cover 556a can be composed of an optically translucent material to facilitate diffusion of stimulation light.
In another embodiment, one or more regions of the cover 556a can include lenses that affect light transmission through or into the cover 556a. In one embodiment, such as the embodiment of FIG. 5A, the cap 522a is composed of a material that allows the cap 522a and the ferrule 524a to be biased toward a head region of a user. For example, FIG. 5B shows a relaxed state (top) and a compressed state (bottom) of an interface to a head region of a user, according to an embodiment. When a cap 522b is placed on the head region of a user, an end region of a ferrule 524b is configured to interact with the head region of a user. As shown in FIG. 5B (top), the system is in a baseline (e.g., relaxed) mode, whereby the array of ferrules is not biased toward a head region of the user. As shown in FIG. 5B (bottom), a normal force between the ferrule 524b and the head region of the user is produced by the stretching of the cap 522b (e.g., as the user stretches the cap to wear the system), such that stretching the cap produces a stressed mode of the cap assembly that biases the array of ferrules toward the head region of the user. The cap 522b material can thus be subjected to tensile and compressive forces, where tension produces a normal force that compresses the array of ferrules toward the head region of the user and supports the interaction of the ferrule 524b with the user. Then, during an operation mode of the system, a fiber optic cable 555b is retained in position in the channel 554b such that a signal can be transmitted from a first end of the fiber optic cable 555b to a second end of the fiber optic cable 555b, where either the first end or the second end of the fiber optic cable 555b is in communication with the head of the user. FIG. 5C shows a second embodiment of the interface shown in FIG. 4. Similar to the embodiment shown in FIG. 5A, the second embodiment includes a cap 522c that can be configured to couple to a user by a retainer 528c. The cap 522c includes a first broad surface 551c, a second broad surface 552c, and an array of ports retaining an array of ferrules. A cross section of the cap 522c (right) shows a port 553c passing through the first broad surface 551c and the second broad surface 552c. A ferrule 524c is retained in the port 553c. The ferrule 524c includes a first channel 554c and a second channel 556c where the first channel passes through the entire body of the ferrule 524c, and the second channel 556c passes through the body of the ferrule 524c and can terminate within the body of the ferrule 524c, in some embodiments. A first fiber optic cable 555c is retained in the first channel 554c by an optically clear adhesive 558 such that the first fiber optic cable 555c can be in communication with a user during a mode of operation. For example, during a compressed state described above in relation to FIG. 5B, the first fiber optic cable 555c is optically coupled to a head region of a user for transmission of light from the head region to a detector subsystem. As such, the first fiber optic cable can be coupled to a detector subsystem, as described above in relation to FIG. 1B. The second channel 556c retains a second fiber optic cable 557c such that one end of the fiber optic cable 557c is physically isolated from the external environment and/or used to transmit stimulation light in a desired manner. The second fiber optic cable 557c can be coupled to the interior surface of the second channel 556c and/or the ferrule 524c. 
Additionally, the second fiber optic cable 557c can be retained in position by an optically clear adhesive. In a mode of operation, a signal can be transmitted from the user through the ferrule 524c to the second fiber optic cable. In one embodiment, the second fiber optic cable 557c can be coupled to a light source subsystem as described above in relation to FIG. 1B. The first channel 554c can be separated (e.g., optically, electrically, physically) from the second channel 556c such that signal transmission in the first fiber optic cable 555c does not interfere with signal transmission in the second fiber optic cable 557c. FIG. 6A shows an isometric view from the top right of a cap and retainer assembly, according to an embodiment. The cap 622 includes an array of ferrules 672 that protrude from the first broad surface and the second broad surface of the cap 622. In one embodiment, the cap 622 and the retainer 628 are one continuous component formed as a continuum of material. In other embodiments, the cap 622 and the retainer 628 can be separate components configured to interact with each other (e.g., coupled by an adhesive, attached by a hinge mechanism, coupled to a strap, etc.). In the embodiment shown in FIG. 6A, a portion of the array of ferrules 672 is coupled to a detector subsystem and each ferrule in the array includes a channel that extends through the body of the ferrule as described above in relation to FIG. 5A. In the same embodiment, a portion of the array of ferrules 672 is coupled to a light source subsystem and each ferrule in the array of ferrules 672 includes a channel that extends partially through the body of the ferrule as described below in relation to FIGS. 6D-6E. Alternatively, a portion of the array of ferrules 672 can include a ferrule with two channels as described above in relation to FIG. 5C. FIG. 6B shows a top view (first), a front view (second), a cross sectional view (third), and a side view (fourth) of the embodiment of FIG. 6A. The top view (first) illustrates a first broad surface 651 of a cap 622 retaining an array of ferrules 672. The array of ferrules 672 is positioned in a hexagonal configuration (e.g., close packed configuration, in the orientation shown in FIG. 6B, top). Alternatively, the array of ferrules 672 could be configured as any other type of array (e.g., rectangular array, circular array). The cap 622 is coupled to a retainer 628 by two slots 680 that can be configured to interact with a body portion of a user, such as an ear, to couple the array of ferrules 672 to the head region of the user. In alternative embodiments, the cap 622 can include a different attachment mechanism (e.g., a clip, a buckle). The front view (second) of FIG. 6B illustrates the array of ferrules 672 passing through a first broad surface 651 and a second broad surface 652. The mid portion of the cross section of the array of ferrules 672 is sealed from the external environment by the cap 622. The array of ferrules 672 protrudes from the first broad surface 651 and the second broad surface 652. In other embodiments, a portion of the array of ferrules 672 can be recessed within or flush with the cap 622 for user comfort or operational purpose. The cross sectional view (third) of FIG. 6B illustrates an internal section of a region of the cap 622. The array of ferrules 672 has a hexagonal configuration such that the ferrules appear to alternate with the cap 622 in a cross sectional view.
The side view (fourth) of the assembly illustrates the cap 622 including the first broad surface 651 and the second broad surface 652. FIG. 6C shows a schematic of a cap and retainer assembly, according to the embodiment of FIG. 6A. The top view (FIG. 6C, top) illustrates a cap 622 with an array of ports 670. The array of ports 670 passes through the first broad surface 651. The array of ports 670 has a hexagonal close packed configuration in order to retain an array of ferrules. A port 653 can be shaped such that it is able to retain a ferrule of a specified size and shape. A zoomed-in view (FIG. 6C, bottom left) of a port 653 illustrates the circular shape of the port 653 and the orientation of a port 653 within the array of ports 670. In alternative embodiments, a port 653 can be any shape suitable for retaining a ferrule in position. In alternative embodiments, the port 653 can be oriented such that it has more or fewer regions of adjacency with other ports (e.g., in non-hexagonal close packed configurations). A side view (FIG. 6C, bottom right) of a port 653 shows the port 653 passing through the first broad surface 651 and the second broad surface 652 of the cap 622. The width of the port 653 is largest at the first broad surface 651 and the second broad surface 652. The width of the mid-section of the port 653 is the smallest width such that a ferrule can interlock with the port 653 (e.g., in a lock-and-key mechanism), where embodiments of ferrules that interface with the cap 622 are described in more detail below. In alternative embodiments, the cap 622 can be configured to couple with a ferrule in another manner, for instance, with one or more of: an adhesive, a magnetic interface, a thermal bond, a friction-inducing interface (e.g., a press fit), and/or another manner. In morphology, the cap 622 can be designed to fully cover the head of a user. In other embodiments, the cap 622 is designed to cover a portion of the head, depending on the brain region from which the emitters and sensors are intended to gather neural signals, as described above. For example, if the sensors are to gather neural signals corresponding to neurons in the occipital lobe, then the head cap can be designed to reside in contact with the back of the user's head. The cap 622 and retainer 628 can also be shaped such that they can interact with other regions of the body (e.g., neck, arm, leg, etc.). In relation to material composition, the cap 622 can be composed of a single material or a composite material to provide suitable physical properties for support of the system 600. The material can have mechanical properties (e.g., ductility, strength) suited to support interaction of the system 600 with a user. A cap 622 is configured to be worn by a user on his/her head region. As such, the cap 622 can be composed of a breathable material in order to provide comfort to the user. For example, the cap 622 can be composed of a breathable and comfortable material such as nylon or polyester. In relation to mechanical properties, the material(s) of the cap 622 can have a compressive strength, a shear strength, a tensile strength, a strength in bending, an elastic modulus, a hardness, a derivative of the above mechanical properties and/or other properties that enable the cap 622 to deform in one or more directions without fracture and/or damage to other system 600 components (e.g., ferrule 624, fiber optic cable 655). The cap can be composed of an elastic material such as silicone, rubber, nylon, spandex, etc. In particular, the cap 622 can have an elastic modulus of 0.5-10 MPa; in alternative embodiments, the cap 622 can have an elastic modulus of any suitable value.
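As a rough, order-of-magnitude illustration of the biasing described above in relation to FIG. 5B (every numeric value here other than the modulus range is an assumption, not a specification from this description), the normal force produced by stretching the cap can be estimated from Hooke's law:

    # Hypothetical estimate of the normal force biasing one ferrule toward
    # the head when the cap is stretched. Only the modulus range comes from
    # the text above; the strain and cross-sectional area are assumptions.
    E = 1.0e6        # elastic modulus, Pa (within the 0.5-10 MPa range above)
    strain = 0.10    # assumed 10% stretch of the cap when worn
    area = 2.0e-6    # assumed load-bearing cross section per ferrule, m^2

    force = E * strain * area  # stress = E * strain; force = stress * area
    print(f"~{force:.2f} N biasing the ferrule against the head")  # ~0.20 N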
In relation to electrical properties, the material(s) of the cap 622 can have a conductivity, a resistivity, a derivative of the above electrical properties and/or other properties that support signal transmission through a fiber optic cable 655 retained in a channel of a ferrule. For example, the cap 622 may be composed of an insulative material in order to reduce noise interference between components of the system 600. In relation to optical properties, the material(s) of the cap 622 can have optic properties suited to facilitating signal transmission through a fiber optic cable. For instance, the cap 622 can be composed of an optically opaque material, in order to prevent excess light signals from bleeding to other portions of the system 600 in an undesired manner. FIG. 6D is a schematic of a ferrule, according to the embodiment of the system shown in FIG. 6A. A side view (FIG. 6D, left) shows a ferrule 624 that can be retained in a port, such as the port 653 described above in relation to FIG. 6C. The ferrule 624 has a first region 626 and a second region 627 that are joined at a mid-region 630, where the width of the ferrule 624 is reduced at the mid-region 630 such that the ferrule 624 can interlock with a port 653. In the embodiment of FIG. 6D, the ferrule 624 has a varying width along the length of the body 625. The width of the ferrule 624 is smallest at the end of the second region 627 such that the end of the second region 627 can comfortably interact with a head region of a user. In alternative embodiments, the ferrule 624 can have a constant width along the length of the body 625 and be coupled to the cap in another manner, as described above. The ferrule 624 can also have wider or smaller end regions, depending on the design considerations of the system (e.g., manufacturability, size, material, etc.). The ferrule 624, shown by the cross sectional view of FIG. 6D (right), includes a body 625 and a channel 654. The channel 654 terminates within the second region 627 of the body 625. Alternatively, the channel 654 can extend through the first region 626 and the second region 627 of the ferrule 624. The ferrule 624 can include a cover 629 coupled to the body 625 and/or other components within the channel 654. The cover 629 can function to seal the channel 654 from the external environment in order to protect internal components. The cover 629 can also function as the interface between the user and the ferrule 624. The cover 629 can be a separate component coupled to (e.g., by an adhesive, interlocking mechanism, etc.) or ensheathing the body 625. Alternatively, the cover 629 can be a continuous piece of the body 625. The cover 629 can also be composed of a material that provides light manipulation functions. For instance, the cover 629 can be composed of an optically transparent material that allows light transmission without significant loss. In another embodiment, the cover 629 can be composed of an optically translucent material to facilitate diffusion of stimulation light. In another embodiment, one or more regions of the cover 629 can include lenses that affect light transmission through or into the cover. FIG. 6E illustrates a ferrule including a fiber optic cable, according to the embodiment of FIG. 6A.
An isometric view (FIG. 6E, left) and a side view (FIG. 6E, middle) show a fiber optic cable 655 entering through the first region 626 of the ferrule 624. The fiber optic cable 655 extends partially through the body 625 into the second region 627 of the ferrule 624. A cover 629 can be optically coupled to the fiber optic cable 655. Alternatively, the fiber optic cable 655 can be optically coupled to the body 625 of the ferrule 624. A cross sectional view (right) of the ferrule illustrates the fiber optic cable 655 retained in the channel 654 of the body 625. The body 625 is configured to interact with the user head region at the second region 627 such that the fiber optic cable 655 can transmit a signal between the first region 626 of the ferrule 624, the second region 627 of the ferrule 624, and the user head region. The fiber optic cable 655 in FIG. 6E can be coupled to a light emission subsystem where light is transmitted from an end region of the fiber optic cable 655 to the head region of the user. Alternatively, the fiber optic cable 655 can be coupled to a light detection subsystem. In morphology, the ferrule 624 can have protruding and/or recessed regions. In the embodiment of FIGS. 6D-6E, the body 625 has a recessed ring about its external perimeter that forms a portion of an interlocking mechanism such that it can mate with a port 653 of the array of ports, for instance, where the port includes a protrusion about one or more portions of its internal perimeter, the protrusion operable to lock with the recess of the body 625. The width of a middle portion of the body 625 may be smaller than one or both end regions of the body 625 such that the ferrule 624 is retained in position in a port 653 without an adhesive or other attachment mechanism. In an alternative embodiment, the body 625 of a ferrule 624 has a constant width. In still other embodiments, the body 625 may be cylindrical, polygonal, or any other suitable shape for supporting the fiber optic cable 655. In relation to material composition, the ferrule 624 can be composed of a single material or a composite material to provide suitable physical properties for supporting the fiber optic cable 655. The material can have mechanical properties (e.g., ductility, strength) suited to support the fiber optic cable 655. In relation to mechanical properties, the material(s) of the ferrule 624 can have a compressive strength, a shear strength, a tensile strength, a strength in bending, an elastic modulus, a hardness, a derivative of the above mechanical properties and/or other properties that enable the ferrule 624 to move with respect to the cap 622 while maintaining its position within a port 653. In the embodiment shown in FIG. 6E, the body 625 is composed of polycarbonate; however, the body can be composed of another material (e.g., polymeric material, non-polymeric material). In relation to electrical properties, the material(s) of the ferrule 624 can have a conductivity, a resistivity, a derivative of the above electrical properties and/or other properties that support signal transmission through the fiber optic cable 655 retained in the channel 654 of the ferrule 624. The ferrule 624 may be composed of an insulative material in order to prevent excess noise from propagating from the light emission subsystem to the user and/or to other components of the system. In relation to optical properties, the material(s) of a ferrule can have optic properties that enable signal transmission through the fiber optic cable 655.
The channel 654 of the ferrule can include an optically opaque adhesive configured to facilitate signal transmission between the first region 626 and the second region 627 of the ferrule 624. The body 625 of the ferrule can also be composed of an optically opaque material. In alternative embodiments, the ferrule 624 can be composed of any suitable material. FIG. 7A shows a schematic of a cap, according to an alternative embodiment. FIG. 7A includes a cap 722a configured to couple to a user head region by a retainer 728a. A cross section of the cap 722a shows the components of a ferrule 724a according to an embodiment. A port 753a passes through a first broad surface 751a and a second broad surface 752a of the cap 722a. The port 753a retains the ferrule 724a in position. In the embodiment of FIG. 7A, the port 753a and the ferrule 724a have the same width. In other embodiments, such as FIG. 7B described below, the width of the ferrule 724a can be smaller than the width of the port 753a at a region along the body of the ferrule 724a (e.g., in order to form a portion of a locking mechanism, as described above in relation to FIGS. 6D-6E). The port 753a and the ferrule 724a can, however, be configured in another suitable manner, in relation to mating and/or coupling with each other. The ferrule 724a includes multiple components retained within the port 753a. The ferrule 724a has a second body portion 762a protruding from the second broad surface 752a. The second body portion 762a is coupled to a first body portion 761a of the ferrule 724a, within a channel 754a that extends through the first body portion 761a of the ferrule 724a. The channel 754a can be continuous with a cavity within the second body portion 762a, in order to allow passage of and/or retain a fiber optic cable 755a. Furthermore, as shown in FIG. 7A, the channel 754a includes a region that allows the second body portion 762a to translate relative to the first body portion 761a within a range of motion, in order to maintain coupling with the user. The channel 754a retains a spring 760a and a fiber optic cable 755a. In one or more user modes, as described in more detail below, the spring 760a can be compressed such that the spring 760a biases a tip of the second body portion 762a against the head region of the user to allow for light transmission through the fiber optic cable 755a. Also shown in FIG. 7A, the second body portion 762a is rounded (e.g., hemispherical) such that its interaction with the head region is comfortable to the user. However, in alternative embodiments, a terminal region of the second body portion 762a can have any other suitable morphology in relation to interfacing with the body of the user. FIG. 7B shows a schematic of a cap, according to a variation of the embodiment shown in FIG. 7A. FIG. 7B includes a cap 722b configured to couple to a head region of a user by a retainer 728b. The cross section A-A shows a ferrule 724b retained in a port 753b in the cap 722b. The port 753b passes through the first broad surface 751b and the second broad surface 752b and is configured to retain the ferrule 724b in position. The ferrule 724b includes a channel 754b configured to retain a fiber optic cable 755b and a spring 760b. The channel 754b is sealed from the external environment by a lid 759b. The channel 754b can extend partially or completely through the first body portion 761b and the second body portion 762b of the ferrule 724b. The ferrule 724b is shaped such that it interlocks with the port 753b.
A second body portion 762b of the ferrule 724b protrudes from the second broad surface 752b and is rounded to provide comfort to the user during use. During one or more use modes, the spring 760b is compressed such that the rounded end region of the ferrule 724b is coupled to a head region of a user in order to transmit signals through the fiber optic cable 755b between a user and a light emission or light detection subsystem. FIG. 7C is a schematic of a relaxed state (top) and a compressed state (bottom) of the embodiment shown in FIG. 7B. The relaxed state (top) shows a first body portion 761c and a second body portion 762c retained in a port 753c. A channel 754c retains a relaxed spring 760c and a fiber optic cable 755c enclosed by a lid 759c. The second body portion 762c is biased outward when a force is not exerted on the second body portion 762c. The lid 759c, the first body portion 761c, and the second body portion 762c are configured to interact such that when the system is in a relaxed state, the components are retained by the port. In the compressed state (bottom) of FIG. 7C, the second body portion 762c interacts with a head region of a user and the spring 760c is compressed. The channel 754c allows the second body portion 762c to have a translational range of motion relative to the first body portion 761c. In the compressed state, the second body portion 762c is recessed further in the port 753c than in the relaxed state. The second body portion 762c may be flush with the first body portion 761c in a compressed state. The first body portion 761c is retained in position by the lid 759c. As such, the spring 760c and the second body portion 762c can be compressed toward the lid 759c and the first body portion 761c while the lid 759c and the first body portion 761c remain in position. A normal force is exerted between the second body portion 762c and the head of the user, and the normal force compresses the spring 760c. The spring can have a spring constant ranging from 200 to 5000 N/m, depending on other design components of the system (e.g., size of the channel, number of ferrules, material of the cap).
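The corresponding per-ferrule contact force follows directly from the spring constant; the sketch below is illustrative only, with the compression distance assumed rather than specified:

    # Hypothetical estimate of the ferrule contact force from the compressed
    # spring. The spring-constant range comes from the text above; the
    # compression distance is an assumption for illustration.
    k = 1000.0      # spring constant, N/m (within the 200-5000 N/m range)
    x = 0.003       # assumed 3 mm of spring compression when worn
    force = k * x   # Hooke's law: F = k * x
    print(f"~{force:.1f} N pressing the ferrule tip against the head")  # ~3.0 N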
The embodiments described in FIGS. 4-7C are not exclusive or exhaustive of an interface for a head region of a user. Alternative embodiments and combinations of embodiments can exist for a device configured to transmit optical signals between a user and a subsystem. For example, the array of ferrules may be a rectangular configuration and ferrules can have a cylindrical body that does not protrude from a surface of a cap. The system may also lack some components described above in relation to FIGS. 4-7C. For instance, the system may not have a retainer. FIGS. 4-7C are provided for illustrative purposes and one skilled in the art would recognize other design possibilities. 4.3 System—Detector Subsystem FIG. 8A is a schematic of a detector subsystem 830, in accordance with one or more embodiments. The detector subsystem 830 includes one or more sensors 832. The sensor(s) 832 function to convert light to signals (e.g., electrical signals) that can be received and processed to decode brain activity of a user. The sensor(s) 832 include complementary metal-oxide-semiconductor (CMOS) architecture. However, in alternative embodiments, the sensor(s) 832 can include N-type metal-oxide-semiconductor (NMOS) architecture. In still alternative embodiments, the sensor(s) 832 can include charge-coupled device (CCD) architecture, quanta image sensor (QIS) architecture, and/or any other suitable architecture for converting received light to electrical signals. In relation to CMOS architecture, the sensor(s) 832 of the detector subsystem 830 can operate in current mode or in voltage mode. To condition signals, the sensor(s) 832 can be coupled to amplifiers, attenuators, and/or any other suitable signal conditioning hardware, at the sensor level or at the pixel level. As shown in FIGS. 8A and 8B, the pixels 833 of the sensor(s) 832 are arranged into grouped pixel units, including grouped pixel unit 834. The grouped pixel units are distributed linearly along an axis 835, which improves readout efficiency to enable real-time or near real-time transmission and decoding of captured signals. However, the grouped pixel units can alternatively be distributed non-linearly (e.g., along a curve or an arc), can be distributed in a multidimensional array, or can be distributed in any other suitable manner in relation to readout from the sensor(s) 832. The pixels 833 of a grouped pixel unit 834 are arranged in a square array, where the square array can have equal numbers of pixels along its width and height. In examples, a grouped pixel unit can have 2-100 pixels along its length and height. The size of the array of pixels corresponding to a grouped pixel unit 834 can be configured based on the size of each individual pixel, as well as morphological factors of the set of optical fibers 828 corresponding to the set of grouped pixel units 834, as described in further detail below. In alternative embodiments, however, a grouped pixel unit 834 can have pixels arranged in a polygonal array, ellipsoidal array, or in any other suitable manner (e.g., an amorphous array). Each grouped pixel unit can be identically configured in relation to distribution of pixels in an array; however, in alternative embodiments, one or more grouped pixel units of the set of grouped pixel units can have an array structure that is different from others in the set of grouped pixel units. In relation to individual pixels, the pixels preferably have a fill factor close to or equal to 100%, such that most or all of the pixel area is useable for light collection. As such, each pixel preferably has little-to-no buffer region separating it from adjacent pixels. Each pixel also has physical characteristics that contribute to increased dynamic range, where physical characteristics include efficiency (e.g., quantum efficiency)-related parameters, capacitance-related parameters (e.g., well capacity), surface irregularity-related parameters (e.g., dark current producing irregularities), and/or any other suitable physical characteristics. The grouped pixel units 834 are arranged in one or more linear arrays, where the linear arrays can include any suitable number of grouped pixel units. In examples, a linear array of grouped pixel units can have 2-100 grouped pixel units along its length. The size of the array of grouped pixel units can be configured based on overall sensor size limitations, in relation to providing a system in a portable and wearable form factor. In alternative embodiments, however, the array(s) of grouped pixel units 834 can be arranged in a polygonal array, ellipsoidal array, or in any other suitable manner (e.g., an amorphous array).
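To make the grouped-pixel organization concrete, the following sketch bins a raw sensor frame into linearly arranged grouped pixel units, one per optical fiber, for line readout. It is a hypothetical illustration; the array sizes and the aggregation by summation are assumptions, not specifics of this description:

    # Hypothetical sketch: binning sensor pixels into grouped pixel units
    # distributed linearly along one axis, one unit per optical fiber.
    # The dimensions here are assumed for illustration.
    import numpy as np

    PIXELS_PER_UNIT = 16   # e.g., a 16x16 square array of pixels per unit
    NUM_UNITS = 32         # grouped pixel units placed side by side on one axis

    # A raw frame with NUM_UNITS grouped pixel units adjacent along the row axis.
    frame = np.random.rand(PIXELS_PER_UNIT, PIXELS_PER_UNIT * NUM_UNITS)

    def read_grouped_units(frame):
        """Read each grouped pixel unit sequentially along the scan axis,
        returning one aggregate intensity per unit (line-scan style)."""
        units = np.split(frame, NUM_UNITS, axis=1)
        return np.array([unit.sum() for unit in units])

    line = read_grouped_units(frame)  # one reading per fiber/grouped pixel unit

Binning in this way yields one reading per fiber while keeping readout strictly sequential along the scan axis, which is what makes the fast line scanning described below practical.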
Each array of grouped pixel units can be identically configured in relation to distribution of grouped pixel units; however, in alternative embodiments, one or more arrays of grouped pixel units can have an array structure that is different from another array of grouped pixel units. In relation to grouped pixel units, the grouped pixel units preferably have a fill factor close to or equal to 100%, such that most or all of the grouped pixel area is useable for light collection, and no signals associated with light signal decay at a grouped pixel unit edge and/or signal cross-talk due to overlap between adjacent grouped pixel units are discarded. As such, each grouped pixel unit preferably omits a buffer region between grouped pixel units. FIG. 8B depicts a schematic (top) of a portion of a detector subsystem and a plan view (bottom) of the portion of the detector subsystem. As shown in FIG. 8B, each grouped pixel unit corresponds to an optical fiber 828 of a set of optical fibers for light transmission. The optical fibers, as shown in FIG. 8B, include first end regions 828a mapped to a multidimensional region of interest (e.g., at a head region of a user) and second end regions 828b aligned linearly with the set of grouped pixel units. This configuration allows light from a multidimensional region of interest to be mapped to a linear array of grouped pixel units 834, in order to facilitate fast readout by the pixel scanner described below. In the embodiment shown in FIG. 8B, the fibers are mapped to the grouped pixel units in a one-to-one manner; however, in alternative embodiments, the fibers can be mapped to the grouped pixel units in any other suitable manner (e.g., not one-to-one). Each fiber includes an optically transparent glass core. Each fiber can also include sheathing layers including one or more of: a reflective layer (e.g., to provide total internal reflection of light from a portion of the region of interest to the corresponding grouped pixel unit), a buffer layer (e.g., to protect the fiber), and any other suitable layer. Additionally or alternatively, each fiber can be separated from adjacent fibers by a material (e.g., epoxy) that prevents cross-transmission of light between fibers. As such, each fiber can be isolated (e.g., optically, thermally, etc.) from other fibers. In morphology, each fiber 828 has a length that minimizes distance-related signal loss factors between the region of interest and the corresponding sensor region. Each fiber can have a rectangular/square cross section to enable compact bundling of fibers. In a specific example, each fiber has a cross sectional width of 400 μm; however, in alternative embodiments, the fibers can have a circular cross section or any other suitable cross section, with any other suitable dimensions. As shown in FIG. 8A, the set of fibers can be coupled to a substrate 838, where the substrate 838 is positioned between the set of fibers and the grouped pixel units of the sensor 832. The second end regions 828b of the set of fibers can be bonded (e.g., thermally bonded) to the substrate 838 or otherwise coupled to the substrate 838 in any other suitable manner. The substrate 838 is composed of an optically transparent material with high transmittance. The substrate 838 is also composed of a rigid material.
The substrate 838 can also have a thin cross section along light transmission paths between the set of optical fibers and the sensor 832, in order to position the second fiber end regions 828b as close to the grouped pixel units 834 of the sensor 832 as possible, to minimize divergence of light transmitted through the ends of the fibers to the sensor 832, and/or to prevent reflections from opposing surfaces (e.g., foreplane surfaces, backplane surfaces) of the substrate 838. As such, the substrate 838 functions to provide suitable optical properties for light transmission from the set of fibers, retain positions of the set of fibers relative to the grouped pixel units 834 of the sensor 832, and can additionally function to mechanically support the set of fibers and/or sensor 832 (e.g., in relation to attenuating or removing transmission of forces between the set of fibers and the pixels 833 or grouped pixel units 834 of the sensor 832). In a specific example, the substrate 838 is an alkali-free flat glass that has a thickness of 0.03 mm along a direction of light transmission between the set of fibers and the sensor 832. The substrate 838 is polished to have low roughness, has a coefficient of thermal expansion of 2.6E-6/° C., and is usable in applications involving temperatures of up to 600° C. However, in alternative embodiments, the substrate 838 can be composed of any other suitable material, have any other suitable optical, thermal, or physical properties, have any other suitable thickness, have any other suitable rigidity, and be processed in any other suitable manner. Material and morphological features of the fibers 828 and/or the substrate 838 cooperate to control and reduce light divergence of light incident on the sensor 832. The fibers 828 and/or the substrate 838 can additionally or alternatively support a multi-wavelength (e.g., dual wavelength) light transmission mode of the system in order to control light divergence. As shown in FIG. 8A, the detector subsystem 830 also includes a pixel scanner 839. The pixel scanner 839 is a line scanner that reads electrical signals produced by the grouped pixel units 834 of the sensor 832 in order to generate neural stream data that can be processed by the computing system described above. The pixel scanner 839 can read each grouped pixel unit of a row of grouped pixel units sequentially and/or linearly, in order to produce fast readout speeds. As such, the pixel scanner 839 can be specified with parameters related to speed (e.g., in terms of frame rate), power consumption, or any other suitable parameter. In a specific example, the pixel scanner 839 can read a row of grouped pixel units within 10 μs at a frame rate of 2500 Hz; however, alternative embodiments of the pixel scanner 839 can read rows of grouped pixel units with any other suitable speed. The detector subsystem 830 is thus operable in a line scanning mode. In relation to the line scanning mode, one or more grouped pixel units can be saturated by incident light that travels from the region of interest, through the set of fibers 828, and to the sensor 832.
Additionally or alternatively, in relation to the line scanning mode, edge regions of a first grouped pixel unit can receive light signals associated with a second grouped pixel unit (e.g., a grouped pixel unit adjacent to the first grouped pixel unit), where crosstalk associated with overlapping signals from different grouped pixel units can be processed to isolate signal features specific to a grouped pixel unit using signal processing methods described below. As such, as described in more detail in relation to the methods of Sections 4.3.1 and 4 below, the detector subsystem 830 can, during characterization of the region of interest, have a configuration where a central region of a first grouped pixel unit of the set of grouped pixel units is saturated by light from one of the set of optical fibers. Operating in a saturated mode can significantly increase dynamic range of the detector subsystem 830. Additionally or alternatively, the detector subsystem 830 can, during characterization of the region of interest, have a configuration where an unsaturated edge region of the first grouped pixel unit receives light associated with a second grouped pixel unit adjacent to the first grouped pixel unit, and the pixel scanner 839 transmits light-derived signals from the central region and the unsaturated edge region of the first grouped pixel unit for characterization of the region of interest. Thus, the system can operate with saturated grouped pixel units and/or crosstalk across grouped pixel units while still allowing extraction and decoding of signals that are characteristic of brain activity of a user who is interacting with the system. 4.3.1 Signal Generation Methods of Detector Subsystem As described above, FIG. 9 depicts a flow chart of a method 900 for generating and processing optical signals, in accordance with one or more embodiments. The method 900 functions to enable decoding of optical signal-derived data from a region of interest, using high dynamic range sensors and individually addressable light sources of a compact system. As shown in FIG. 9, the detector subsystem, through first ends of a set of optical fibers, receives 910 light from a multidimensional region of interest, which functions to provide controlled transmission of light that can be received by a sensor for characterization of the region of interest. The light transmitted from the multidimensional region of interest can include light in the non-visible spectrum and/or light in the visible spectrum, can include a single wavelength of light or multiple wavelengths of light, can include naturally encoded information (e.g., due to physiologically induced phenomena), and/or can include synthetically encoded information (e.g., due to polarization or other light manipulating optics positioned along a light transmission pathway). The transmitted light can be associated with any energy associated factors (e.g., power, duration of transmission, intensity), waveform factors (e.g., pulsed, non-pulsed, waveform shape), temporal factors (e.g., frequency of signal transmission), and/or any other suitable factors. In relation to the interface described above, the multidimensional region of interest is a head region of the user, where noninvasively-acquired light signals can be used to decode brain activity of the user through the head region.
The head region can include one or more of: a frontal region, a parietal region, a temporal region, an occipital region, an auricular region, an orbital region, a nasal region, or an infraorbital region. Additionally or alternatively, the head region can include other cranial or facial regions including one or more of: an oral region, a parotid region, a buccal region, or any other suitable region of the head of the user. In alternative embodiments, the multidimensional region of interest can be associated with another anatomical region of the user. Additionally or alternatively, the multidimensional region can be associated with a surface or volume of material of another object. The optical fibers can transmit light derived from light that has originated at the set of light emitters and interacted with the multidimensional region of interest (e.g., the head of the user). The transmitted light can thus be associated with light sourced from individually addressable emitters, where light output from the emitters can be timed according to pixel scanning of the detector according to methods described below. The optical fibers can additionally or alternatively transmit light derived from ambient light from the environment of the user, where the ambient light has interacted with the multidimensional region of interest. Light transmission through the set of optical fibers can, however, come from any other suitable source. As shown in FIG. 9, the optical fibers transmit received light 920 to an array of pixels of the detector subsystem. The array of pixels includes CMOS pixels, but can alternatively include CCD pixels or pixels having any other suitable sensor architecture. The pixels can be arranged linearly along an axis as a set of grouped pixel units, where the arrangement of pixels as grouped pixel units is described above. However, the array of pixels can alternatively be configured in another manner. Light is transmitted through second ends of the set of optical fibers toward the sensor for generation of electrical signals that are processed to decode information from the region of interest. The second ends can be positioned as closely as possible to the sensor to minimize adverse effects of light divergence due to distance between the fiber ends and the sensor. In relation to the sensor elements described above, the detector subsystem can transmit light through second ends of the set of optical fibers, through a thin glass substrate, and toward the grouped pixel units of the sensor, in order to reduce light divergence effects and in order to provide robust positioning and alignment of the fibers relative to the grouped pixel units of the sensor. As shown in FIG. 9, the detector subsystem can also saturate 924 a central region of one or more grouped pixel units of the set of grouped pixel units, by allowing transmission of high power/high intensity light to the one or more grouped pixel units. In the example shown in FIG. 10A, the profile (e.g., intensity profile) of light transmitted to a first grouped pixel unit can have a top hat shape in an unsaturated scenario (FIG. 10A, left) and a truncated top hat shape in a saturated scenario (FIG. 10A, right).
The sensor of the detector subsystem can thus still operate in a mode where one or more grouped pixel units are saturated, which provides a significant increase in dynamic range of the sensor, and where features of the heel region and/or saturated central region are extracted during signal processing to decode information from the region of interest, as described in more detail below. As shown in FIG. 9, the detector subsystem can also allow 926 light that is associated with a second grouped pixel unit to be received at the first grouped pixel unit, where cross-talk across the grouped pixel units can be deconstructed or isolated due to determined characteristics of the signals received at the grouped pixel units. For instance, if the heel region characteristics are known for a saturated or unsaturated intensity profile for one grouped pixel unit, relevant portions of the unsaturated intensity profile can be removed from another grouped pixel unit associated with crosstalk, in order to isolate the other grouped pixel unit's features. FIG. 10B shows an example where the heel regions (and/or other regions) of light intensity received overlap between two grouped pixel units. The first grouped pixel unit and the second grouped pixel unit associated with the overlap can be adjacent to each other or can alternatively be not adjacent to each other. Furthermore, the overlap can be associated with unsaturated edge regions of the grouped pixel units involved in the overlap, or can be associated with any other suitable region of a grouped pixel unit. After light is transmitted 920 to the grouped pixel units, the detector subsystem can generate 930 light-derived signals for characterization of the region of interest, upon scanning the set of grouped pixel units. The scanning operation(s) of the detector subsystem provide fast readout of the array of grouped pixel units, in order to facilitate rapid processing and decoding of information (e.g., neural stream data) derived from incident light on the grouped pixel units. In generating 930 light-derived signals, the pixel scanner reads electrical signals produced by the grouped pixel units of the sensor. The pixel scanner reads each grouped pixel unit of a row of grouped pixel units sequentially in order to produce fast readout speeds. Furthermore, the pixel scanner reads the grouped pixel units in a linear manner. However, in alternative embodiments, the pixel scanner can read grouped pixel units in any other suitable order or along any other suitable path. The scanning operation can read full frames of signals for each grouped pixel unit and/or can read less than full frames of signals (e.g., a central line of signals along the scan path) for one or more of the grouped pixel units. The scanning operation can be specified with parameters related to speed (e.g., in terms of frame rate), power consumption, or any other suitable parameter. In a specific example, the pixel scanner reads a row of grouped pixel units within 10 μs at a frame rate of 2500 Hz. However, the pixel scanner can alternatively read grouped pixel units with any other suitable frame rate (e.g., greater than 100 Hz, greater than 500 Hz, greater than 1000 Hz, greater than 2000 Hz, greater than 3000 Hz, etc.). As described above, the detector subsystem, with the pixel scanner, can be configured to coordinate scanning operation with light output through individually addressable light emitters (e.g., light emitters of the VCSEL array, LED light emitters, etc.).
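The crosstalk handling just described can be pictured as a subtraction of a characterized heel profile from a neighboring unit's measurement, and the saturation features as simple geometry of the truncated profile. The sketch below is schematic only; the profile values and the subtraction model are illustrative assumptions, not the signal processing methods referenced above:

    # Hypothetical sketch: isolating one grouped pixel unit's signal from a
    # neighbor's known heel (decay) profile, and measuring the width of a
    # saturated central region as a proxy for total power. Values are
    # illustrative assumptions.
    import numpy as np

    def isolate_unit(measured, neighbor_heel):
        """Remove a neighbor's characterized heel contribution from this
        unit's measured intensity profile."""
        return measured - neighbor_heel

    def saturated_width(profile, full_scale=1.0):
        """Count pixels at full scale: the saturated central region's width."""
        return int((profile >= full_scale).sum())

    measured = np.array([0.25, 0.6, 1.0, 1.0, 1.0, 0.5, 0.2])  # truncated top hat
    neighbor_heel = np.array([0.15, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
    clean = isolate_unit(measured, neighbor_heel)
    print(saturated_width(clean))  # width of the saturated plateau (here, 3)

Coordinating emitter activation with detector scanning, as noted at the end of the preceding paragraph, bounds how much such overlap any single readout contains.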
As such, to generate 930 the light-derived signals, a portion (e.g., a first subset) of the light emitters is activated and the detector subsystem scans the grouped pixel units in coordination with activation of that portion of light emitters. Then, a second portion (e.g., a subset the same as or different from the first subset) of the light emitters is activated and the detector subsystem scans the grouped pixel units in coordination with activation of the second portion of light emitters. In this example, portions of light emitters can be activated in a manner to minimize crosstalk/interference due to light emission through fibers toward adjacent grouped pixel units, or to serve any other suitable purpose. Alternatively, the first portion and the second portion of light emitters activated can be associated with targeting different portions of the region of interest (and not necessarily with minimizing crosstalk/interference). However, in alternative embodiments, coordination of timing between light emission and scanning by the detector system can be conducted in any other suitable manner. In relation to embodiments shown in FIG. 10B, generating light-derived signals can facilitate extraction of features from saturated grouped pixel units, where features can be extracted from saturated central regions and/or unsaturated edge regions of the grouped pixel units in order to increase dynamic range of the sensor outputs by several orders of magnitude relative to unsaturated sensor configurations. Features related to saturated portions can include positions of boundaries between a saturated central region of a grouped pixel unit and unsaturated edge regions of the grouped pixel unit (indicative of total power), diameter of a saturated central region of a grouped pixel unit, projected area of a saturated central region of a grouped pixel unit, or any other suitable shape-related features associated with the saturated central region. Features related to unsaturated edge regions of the grouped pixel units can include positions of boundaries between unsaturated edge regions of the grouped pixel unit, slope-related features (e.g., rates of decay) of a heel portion of an unsaturated edge region, features related to integrated areas under an intensity curve corresponding to unsaturated edge regions, or any other suitable shape-related features associated with the unsaturated edge region. While features associated with intensity are described above, features that can be derived from the generated light signals can include features of any other suitable light-related parameter. Furthermore, in relation to the region of interest being at a head region of the user, the features of interest can be decoded to distinguish different types of brain activity of the user, which can be used as control inputs for controlling operation states of other systems (e.g., virtual assistants, interactions with an online social network, smart home devices, etc.), as described in more detail in relation to FIGS. 14A-14E below.
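The saturated-region and heel-region features enumerated above lend themselves to a short sketch. This assumes a one-dimensional intensity profile per grouped pixel unit and a known saturation level; the function name, the right-hand heel convention, and the feature dictionary are illustrative choices, not details from the text.

```python
import numpy as np

def profile_features(profile, saturation_level):
    """Extract shape features from a possibly saturated intensity profile:
    saturation boundaries/diameter, heel-region decay, and edge area."""
    profile = np.asarray(profile, dtype=float)
    sat = np.flatnonzero(profile >= saturation_level)
    features = {}
    if sat.size:
        features["sat_boundaries"] = (int(sat[0]), int(sat[-1]))
        features["sat_diameter"] = int(sat[-1] - sat[0] + 1)
        heel = profile[sat[-1] + 1:]   # right-hand unsaturated edge region
    else:
        heel = profile
    if heel.size >= 2:
        features["heel_decay"] = float(np.mean(np.diff(heel)))  # slope feature
    features["heel_area"] = float(heel.sum())  # integrated edge intensity
    return features
```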
As shown in FIGS. 11A and 11B, generating light-derived signals can also include generating 1135 light-derived signals associated with multiple light exposure levels. FIG. 11A depicts an example where the detector subsystem exposes grouped pixel units to light with a short exposure setting and the pixel scanner reads the grouped pixel units to extract features associated with the short exposure setting. As shown in FIG. 11A, the detector subsystem also exposes grouped pixel units to light with a long exposure setting, and the pixel scanner reads the grouped pixel units to extract features associated with the long exposure setting. FIG. 11B shows an example where the detector subsystem exposes grouped pixel units to light with different power settings (e.g., a low power setting, a high power setting) and scans grouped pixel units with different exposure settings, where the scans can be full frame or less than full frame. The total time associated with this operation is equal to the total time of exposures added to the total time for each scan, which is on the order of 1.82 ms per full frame scan. Features can thus be rapidly extracted and decoded from scans associated with multiple power settings, multiple exposure settings, full frame scans, and/or less-than-full frame scans, in order to characterize the region of interest. 4.4 System—Other Sensors As shown in FIG. 1A, the system can include additional sensors 140a for detecting user behaviors and/or other biometric signals that can provide supplemental data. FIG. 12A depicts a schematic of a camera subsystem 1241, in accordance with one or more embodiments, and FIG. 12B depicts another schematic of the camera subsystem 1241 shown in FIG. 12A. As shown in FIG. 12A, the additional sensors can include one or more cameras 1242 of a camera subsystem 1241, which function to generate image data of the user and/or of an environment of the user. The cameras 1242 utilize light of the visible spectrum, but can additionally or alternatively include sensors that utilize any other portion of the electromagnetic spectrum (e.g., infrared spectrum). The camera subsystem 1241 can use image sensors of the camera(s) 1242 to capture image data and/or video data. In relation to image data and video data, the camera subsystem 1241 can be configured to capture data with sufficiently high resolution to capture features of interest of the user (e.g., pupil position and orientation, facial features, body movements, etc.) and/or of the environment of the user (e.g., states of objects in the environment of the user), where applications of user tracking and environmental tracking are described in more detail below. In relation to the wearable interface 120 described above, the camera(s) 1242 of the camera subsystem 1241 can be coupled to the wearable interface (and electronics subsystem 1250) shown in FIGS. 12A and 12B, in a manner that orients the camera(s) with a field of view capturing the face (or a portion of the face) of the user, and/or with a field of view capturing an environment of the user (e.g., from a point of view of the user). As such, the camera subsystem 1241 can include a first camera 1242a coupled to (e.g., mounted to, electromechanically coupled to, etc.) a portion of the wearable interface and in an inward-facing orientation to provide a field of view capturing the face of the user, in order to generate eye tracking data (e.g., in relation to coordinates of objects the user looks at, in relation to dwell time) of the user and/or facial expressions of the user. The camera subsystem 1241 can also include a second camera 1242b coupled to (e.g., mounted to, electromechanically coupled to, etc.) a portion of the wearable interface and in an outward-facing orientation to provide a field of view capturing the environment of the user, in order to generate image data of objects or environments with which the user is interacting.
The camera subsystem 1241 can, however, have more than two cameras coupled to the wearable interface or other portions of the system in another orientation. Additionally or alternatively, the camera(s) can be fixed in position, or can be actuated to adjust field of view. As indicated above, the camera subsystem 1241 can cooperate with other portions of the system described above, in applications where capturing interactions of the user with the environment of the user can be combined with decoded brain activity of the user in a useful manner. In one such application, the system can monitor, by way of cameras 1242 of the camera subsystem 1241, objects that the user is interacting with in his/her environment by generating and analyzing images of eye motion of the user, head motion of the user, gaze of the user, and/or line-of-sight to objects in the user's environment, decode an intention of the user from brain activity of the user acquired through the detector subsystem described above, and apply the intention as an input to control an operational state of the object. Examples of objects can include electronic content provided at a display (e.g., of a computer, of a wearable device, of an artificial reality system, of a virtual reality system, of an augmented reality system, etc.), electronic content provided at an audio output device, electronic content provided at a haptic feedback device, connected devices (e.g., temperature control devices, light control devices, speakers, etc.), or other objects. Examples of intentions can include desired adjustments to operational states of devices (e.g., turn off device, turn on device, adjust device brightness, adjust device output volume, etc.), desired interactions with electronically-provided content (e.g., select object, select menu item, navigate to another web page, scroll up, scroll down, close window, etc.), desired interactions with a virtual assistant, or any other intentions. As such, in one specific example, the camera subsystem 1241, in combination with other system outputs, can cooperate to determine that the user is looking at a particular connected light in the user's bedroom, decode a brain activity signal that indicates that the user wants to dim the light, and generate control instructions for dimming the light, all without the user speaking a command or adjusting dimness of the light using a physically-manipulated controller. In another specific example, the camera subsystem 1241, in combination with other system outputs, can cooperate to determine that the user is looking at a selectable button for purchasing an item within an online marketplace, decode a brain activity signal that indicates that the user wants to “click the button”, and generate control instructions for selecting the button to purchase the item, all without the user speaking a command or physically clicking the button (e.g., with a mouse). In relation to the additional sensors 140a shown in FIG. 1A, the system can additionally or alternatively include other sensors and/or biometric sensors for sensing aspects of the user, the user's physiology, and/or the environment of the user. 
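Before turning to the additional sensors, the two gaze-plus-intent examples above reduce to a small dispatch table. The sketch below is purely illustrative: the object names, intent labels, and instruction payloads are hypothetical stand-ins for whatever the camera subsystem and decoder actually emit.

```python
def control_instruction(gazed_object, decoded_intent):
    """Combine the camera subsystem's gaze target with the decoded
    brain-activity intent to produce a control instruction."""
    table = {
        ("bedroom_light", "dim"):
            {"device": "bedroom_light", "op": "set_brightness", "value": 0.3},
        ("buy_button", "click"):
            {"device": "ui", "op": "select", "target": "buy_button"},
    }
    return table.get((gazed_object, decoded_intent))  # None if unmapped

print(control_instruction("bedroom_light", "dim"))
```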
Other sensors can include audio sensors (e.g., microphones), motion/orientation sensors (e.g., accelerometers, gyroscopes, inertial measurement units, etc.), respiration sensors (e.g., plethysmography sensors), cardiovascular sensors (e.g., electrical signal-based cardiovascular sensors, radar-based cardiovascular sensors, force-based cardiovascular sensors, etc.), temperature sensors for monitoring environmental temperature (e.g., ambient temperature) and/or body temperature of the user, other brain activity sensors (e.g., electroencephalography sensors), other electrophysiology sensors (e.g., skin conductance sensors), and/or any other suitable sensors. Outputs of the additional sensors 140a can be processed with outputs of other system components described above, in order to improve applications where co-processing brain activity information with other sensor-derived information would be beneficial. 4.5 System—Other Electronics The system can include additional electronics coupled to one or more of the embodiments of the light source subsystem, detector subsystem, additional sensors, network, and/or wearable interface, as described above. For instance, as shown in FIG. 1A, the system can include a power component 150a that provides power and/or manages power provision to one or more other system components. The power component 150a can include a battery (e.g., rechargeable battery, non-rechargeable battery) electrically coupled to a power management system that maintains desired circuit voltages and/or current draw appropriate for different system components. The power component 150a can be retained within a housing 105a associated with the wearable interface and coupled to the light source subsystem and/or the detector subsystem. As described in relation to other system components below, the housing 105a can house one or more of: the light source subsystem 110a, the detector subsystem 132a, a power component 150a, a computing component 160a, a data link 162a, and additional sensors 140a. The housing 105a can also house at least a portion of the interface 120a that is head-mountable for positioning signal transmission components at the head region of a user. The housing 105a can be head-mounted or can be coupled to the user in another manner. The housing can be composed of a polymer material and/or any other suitable materials. As shown in FIG. 1A, the system can also include a computing component 160a that functions to coordinate light transmission from the light source subsystem and/or operation states of the detector subsystem (e.g., in relation to emission from the light source subsystem). The computing component 160a can thus include architecture storing instructions in non-transitory computer readable media for implementing portions of methods described, controlling operation states of the light source subsystem, the detector subsystem, and/or additional sensors, monitoring states of components coupled to the computing component 160a, storing data in memory, coordinating data transfer (e.g., in relation to the data link described below), and/or performing any other suitable computing function of the system. The computing component 160a can additionally or alternatively include signal conditioning elements (e.g., amplifiers, filters, analog-to-digital converters, digital-to-analog converters, etc.) for processing signal outputs of sensors of the system. As shown in FIG.
1A, the system can also include a data link 162a coupled to the computing component 160a, for handling data transfer between electronics of the wearable system components and the network 170a. The data link 162a can provide a wired and/or wireless (e.g., WiFi, Bluetooth LE, etc.) interface with the network or other external systems. 5. Method—Neural Decoding Process with Co-Learning FIG. 13 depicts a schematic of an embodiment of a system with computing components for implementing a neural decoding process. The system shown in FIG. 13 is an embodiment of the system shown in FIG. 1B, and includes a light source subsystem 1310, an interface 1320 transmitting light from the light source subsystem 1310 to a head region of a user, and a detector subsystem 1330 coupled to the interface 1320 and configured to receive light signals from the head region of the user. The light source subsystem 1310 and the detector subsystem 1330 are coupled to a power component 1350 and a computing component 1360, which processes and decodes neural stream signals for delivery of feedback to a user as a closed-loop system. The system shown in FIG. 13 can be used to implement the methods described below, in relation to receiving and processing neural signal streams including, at least partially, signals derived from light transmitted from the head region of the user. In particular, the system can apply diffuse optical tomography (DOT) to generate blood oxygen level dependent (BOLD) signals, which can be decoded using a trained neural decoding model to determine user actions, as described below. FIG. 14A depicts a flow chart of a method 1400 for neural decoding, in accordance with one or more embodiments. FIG. 14B depicts a flow diagram of an embodiment of the method for neural decoding shown in FIG. 14A. As shown in FIGS. 14A and 14B, the system (e.g., light transmission, light detection, and computing components) generates 1410 a neural data stream capturing brain activity of the user in response to detecting a set of light signals from a head region of the user as the user interacts with an object in an environment. The system extracts 1420 a predicted user action upon processing the neural data stream with a neural decoding model, where the predicted user action can be actual (e.g., actual speech) or imagined (e.g., thought), as described in more detail below. The system then provides 1430 a feedback stimulus to the user based upon the predicted user action, and generates 1440 an updated neural decoding model based upon a response of the user to the feedback stimulus. The system also implements one or more co-learning processes 1440, 1450 for improvement of the neural decoding model and/or behavior of the user. As such, the method 1400 can provide a closed loop process whereby the neural decoding model is updated and trained as the user interacts with content or other stimuli, and provides additional light-derived signals that capture brain activity. The method 1400 functions to rapidly (e.g., in real time or near real time) decode light-derived signals to extract predicted user actions or intents (e.g., commands) in relation to interactions with objects (e.g., virtual objects, physical objects), such that the user can manipulate the objects or otherwise receive assistance without manually interacting with an input device (e.g., touch input device, audio input device, etc.).
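The numbered steps of method 1400 form a closed loop, which can be summarized in a control-loop skeleton. The sensors/decoder/interface objects below are hypothetical interfaces sketched only to show the ordering of the steps; nothing in the text fixes these names or signatures.

```python
def run_method_1400(sensors, decoder, interface):
    """Closed-loop skeleton: generate neural data (1410), extract a
    predicted user action (1420), provide feedback (1430), and update
    the neural decoding model from the response (1440/1450)."""
    while interface.session_active():
        stream = sensors.read_neural_stream()        # 1410
        action = decoder.predict(stream)             # 1420
        response = interface.give_feedback(action)   # 1430
        decoder.update(stream, action, response)     # 1440/1450 co-learning
```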
The method 1400 thus provides a neural decoding process with the neural stream signal as an input, and provides feedback to the user, where the feedback is used to train the neural decoding algorithm and user behavior. The neural signals can be blood oxygenation level dependent (BOLD) signals associated with activation of different articulators of the motor cortex, and signals can characterize both actual and imagined motor cortex-related behaviors. With training of the decoding algorithm, rapid calibration of the system for new users can additionally be achieved. 5.1 Method—Generating Data As shown in FIGS. 14A and 14B, the system (e.g., light transmission, light detection, and computing components) generates 1410 a neural data stream capturing brain activity of the user as the user interacts with an object, which functions to generate source data that can be processed with the neural decoding model to decode cognition through a non-traditional method. As noted above, the neural data stream is derived from input light signals that are provided to the user's head and output light signals that are captured after passing through the user's head, where the output signals carry information about the level of oxygen present in blood of the user, associated with different regions. As such, the signals associated with the neural data stream are a type of blood oxygen-level dependent (BOLD) signal that carries hemodynamic response information. In use, signals generated can be evaluated for reliability, such that only reliable signals are passed by the computing subsystem through downstream processing steps to generate predicted user actions. Reliability can be evaluated based upon consistency in signal characteristics (e.g., variances around a mean signal characteristic). As described above in relation to the detector subsystem, in generating the neural data stream, the system can transform input signals from a detector-associated space (e.g., in relation to fiber optics coupled to an array of detector pixels) to a brain region-associated space (e.g., in relation to brain regions associated with the input signals). As also described above in relation to embodiments of the detector subsystem, signals of the neural data stream can be derived from unsaturated and saturated grouped pixel units that receive light from distinct head regions, where, for saturated grouped pixel units, features can be extracted from saturated central regions and/or unsaturated edge regions. The signals derived from saturated portions can include information related to positions of boundaries between a saturated central region of a grouped pixel unit and unsaturated edge regions of the grouped pixel unit (indicative of total power), diameter of a saturated central region of a grouped pixel unit, projected area of a saturated central region of a grouped pixel unit, or any other suitable shape-related features associated with the saturated central region. Features related to unsaturated edge regions of the grouped pixel units can include positions of boundaries between unsaturated edge regions of the grouped pixel unit, slope-related features (e.g., rates of decay) of a heel portion of an unsaturated edge region, features related to integrated areas under an intensity curve corresponding to unsaturated edge regions, or any other suitable shape-related features associated with the unsaturated edge region.
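As noted above, only reliable signals are passed downstream. A minimal sketch of such a gate follows; the coefficient-of-variation statistic and its threshold are illustrative assumptions, since the text does not fix a specific consistency measure.

```python
import numpy as np

def is_reliable(window, max_cv=0.05):
    """Pass a signal window downstream only if its variation around the
    mean is small (coefficient of variation below a chosen threshold)."""
    mean = float(np.mean(window))
    if mean == 0.0:
        return False
    return float(np.std(window)) / abs(mean) < max_cv
```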
While signal characteristics associated with intensity are described above, signal characteristics that can be derived from the generated light signals can include features of any other suitable light-related parameter. Also described above in relation to the detector subsystem, the system can generate signals of the neural data stream associated with multiple light exposure levels (e.g., short and long exposure levels), different light power settings (e.g., a low power setting, a high power setting), and/or different light parameter settings. In relation to the detector subsystem described above, the neural data stream includes data from separate head regions (e.g., separate head regions associated with a single cortex, separate head regions associated with multiple cortices) of the user. In one embodiment, the neural data stream includes light data from different head regions, where signals from the different head regions map to a set of motor cortex articulators associated with articulation of different speech components. Articulation can be actual or imagined, such that the signals can carry information associated with actual or imagined speech. Furthermore, with repeated iterations of the method 1400, the computing subsystem can generate a template that refines mapping between the detector subsystem components and specific brain anatomy of the user(s) associated with the method 1400. Over time, aggregation and processing of large amounts of data from the user(s) can be used to provide rapid calibration of system components and system response for the user(s) and/or new users. FIG. 14C depicts a schematic of a neural data stream capturing information associated with different articulators, in relation to an embodiment of the method shown in FIG. 14A. In the schematic shown in FIG. 14C, the neural data stream contains information from different brain/head regions that map to different articulators, where the different articulators are associated with different speech components (e.g., phonemes). In more detail, the neural data stream captures data associated with a first set of light signals corresponding to a first articulator, where the first articulator is a labial articulator associated with the phonemes “p”, “b”, and “m”. The neural data stream also captures data associated with a second set of light signals corresponding to a second articulator, where the second articulator is a labiodental articulator associated with the phonemes “f” and “v”. The neural data stream also captures data associated with a third set of light signals corresponding to a third articulator, where the third articulator is a dental articulator associated with the phoneme “th”. The neural data stream also captures data associated with a fourth set of light signals corresponding to a fourth articulator, where the fourth articulator is an alveolar articulator associated with the phonemes “t”, “d”, “n”, “s”, and “z”. The neural data stream also captures data associated with a fifth set of light signals corresponding to a fifth articulator, where the fifth articulator is a postalveolar articulator associated with the phonemes “sh” and “ch”. The neural data stream also captures data associated with a sixth set of light signals corresponding to a sixth articulator, where the sixth articulator is a velar articulator associated with the phonemes “k”, “g”, and “ng”. 
The neural data stream also captures data associated with a seventh set of light signals corresponding to a seventh articulator, where the seventh articulator is a glottal articulator associated with the phoneme “h”. In alternative embodiments, however, the neural data stream can additionally or alternatively capture data associated with light signals corresponding to different articulators and/or different speech components. In relation to generation of the neural data stream, the detector subsystem can be configured with separate detector subregions associated with different articulators, in order to increase distinction between signals associated with different articulators. In relation to signals of the neural data stream shown in FIG. 14C, the signals associated with different articulators can be aggregated and processed, as described in downstream portions of the method 1400, in order to generate higher bandwidth signals from lower bandwidth carriers associated with articulator-specific signals received at different time points. As such, sequences of activation of different articulators associated with actual or imagined speech can generate low bandwidth carrier signals that can be processed, in relation to temporal and spatial factors, to produce a higher bandwidth signal that can be decoded. In other embodiments, the system can generate a neural data stream using other techniques including any one or more of: functional magnetic resonance imaging (fMRI), other forms of blood-oxygen-level dependent (BOLD) contrast imaging, near-infrared spectroscopy (NIRS), magnetoencephalography (MEG), electrocorticography (ECoG), electroencephalography (EEG), positron emission tomography, nuclear magnetic resonance (NMR) spectroscopy, and/or single-photon emission computed tomography. 5.2 Method—Extracting Predicted User Action FIG. 14D depicts a flow chart of a portion of the method for neural decoding shown in FIG. 14A. As shown in FIG. 14D, the system (e.g., computing components) extracts 1420 a predicted user action upon processing the neural data stream with a neural decoding model, which functions to transform captured input signals associated with the neural data stream into decoded information that can be used to trigger responses (e.g., by environmental objects) that benefit the user in some manner. The predicted user action, as described above, can include a prediction of actual or imagined speech of the user, where the speech is associated with commands provided by the user to manipulate one or more objects in the environment of the user. The objects can be associated with a virtual or a real environment. For instance, the object can be a player or other entity in a virtual game environment, where the predicted user action is an action that manipulates behavior (e.g., movement) of the player or entity within the virtual game environment. In another example, the object can be a digital object associated with a virtual assistant, such that the predicted user action is an action that commands the virtual assistant to perform a task (e.g., in relation to scheduling, in relation to device operation state manipulation, in relation to executing communications with entities associated with the user, etc.).
In another example, the object can be a connected object (e.g., a smart home light, smart home thermostat, smart home speaker, smart home appliance, other smart home device, etc.), such that the predicted user action is an action that affects operation of the connected object, through provision of control instructions to the connected object. In alternative embodiments, however, the predicted user action can be an action associated with different motor cortex functions (e.g., actual or imagined movement of another part of the body), different cognitive functions, or different cognitive states (e.g., affective states). The action can, however, be another suitable action. In relation to extracting predicted user actions, the system (e.g., computing components of the system) can implement a neural decoding model that decodes the probability of a predicted action based upon environment/object state and an analysis of information from the neural data stream. As such, the system, as shown in FIG. 14D, can perform decoding by determining 1422 an empirical probability of an action due to state of the object and/or environment associated with the object, and by determining 1424 a light signal-decoded probability as determined from the neural data stream. In one embodiment, the probability function can be assumed to have the shape: p(predicted user action | environment state, neural data) = softmax[α*Q(predicted user action | environment state) + β*L(predicted user action | neural data)], where p is the probability of the predicted user action, Q is determined from an analysis of probability of a given action from a set of candidate options based upon the environment or object state, L is a negative log-likelihood given by an estimator (e.g., a neural network model) that is trained based upon incoming neural data from one or more users, and the parameters α and β are free hyperparameters. In an example associated with a gaming environment, where the goal is to navigate a grid to drive a character toward a prize positioned within the grid, Q is determined by policy iteration over the available positions on the grid. In more detail, Q in this example is the negative of the distance from the position of the character to the position of the prize, where the distance can be measured by Dijkstra's algorithm or another distance-determining algorithm. In different environments (e.g., other virtual or real environments) with different objects, however, Q can be used to determine a probability of an action based on environment state with another process. In an example where the predicted user action is associated with actual or imagined speech commands, the neural decoding model can determine L upon receiving and aggregating sequences of signals associated with the speech articulators, in order to form single or multi-consonant words that represent different commands. As such, the neural decoding model can transform the neural data stream into a set of speech components mapped to a set of motor cortex articulators associated with the head region or, alternatively, can transform a sequence of activated motor cortex articulators, captured in the set of light signals of the neural data stream, into one or more phoneme chains representative of the commands. Based on the set of articulators from which signals are able to be captured by the detector subsystem, the phoneme chains can be literally translated into the commands (e.g., phoneme chains that form directional words).
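The articulator-to-phoneme mapping of FIG. 14C, together with the chain assembly just described, can be written as a lookup table. Taking the first phoneme per articulator is a simplifying assumption for this sketch; actual decoding would have to disambiguate among an articulator's phonemes.

```python
# Articulator-to-phoneme mapping enumerated above (FIG. 14C).
ARTICULATOR_PHONEMES = {
    "labial":       ["p", "b", "m"],
    "labiodental":  ["f", "v"],
    "dental":       ["th"],
    "alveolar":     ["t", "d", "n", "s", "z"],
    "postalveolar": ["sh", "ch"],
    "velar":        ["k", "g", "ng"],
    "glottal":      ["h"],
}

def phoneme_chain(articulator_sequence):
    """Collapse a timed sequence of activated articulators into a
    representative phoneme chain (first phoneme per articulator)."""
    return [ARTICULATOR_PHONEMES[a][0] for a in articulator_sequence]

print(phoneme_chain(["glottal", "dental", "labial"]))  # ['h', 'th', 'm']
```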
Additionally or alternatively, the phoneme chains can be trained representations of a spoken version of the commands. For instance, a phoneme chain of “h” “th” “m” detected in the neural data stream can translate to “hotham”, a phoneme chain of “v” “d” “k” detected in the neural data stream can translate to “vadok”, a phoneme chain of “p” “ch” “th” detected in the neural data stream can translate to “poochoth”, and a phoneme chain of “k” “v” “n” detected in the neural data stream can translate to “kevin”, where the representative words “hotham”, “vadok”, “poochoth”, and “kevin” map to different commands (e.g., commands that move a character in different directions, such as left, right, up, and down, in a virtual environment). Also shown in FIG. 14D, implementation of the neural decoding model can include modulating which components (e.g., an environment state-based component, a neural data stream-based component) govern the output of the model. In more detail, as shown in FIG. 14D, the computing subsystem can determine 1422 an environment state-based probability associated with the predicted user action and determine 1424 a light signal-decoded probability associated with the user action. The computing subsystem can also determine 1426 a value of a confidence parameter associated with the light signal-decoded probability, and compare 1427 the value of the confidence parameter to a threshold condition. Then, if the threshold condition is satisfied, the output of the neural decoding model can be based upon the light signal-decoded data (e.g., as determined using a diffuse optical tomography analysis). However, if the threshold condition is not satisfied, the output of the neural decoding model can be based upon the environment state-based probability analysis. Alternatively, the computing subsystem can implement a weighting function that weighs the confidences in each of the environment state-based analyses and the light signal-decoded analyses, and combines the weighted probability components into an aggregated output (e.g., based on convex combination, based on another combination algorithm). As such, without knowledge by the user, the computing subsystem can ensure that the neural decoding model 1420 outputs a predicted user action, even if the confidence in the analysis of the action captured in the neural data stream is low, by using an empirical probability determined from the environment state. Furthermore, the computing subsystem can implement training data from situations where the predicted user action is known, in order to increase the accuracy of the light-decoded probabilities. Then, as the confidence in the light signal-decoded probability based on analysis of the neural data stream increases, the computing subsystem can primarily output predicted user actions based on analysis of the neural data stream, as the accuracy in decoding light signals of the neural data stream increases. FIG. 14E depicts an expanded view of a portion of the process flow shown in FIG. 14D, in relation to implementation of the neural decoding model as applied to input signals derived from sensors associated with the brain computer interface. As shown, input signals can be associated with hemodynamic responses captured in light signals.
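Pulling together the probability formula and the confidence gating described above gives a compact sketch: softmax over α*Q + β*L when confidence in the light-signal-decoded term is high, falling back to the environment-state term otherwise. The threshold value and fallback rule are illustrative assumptions; the text leaves both open.

```python
import numpy as np

def softmax(x):
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def predict_action(Q, L, alpha, beta, confidence, threshold=0.8):
    """p(action | env, neural) = softmax(alpha*Q + beta*L); if confidence
    in the neural term is low, use the environment-state analysis alone."""
    if confidence >= threshold:
        return softmax(alpha * np.asarray(Q) + beta * np.asarray(L))
    return softmax(alpha * np.asarray(Q))  # environment-state fallback

# Four candidate moves; Q is negative distance-to-prize, per the grid example.
p = predict_action(Q=[-1, -3, -2, -4], L=[2.0, 0.5, 0.1, 0.2],
                   alpha=1.0, beta=1.0, confidence=0.9)
```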
Input signals can additionally or alternatively include local field potentials, signals capturing speech processes (actual or imagined speech processes), signals capturing oculomotor processes, signals capturing other motor processes, other biometric data (e.g., associated with respiration, etc.) and other suitable input signals, based on sensor outputs of the BCI. In particular, an embodiment of the detector subsystem can generate light signals associated with hemodynamic response in relation to different articulators, as described above. Additionally, one or more microphones can generate audio signals (e.g., capturing speech information) that can supplement data used to generate the predicted user action. Additionally, one or more video cameras can generate video data associated with the user's face or eyes and/or an environment of the user that can supplement data used to generate the predicted user action. Additionally, one or more touch sensors can generate signals indicative of motor skill activities of the user that can supplement data used to generate the predicted user action. Additionally, one or more motion sensors (e.g., of an inertial measurement unit) can generate signals indicative of motion of the user that can supplement data used to generate the predicted user action. Additionally, other brain activity sensors (e.g., electrodes for electroencephalography, etc.) can generate signals from electrical potentials that can supplement data used to generate the predicted user action. Additionally, other biometric sensors (e.g., respiration sensors, cardiovascular parameter sensors, etc.) can generate signals that can supplement data used to generate the predicted user action. In relation to processing of the neural data stream, input light signals derived from one or more embodiments of the detector subsystem described above can be classified by the computing subsystem hosting the neural decoding model into groups associated with different articulators (e.g., different articulators associated with different speech components, an example of which is shown in FIG. 14C). Then, outputs of the classification can be assembled (e.g., into n-grams), based upon temporal factors or other factors. The computing subsystem hosting the neural decoding model can then process the assembled information with an action prediction model. In one embodiment, the computing subsystem can transform signals associated with a sequence of activated motor cortex articulators (e.g., as captured in a set of light signals) into a phoneme chain representative of a command intended to be executed by the user. If including analysis of audio signals in the neural decoding model, the computing subsystem hosting the neural decoding model can also extract features of the audio signals, determine and annotate speech components or other audio components from the audio signals, and perform a segmentation operation to determine boundaries between individual speech components, in relation to a user action. If including analysis of video signals in the neural decoding model, the computing subsystem hosting the neural decoding model can also implement facial feature tracking algorithms, with fiducial labeling and gesture segmentation models, in relation to detecting a user action. 
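Because these supplemental streams arrive at different sampling rates, they must be registered onto a common timeline before fusion, as the signal-registration discussion in the next passage elaborates. A minimal interpolation sketch, assuming two one-dimensional streams with monotonically increasing timestamps in seconds (the 1 kHz grid is an illustrative choice):

```python
import numpy as np

def synchronize(t_a, sig_a, t_b, sig_b, rate_hz=1000.0):
    """Resample two sensor streams onto a common time grid by linear
    interpolation, over the interval where both streams have samples."""
    t0 = max(t_a[0], t_b[0])
    t1 = min(t_a[-1], t_b[-1])
    t = np.arange(t0, t1, 1.0 / rate_hz)  # common millisecond-level grid
    return t, np.interp(t, t_a, sig_a), np.interp(t, t_b, sig_b)
```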
The computing subsystem can additionally or alternatively process video signals in order to track motions of the eye(s) of the user, in order to determine coordinates of objects that the user is looking at and/or dwell time, in relation to a user action. The neural decoding model can additionally or alternatively accept other input signals, which can be aggregated by architecture of the neural decoding model to combine features into an output of a predicted user action (e.g., with a confidence score). In relation to the neural decoding model, the computing subsystem can include architecture for synchronization of input signals associated with the same or different sensors, in relation to an event (e.g., an environment state, an object state, a stimulus, a feedback stimulus provided based on a predicted user action, as described in more detail below, etc.). In order to synchronize input signals, the computing subsystem can include architecture for signal registration (e.g., based upon temporal signatures within different signals, based upon interpolation of signals with different associated sampling rates, etc.), to a desired degree (e.g., with millisecond alignment, with microsecond alignment, etc.). As such, the computing subsystem implements the neural decoding model to extract predicted user actions contemporaneously with (e.g., within a time threshold of) a time point associated with an event, such as a state of an environment or object associated with the user. 5.3 Method—Providing Feedback Stimulus As shown in FIGS. 14A, 14D, and 14F, where FIG. 14F depicts an expanded view of a portion of the process flow shown in FIG. 14B, the computing subsystem provides 1430 a feedback stimulus to the user based on the predicted user action output by the neural decoding model. The feedback stimulus can be a representation of the predicted user action output by the neural decoding model, in text or other visual format, in audio format, and/or in haptic format. For instance, if the predicted user action is associated with a command (e.g., a command to manipulate an object) or request, the feedback stimulus can be a textual representation of the command or request, or a symbolic representation of the command or request. In a specific example, if the command is an indication by the user that the user wants to move an object in a direction, the feedback stimulus can be a rendered text description of the direction or a rendered arrow depicting the direction, where the computing subsystem generates instructions for rendering the feedback stimulus at a display. In another specific example, if the command is an indication by the user that the user wants to move an object in a direction, the feedback stimulus can be an audio output that states the direction in speech, where the computing subsystem generates instructions for transmitting audio through a speaker of a device associated with the user. The representation of the command, provided to the user as the feedback stimulus, can be validated by the user (e.g., the user can indicate that the predicted user action is correct, based upon the feedback stimulus), as a transitional step to execution of the command or request by the computing subsystem or other device.
Additionally, as described below, the representation of the command, provided to the user as the feedback stimulus, can be used in a co-learning process in order to train the user's behavior (e.g., to provide feedback to the user so that the user can tune his/her behaviors to provide signals that are more easily decoded), such that training of the neural decoding model occurs in coordination with training of user behaviors to increase the accuracy of the neural decoding model. In providing the feedback stimulus, the computing subsystem can also generate instructions for execution of a command or request by the user, in relation to modulation of a state of a digital object 1432 or a physical object 1436, with generation 1434 of control instructions in a computer-readable medium for object modulation, several examples of which are described below. In the context of a game architected in a digital platform, the feedback stimulus can include direct manipulation of a user's character in the game, in terms of motion, behavior, or another action performable by the user's character. In a specific example, execution of the command can include moving the user's character in the game environment, in direct response to the predicted user action being associated with a direction in which the user intends the character to move. In the context of a game architected in a digital platform, the feedback stimulus can include direct manipulation of a game environment, in terms of adjustable parameters in the virtual environment. In the context of a virtual assistant platform, the feedback stimulus can include generation of control instructions for the virtual assistant to navigate and/or manipulate systems in order to perform a task for the user. For instance, the computing subsystem can generate control instructions that instruct the virtual assistant to execute communication (e.g., in a text message, in an audio message, etc.) with an entity associated with the user, to generate a reminder, to perform a calendar-related task, or to perform another task. In the context of a virtual environment, with menus or other selectable objects, the feedback stimulus can include execution of instructions for selection of the object or navigation of a menu, based upon the predicted user action. In the context of connected devices physically associated with a real environment of the user, the feedback stimulus can include manipulation of operation states of the connected device(s). In examples, the connected device(s) can include one or more of: temperature control devices, light control devices, speakers, locks, appliances, and other connected devices. In providing the feedback stimulus, the computing subsystem can generate control instructions for adjusting operational states of devices (e.g., turn off device, turn on device, transition device to idle, adjust device brightness, adjust device output color, adjust device output volume, adjust device sound output profile, adjust microphone operation state, adjust temperature output, adjust lock state, adjust appliance operation state, etc.). In other contexts, the feedback stimulus may not be related to a command or request. For instance, the predicted user action can be a subconscious cognitive state or affective state, and the computing subsystem can generate and/or execute instructions for manipulation of an object or environment based upon the subconscious cognitive state or affective state.
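The feedback pathways enumerated above (rendered text or audio plus, for commands, machine-readable control instructions) can be sketched as a small dispatcher. The action and instruction structures below are hypothetical; they stand in for whatever representation the computing subsystem actually uses.

```python
def feedback_stimulus(action):
    """Map a predicted user action to a feedback stimulus and, for
    commands, a control instruction in a computer-readable form."""
    if action["type"] == "move":        # e.g., game character command
        return {"render_text": "Move " + action["direction"],
                "instruction": {"op": "move", "dir": action["direction"]}}
    if action["type"] == "device":      # e.g., connected-device command
        return {"render_text": action["op"] + " " + action["device"],
                "instruction": {"op": action["op"], "device": action["device"]}}
    return {"render_text": "No command decoded"}  # e.g., affective state

print(feedback_stimulus({"type": "move", "direction": "left"}))
```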
5.4 Method—Co-Learning As shown in FIGS. 14A and 14B and described above, the system also implements one or more co-learning processes 1440, 1450 for improvement of the neural decoding model and/or behavior of the user. In relation to the co-learning processes, the computing subsystem implementing the method 1400 provides a closed loop process whereby the neural decoding model is updated and trained as the user interacts with content or other stimuli, and provides additional light-derived signals that capture brain activity. Additionally, the feedback stimuli provided to the user produce a behavior by the user and can be used to train the user in relation to adjusting responses to the environment in a manner that is more efficiently decoded by the neural decoding model. As such, the method 1400 can implement computing architecture that inputs an output derived from the feedback stimulus (e.g., verification that the feedback stimulus was appropriate in relation to a user action) back into the neural decoding model, in order to refine the neural decoding model. In relation to a closed-loop system, the computing subsystem can, based upon a behavior of the user in response to the feedback stimulus, process additional signals from the user to generate additional predicted user actions (with refinement of the neural decoding model). Then, based upon the additional predicted user actions, the computing subsystem can provide additional feedback stimuli to the user, derivatives of which and/or responses to which can be used to further refine the neural decoding model. 6. Conclusion The foregoing description of the embodiments has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the patent rights to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure. Some portions of this description describe the embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof. Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Embodiments may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. Embodiments may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein. Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the patent rights. It is therefore intended that the scope of the patent rights be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the patent rights, which is set forth in the following claims. 16773036 meta platforms, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:37AM Apr 27th, 2022 08:37AM Facebook Technology Software & Computer Services
nasdaq:fb Facebook Apr 26th, 2022 12:00AM Dec 8th, 2020 12:00AM https://www.uspto.gov?id=US11316911-20220426 Social media music streaming Systems and methods for social media music streaming may include (1) providing a music service within a social media platform, (2) presenting, via the music service, a music consumption interface that displays a collection of personal music stations, each of which is dedicated to music content associated with a different user of the social media platform and each of which is selected based on a user's listening behavior identified while the user is listening to music via the music service of the social media platform in a broadcasting mode, (3) receiving user input selecting one of the personal music stations, and (4) in response to receiving the user input, playing music content from the selected personal music station. Various other methods, systems, and computer-readable media are also disclosed. 11316911 1. A computer-implemented method comprising: providing a music service within a social media platform; presenting, via the music service, a music consumption interface that displays a plurality of personal music stations, wherein: each personal music station is dedicated to music content associated with a different user of the social media platform; and music content for a user's personal music station is selected based on the user's listening behavior identified while the user is listening to music via the music service of the social media platform in a broadcasting mode; receiving user input selecting one of the personal music stations; and in response to receiving the user input, playing music content from the selected personal music station. 2. The computer-implemented method of claim 1, further comprising, prior to presenting the music consumption interface, creating each personal music station by: monitoring, via the music service, the listening behavior of a user to whom the personal music station is dedicated while the user is listening to music in the broadcasting mode; and selecting music compositions for the personal music station that correspond to the monitored listening behavior. 3. The computer-implemented method of claim 2, wherein: monitoring the listening behavior comprises identifying one or more music compositions played for the user to whom the personal music station is dedicated; and selecting the music compositions for the personal music station comprises adding the identified music compositions to the personal music station. 4. The computer-implemented method of claim 1, wherein providing the music service within the social media platform comprises providing, within an interface of another service of the social media platform, a persistent entry point that navigates to an interface of the music service. 5. The computer-implemented method of claim 1, wherein providing the music service within the social media platform comprises providing the music service as a passive layer within another service provided by the social media platform. 6. The computer-implemented method of claim 5, wherein the other service comprises at least one of: a newsfeed; a digital stories service; or a messaging application. 7. The computer-implemented method of claim 1, further comprising automatically creating a shared music station for members of a group chat in response to the creation of the group chat. 8.
The computer-implemented method of claim 1, further comprising creating a most-popular station for a user of the music service by: identifying one or more contacts of the user; identifying a plurality of music compositions that are most popular with the user's contacts based on a popularity metric; and adding the music compositions identified as most-popular to the most-popular station. 9. The computer-implemented method of claim 1, further comprising: providing a personal station interface corresponding to the selected personal music station; and providing, within the personal station interface, at least one of: a like push button; a bookmark push button; a push button to digitally share a music composition from the selected personal music station via at least one of a digital story composition or a private message; or a push button to send a digital message to the user to whom the selected personal music station is dedicated. 10. The computer-implemented method of claim 1, further comprising creating a periodic music digest comprising a summary of digital social reactions to music compositions that is based on social media comments associated with the music compositions during a designated period. 11. A system comprising: a providing module, stored in memory, that provides a music service within a social media platform; a presenting module, stored in memory, that presents, via the music service, a music consumption interface that displays a plurality of personal music stations, wherein: each personal music station is dedicated to music content associated with a different user of the social media platform; and music content for a user's personal music station is selected based on the user's listening behavior identified while the user is listening to music via the music service of the social media platform in a broadcasting mode; an input module, stored in memory, that receives user input selecting one of the personal music stations; a music player module, stored in memory, that in response to the input module receiving the user input, plays music content from the selected personal music station; and at least one physical processor configured to execute the providing module, the presenting module, the input module, and the music player module. 12. The system of claim 11, wherein, prior to the presenting module presenting the music consumption interface, a station module creates each personal music station by: monitoring, via the music service, the listening behavior of a user to whom the personal music station is dedicated while the user is listening to music in the broadcasting mode; and selecting music compositions for the personal music station that correspond to the monitored listening behavior. 13. The system of claim 12, wherein: monitoring the listening behavior comprises identifying one or more music compositions played for the user to whom the personal music station is dedicated; and selecting the music compositions for the personal music station comprises adding the identified music compositions to the personal music station. 14. The system of claim 12, wherein providing the music service within the social media platform comprises providing, within an interface of another service of the social media platform, a persistent entry point that navigates to an interface of the music service. 15.
The system of claim 14, wherein providing the music service within the social media platform comprises providing the music service as a passive layer within another service provided by the social media platform. 16. The system of claim 15, wherein the other service comprises at least one of: a newsfeed; a digital stories service; or a messaging application. 17. The system of claim 11, further comprising a station module that automatically creates a shared music station for members of a group chat in response to the creation of the group chat. 18. The system of claim 11, further comprising a station module that creates, for a user of the music service, a poly-user station dedicated to the user and at least one additional user by: identifying an overlap between a music preference of the user and a music preference of the additional user; and adding, to the poly-user station, one or more music compositions that correspond to the identified overlap. 19. The system of claim 11, further comprising a station module that creates a most-popular station for a user of the music service by: identifying one or more contacts of the user; identifying a plurality of music compositions that are most popular with the user's contacts based on a popularity metric; and adding the music compositions identified as most-popular to the most-popular station. 20. A non-transitory computer-readable medium comprising one or more computer-readable instructions that, when executed by at least one processor of a computing device, cause the computing device to: provide a music service within a social media platform; present, via the music service, a music consumption interface that displays a plurality of personal music stations, wherein: each personal music station is dedicated to music content associated with a different user of the social media platform; and music content for a user's personal music station is selected based on the user's listening behavior identified while the user is listening to music via the music service of the social media platform in a broadcasting mode; receive user input selecting one of the personal music stations; and in response to receiving the user input, play music content from the selected personal music station. 20 CROSS REFERENCE TO RELATED APPLICATION This application is a continuation of U.S. patent application Ser. No. 16/555,690, entitled “SOCIAL MEDIA MUSIC STREAMING,” filed Aug. 29, 2019, the disclosure of which is incorporated, in its entirety, by this reference. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure. FIG. 1 is a flow diagram of an exemplary method for providing social media music streaming. FIG. 2 is a block diagram of an exemplary system for providing social media music streaming. FIG. 3 is an illustration of an exemplary persistent entry point element that navigates to a music service interface. FIG. 4A is an illustration of an exemplary contextual music player with an exemplary broadcast push button. FIG. 4B is an illustration of the exemplary contextual music player depicted in FIG. 4A, which has been minimized and placed in a hover screen over a newsfeed interface. FIG. 4C is an illustration of the exemplary contextual music player depicted in FIGS. 4A and 4B, which has been further minimized. FIG.
5 is an illustration of an exemplary personal music station interface. FIG. 6 is an illustration of an exemplary player interface corresponding to a personal music station included in the interface depicted in FIG. 5. FIG. 7 is an illustration of an exemplary hover interface that includes one or more actions that may be performed in connection with a music composition being played from the personal music station depicted in FIG. 6. FIG. 8 is an illustration of an exemplary messaging interface corresponding to the player interface depicted in FIG. 6 and/or the hover interface depicted in FIG. 7. FIG. 9 is an illustration of an exemplary station interface that includes a collection of selectable personal stations. FIG. 10 is an illustration of a messaging interface corresponding to an exemplary group chat that operates in conjunction with a music service. FIGS. 11A-11B are illustrations of interfaces associated with a shared music station that corresponds to the group chat depicted in FIG. 10. FIG. 12 is an illustration of an exemplary group chat message that coincides with the shared music station depicted in FIGS. 11A-11B. Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims. DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS People all over the world feel a need for music. Some music consumption is a personal experience. However, music is also viewed by many as a meaningful social experience. In light of this, the present disclosure identifies a need for improved systems and methods for providing a computer-implemented music service that improves the social connection experienced through music. As will be described in greater detail below, embodiments of the present disclosure may provide systems and methods for providing a computer-implemented music service within a social media platform. In some examples, the music service may be provided as an active layer within the social media platform. For example, the social media platform may provide a digital media player (e.g., a music player) that may be used to play compositions (e.g., to play compositions that have been organized by the social media platform into music stations for its users). Additionally or alternatively, the music service may be provided as a passive layer within the various services offered by the social media platform (e.g., a digital newsfeed service, a digital stories service, a messenger service, etc.). The music service may enable its users to easily share their music listening activities using a selectable broadcast push button. In some examples, the music service may create a personal music station (that is, a user-specific music station) for a user of the music service and/or of the social media platform. The personal music station may represent a public-facing music station dedicated to a particular user and may include music listened to by the user and/or music predicted to be enjoyed by the user based on the music listened to by the user. 
In some examples, a personal music station may include a persistent queue of music compositions. That is, the personal music station may include, within its queue of music compositions, music compositions corresponding to past music consumption, as opposed to only including music currently being listened to by a user. Thus, the personal music station may enable asynchronous sharing of music listening behavior. In some examples, a personal music station may only include music content corresponding to listening behavior identified while a broadcast push button is selected (e.g., within a player interface). In these examples, the broadcast push button may provide an easy sharing mechanism for a user, which enables the user to passively share the user's music listening activities with others. In some examples, a user of the music service may be presented with a digest of other users' personal music stations. For example, a user may be presented with a music station interface that includes the personal music stations of one or more of the user's contacts. In these examples, each personal music station within the music station interface may be selectable. When selected, the music service may play music corresponding to the selected personal music station. This may enable users to discover music from others' listening activities. In some embodiments, the music service may enable users to start conversations relating to their music listening activities and/or to organize group playlists (e.g., using a messenger service). In certain embodiments, the music service may create music stations that promote a specific social connection. For example, the music service may create, for a user, a playlist consisting of music that has been listened to by both the user and one of the user's contacts. As another example, the music service may create, for a user, a playlist of music that is most popular with the user's contacts. In some examples, the music service may provide information relating to the cultural relevance of music being played (e.g., related videos, cover albums, articles, comments posted by other users, etc.). In one embodiment, the music service may provide users with a weekly music summary. The weekly music summary may include a variety of information (e.g., most played songs of the week, new music of the week, etc.). In one embodiment, the music service may automatically create yearly music awards based on listening data aggregated from its users. As will be explained in greater detail below, embodiments of the present disclosure may improve systems for providing music in ways that digitally promote social connection. The present disclosure may improve the functioning of a computer itself by improving music data organization within devices. The following will provide, with reference to FIG. 1, detailed descriptions of computer-implemented methods for creating and maintaining personal music stations within a social media platform. Detailed descriptions of corresponding example systems will also be provided in connection with FIG. 2. Detailed descriptions of interfaces related to a corresponding music service will be provided in connection with FIGS. 3-12. FIG. 1 is a flow diagram of an exemplary computer-implemented method 100 for creating and/or maintaining personal music stations within a social media platform. The steps shown in FIG. 1 may be performed by any suitable computer-executable code and/or computing system, such as the systems described herein. In one embodiment, the steps shown in FIG. 
1 may be performed by modules operating within a computing device. For example, the steps shown in FIG. 1 may be performed by modules operating in a server 202 and/or modules operating in a user device 204 (e.g., as shown in exemplary system 200 in FIG. 2). Server 202 generally represents any type or form of backend computing device that may perform one or more functions directed at providing music to users of a music service 206. In some examples, server 202 may perform music functions in conjunction with a social media platform 208 that provides music service 206 to its users. Although illustrated as a single entity in FIG. 2, server 202 may include and/or represent a group of multiple servers that operate in conjunction with one another. User device 204 generally represents any type or form of computing device capable of reading computer-executable instructions. For example, user device 204 may represent a smart phone and/or a tablet. Additional examples of user device 204 may include, without limitation, a laptop, a desktop, a wearable device, a personal digital assistant (PDA), etc. In some examples, a user 210 of user device 204 may also be a user of music service 206. In examples in which music service 206 is provided by social media platform 208, user 210 may be a member of social media platform 208 and user device 204 may have installed an instance of a social media application 212 that operates as part of social media platform 208. Additionally or alternatively, user device 204 may have installed a browser that may navigate to one or more webpages maintained by social media platform 208. In these examples, music service 206 may operate as part of social media application 212 and/or one or more webpages maintained by social media platform 208. Social media platform 208 may provide a variety of services (e.g., in addition to music service 206) for the users within its network. In one example, social media platform 208 may provide a newsfeed service. The term “newsfeed” may generally refer to any type or form of social media consumption channel that presents a scrollable collection of newsfeed compositions. In some examples, a newsfeed may scroll (e.g., upward or downward) to reveal different compositions within the newsfeed, in response to receiving user scrolling input. In one example, the scrollable collection may include a collection of newsfeed compositions created by contacts of a particular user (e.g., friends of the particular user). The term “newsfeed composition” as used herein generally refers to any type or form of composition that may be displayed in a newsfeed. Newsfeed compositions may include, without limitation, text-based compositions, music compositions (as will be described in greater detail below), media-based compositions (which may include either a single media item or a collage of multiple media items), and/or links to online articles. As another example, social media platform 208 may provide a digital story service. The digital story service may provide users with a story consumption channel, which presents a continuous series of digital story compositions to a story-consumer, one by one. In one example, the story consumption channel may transition from presenting one digital story composition to the next automatically, without requiring any user input to do so. In some examples, a digital story composition may only be viewable for a predetermined amount of time. For example, a digital story composition may be set to disappear after twenty-four hours. 
The term “digital story composition” may generally refer to any type or form of social media composition intended for a story consumption channel. A digital story composition may include a variety of content (e.g., a digital photograph, a graphic, text, a digital video, and/or a digital recording of a music composition). In some examples, digital story compositions from a same source (e.g., created and posted by a same user) may be grouped together within the story consumption channel, such that each digital story composition from a particular source is displayed prior to displaying digital story compositions from another source. As another example, social media platform 208 may provide a messaging service. The term “messaging service” may generally refer to any type or form of digital message delivery system that enables users of social media platform 208 to exchange messages (e.g., private messages between two or more users). These messages may include a variety of content (e.g., text, links, live video, voice recordings, music compositions, etc.) and may take a variety of forms (e.g., e-mail, text message, group chat, etc.). Returning to FIG. 1, at step 110, one or more of the systems described herein may provide a music service within a social media platform. For example, as illustrated in FIG. 2, a providing module 214 may provide music service 206 (e.g., to user 210 via user device 204) as part of social media platform 208. The term “music service” may generally refer to any type or form of service that digitally provides music. In some examples, music service 206 may represent a web-based service that streams music to a device that is connected to a network such as the Internet. In one such example, music service 206 may additionally provide music when the device is offline (that is, when the device is disconnected from the network). For example, music service 206 may enable music to be downloaded for a designated amount of time. Music service 206 may provide music in a variety of ways. In some examples, music service 206 may provide a music player that plays music compositions (i.e., digital recordings of music compositions). In one example, the music player may play music that has been requested directly via user input. For example, the music player may receive a user request for a particular music composition and may play the requested music composition in response to receiving the request. In another example, music service 206 may create a music station for a user (e.g., user 210) and the music player may play music content from the music station. The term “music station” may refer to any type or form of digital container that stores a queue of music compositions that may be played via a music player provided by music service 206. In some examples, the queue may represent an evolving queue and music compositions may continually be added to the queue in real time (e.g., as the music compositions within the queue are being played). In other examples, the queue may represent a designated set of music compositions (e.g., a playlist). In some examples, the queue may be filled with music compositions that correspond to a particular genre of music or that relate to a common theme. The music compositions may be manually added to a music station via user input, automatically added based on deduced user preferences, or added through a combination of the two. Music service 206 may deduce a user's preferences in a variety of ways. 
In one embodiment, a preference-deduction module may deduce the preferences based on the user's listening history with music service 206 (e.g., songs played for the user, posted by the user, and/or designated as liked by the user in the past). In one embodiment, a user's preferences may be based in part on the user's current context. Music service 206 may provide music in various modes. For example, music service 206 may provide music via an intentional-user mode. In this example, music service 206 may enable users to actively search for music to consume and/or share. In another example, music service 206 may provide music via an ambient mode. In this example, music service 206 may provide a user with music that the user has not specifically searched for (e.g., by providing an automatically generated music station). In an ambient mode, music service 206 may (1) automatically create a music station and (2) automatically provide a push notification with a suggestion to consume the automatically created music station. In some examples, as discussed above, user 210 may be a member of social media platform 208 and music service 206 may be provided to user 210 as part of social media platform 208 (e.g., via social media application 212). Music service 206 may operate within social media platform 208 in a variety of ways. In one embodiment, music service 206 may operate as a passive layer that operates in the background of another service provided by social media platform 208 and/or as a supplemental feature of another service provided by social media platform 208. For example, music service 206 may operate as a passive layer within a digital story service, a messaging service, and/or a newsfeed service. As a specific example, a composition interface that enables user 210 to create a social media composition (e.g., a digital story composition and/or a newsfeed composition) may include a selectable element that enables user 210 to add music content to the social media composition. The composition interface may enable the user to create a social media composition that includes music content as the sole and/or primary element of the social media composition or that includes music content as one of several components of the social media composition (e.g., as background music to a digital photograph). In another example, music service 206 may operate as a passive layer within a messenger service. In this example, a messenger interface that enables user 210 to create private messages may include a selectable element that enables user 210 to share music in the private message, as will be described in greater detail below in connection with step 140. In addition, or as an alternative, to operating as a passive layer within social media platform 208, music service 206 may operate as part of an active layer within social media platform 208 (e.g., within an active-layer interface or a set of active-layer interfaces dedicated to music consumption and/or music sharing). In some examples, an active-layer interface may correspond to a music player, which may be used to play music content, as will be described in greater detail below. The term “music player” may generally refer to any type or form of application software, provided and/or utilized by music service 206, that is configured to play multimedia files (e.g., audio files) provided via music service 206. 
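As a minimal sketch of the preference-deduction module described above, the listening-history signals the disclosure names (songs played, posted, and liked) could be combined into a weighted tally. The event weights and the `deduce_preferences` helper are hypothetical; the disclosure lists the signals but not how they are combined.

```python
from collections import Counter
from typing import Dict, Iterable, Tuple

# Hypothetical weights; the disclosure names plays, posts, and likes as
# preference signals but does not specify how to combine them.
EVENT_WEIGHTS = {"played": 1.0, "posted": 2.0, "liked": 3.0}

def deduce_preferences(history: Iterable[Tuple[str, str]]) -> Dict[str, float]:
    """Score labels (e.g., artists or genres) from (event_type, label) pairs
    drawn from a user's listening history."""
    scores: Counter = Counter()
    for event_type, label in history:
        scores[label] += EVENT_WEIGHTS.get(event_type, 0.0)
    return dict(scores)
```

A station's evolving queue could then be seeded from the highest-scoring labels, with the user's current context (mentioned above) adjusting the weights.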
In additional or alternative examples, which will be described in greater detail below in connection with step 120, an active-layer interface may correspond to a music consumption interface that displays a collection of personal music stations. Additionally or alternatively, an active-layer interface may correspond to an informational interface (e.g., a current music events page dedicated to information describing music that is currently trending within social media platform 208's user base). In embodiments in which music service 206 functions within an active layer of social media platform 208, social media platform 208 may provide a persistent entry point to one or more active-layer interfaces. For example, as illustrated in FIG. 3, an interface 300 may include a persistent entry point 302, which may be permanently affixed to its position within interface 300, that navigates to the active-layer interface. In some examples, a position of persistent entry point 302 may remain the same within a variety of different interfaces provided by social media platform 208. In some embodiments, as mentioned above, music service 206 may provide a music player, which may be user-operable via a player interface (e.g., an active-layer interface). In these embodiments, the player interface may be presented in a full-screen mode, as illustrated by player interface 400 in FIG. 4A. The player interface in the full-screen mode may include a variety of content. For example, the player interface in full-screen mode may include (1) a list of music compositions that are currently being played, that have been played, and/or are in queue to be played and (2) user controls that enable the user to pause the playing and/or skip forward and/or backward to other music compositions. The player interface may also be minimizable, as shown in FIGS. 4B and 4C, or dismissible. The minimized player interface may provide a minimal amount of information. For example, the minimized player interface may only (1) display a title of a music composition currently playing and/or (2) provide user controls allowing the user to pause the playing and/or skip forward and/or backward to other music compositions. In some examples, the minimized player interface may hover over another interface provided by social media platform 208 (e.g., via social media application 212), such as newsfeed 402 illustrated in FIGS. 4B-4C. In some examples, the player interface may also include a broadcasting element. FIGS. 4A-4C provide an exemplary depiction of a broadcast push button 404. In these embodiments, a broadcasting module may be configured to broadcast the music content consumed via the player interface while the broadcasting element is selected. In some examples, the broadcasting module may enable a user to select an audience to which the music content will be broadcasted (e.g., a public audience, a contacts-only audience, an audience of select contacts, etc.). The broadcasting module may broadcast the music content consumed while the broadcasting element is selected in a variety of ways. In some examples, the broadcasting module may broadcast the music content to a user profile. Additionally or alternatively, the broadcasting module may broadcast the music content to a social media composition (e.g., a digital story composition and/or a newsfeed composition). In one embodiment, the broadcasting module may broadcast the music content to a personal music station, as will be described in greater detail below in connection with steps 120-140. 
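The broadcast-gated sharing described above, where only music consumed while the broadcasting element is selected reaches the public-facing station, reduces to a simple filter over listening events. The following is an illustrative sketch; `ListeningEvent`, `PersonalStation`, and the audience field are assumptions rather than elements recited in the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative types; the disclosure does not specify data structures.
@dataclass
class ListeningEvent:
    track_id: str
    broadcast_on: bool  # state of the broadcast element during playback

@dataclass
class PersonalStation:
    owner_id: str
    audience: str = "contacts"  # e.g., "public", "contacts", or select contacts
    queue: List[str] = field(default_factory=list)  # persistent queue of tracks

    def ingest(self, event: ListeningEvent) -> None:
        # Only behavior observed while the broadcast element is selected
        # contributes to the station; past plays remain in the queue,
        # enabling the asynchronous sharing described above.
        if event.broadcast_on and event.track_id not in self.queue:
            self.queue.append(event.track_id)
```

For example, `station.ingest(ListeningEvent("song-123", broadcast_on=True))` would add the track, while a play with the broadcast element deselected leaves the station unchanged.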
In these examples, as will be described in greater detail below, a user of social media platform 208 may be configured with a dedicated music station that includes the music consumed while the user has the broadcasting element selected. Returning to FIG. 1, at step 120, one or more of the systems described herein may present, via the music service, a music consumption interface that displays a collection of personal music stations, each of which is dedicated to music content associated with a different user of the social media platform. For example, as illustrated in FIG. 2, a presenting module 216 may present a music consumption interface 218 that displays a collection of personal music stations 220, each of which is dedicated to a different user of social media platform 208. The term “personal music station” may refer to any type or form of music station that is dedicated to music content that reflects the music preferences of a particular user. Each personal music station may be configured as a public-facing station. That is, a user's personal music station may be conceptualized as a means for sharing and/or broadcasting something of the user with others, similar to the way a newsfeed and/or digital story may be conceptualized as a means for sharing and/or broadcasting. In one embodiment, the disclosed systems and methods may also provide a group music station, which functions in the same ways described for personal music stations 220 but that is dedicated to music content that reflects the music preferences of a particular group of users. In some examples, a station module 222 may automatically create a personal music station for each user that is registered with social media platform 208 (that is, that has an account with social media platform 208). In these examples, station module 222 may maintain each personal music station as long as its corresponding user account is active and may designate each personal music station by the username associated with the corresponding user account. Station module 222 may create a personal music station in a variety of ways. In one example, station module 222 may monitor, via music service 206, the listening behavior of a user to whom the personal music station is dedicated. For example, station module 222 may monitor music searched for and/or listened to via the user's user account with social media platform 208. Then, station module 222 may select music compositions for the personal music station that correspond to the monitored listening behavior. Station module 222 may select music compositions that correspond to the monitored listening behavior in a variety of ways. In some examples, station module 222 may (1) identify music compositions that were played for a user, (2) deduce that the music compositions that were played reflect the user's preferences, and (3), in response to the deducing, select the music compositions that were played and/or music compositions that are musically similar to the music compositions that were played. Additionally or alternatively, station module 222 may (1) identify music compositions that the user designated as enjoyable (e.g., by receiving a user selection of a “like” push button while the music composition was playing and/or by receiving a user submission of one or more music compositions that the user indicates reflect the user's preferences) and (2) select the identified music compositions and/or music compositions that are musically similar to the identified music compositions. 
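The selection logic just described, playing history plus liked compositions plus musically similar compositions, can be sketched as follows, with playlist co-location (one of the similarity signals discussed next) standing in for the similarity model. The function names and the `min_count` threshold are illustrative assumptions, not part of the disclosure.

```python
from collections import defaultdict
from itertools import combinations
from typing import Dict, Iterable, List, Set, Tuple

def cooccurrence_counts(playlists: Iterable[List[str]]) -> Dict[Tuple[str, str], int]:
    """Count how often two tracks are co-located in the same playlist; a
    higher count is treated as evidence of musical similarity."""
    counts: Dict[Tuple[str, str], int] = defaultdict(int)
    for playlist in playlists:
        for a, b in combinations(sorted(set(playlist)), 2):
            counts[(a, b)] += 1
    return dict(counts)

def select_station_tracks(
    played: Iterable[str],
    liked: Iterable[str],
    counts: Dict[Tuple[str, str], int],
    min_count: int = 2,  # illustrative threshold, not from the disclosure
) -> List[str]:
    """Select tracks played for the user, tracks the user designated as
    enjoyable, and tracks deemed musically similar to either group."""
    seeds: Set[str] = set(played) | set(liked)
    selected: List[str] = sorted(seeds)
    for (a, b), n in counts.items():
        if n < min_count:
            continue
        if a in seeds and b not in seeds and b not in selected:
            selected.append(b)
        elif b in seeds and a not in seeds and a not in selected:
            selected.append(a)
    return selected
```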
Station module 222 may determine that music compositions are musically similar using any type or form of similarity-detection model. In some examples, station module 222 may determine that the music compositions are musically similar based on a usage analysis. For example, station module 222 may determine that music compositions are musically similar based on data collected from playlists of users within a user base. As a specific example, station module 222 may determine that two music compositions are similar because the two music compositions are co-located in a same playlist. As another example, station module 222 may determine that music compositions are musically similar based on a musical quality (e.g., a beat and/or tempo), an artist, and/or a social reaction (e.g., derived from digital comments posted to social media platform 208). In some examples, station module 222 may determine that music compositions are similar using machine learning (e.g., based on an output received from a neural network). The disclosed systems and methods may provide a variety of vehicles for giving a user control over the privacy of the user's personal music station. For example, a user may select an audience for the user's personal music station via a setting in the user's user account. In some examples, as described above, a music player provided by music service 206 may provide an interface, such as player interface 400 depicted in FIGS. 4A-4C, that displays information relating to a music composition currently being played via the music service for a user. As discussed above, the interface may include a broadcast push button, such as broadcast push button 404 in FIGS. 4A-4C. In these examples, a user may control which music is added to the user's personal music station using the broadcast push button. For example, station module 222 may be configured to select music compositions for a personal music station based only on listening behavior monitored while the broadcast push button is in the on-state. Music consumption interface 218 may be configured in a variety of ways. In some examples, music consumption interface 218 may be exclusively dedicated to presenting personal music stations. FIG. 5 depicts an exemplary music consumption interface 500 that is dedicated exclusively to presenting personal music stations. In other examples, music consumption interface 218 may be dedicated to providing a variety of different stations, including a set of personal music stations. FIG. 9 depicts an exemplary music consumption interface 900 with this configuration. Returning to FIG. 2, presenting module 216 may select personal music stations to include within music consumption interface 218 in a variety of ways. For example, presenting module 216 may include the music stations of user 210's contacts, as shown in FIG. 5, of users being followed by user 210, as shown in FIG. 9, and/or personal music stations that have been designated as open to the public. Presenting module 216 may rely on any type or form of prioritization algorithm to determine an order in which to present personal music stations. In some embodiments, music consumption interface 218 may include a search element that enables user 210 to browse the personal music stations and/or the content of the personal music stations associated with music consumption interface 218. 
As a specific example, a search element may enable user 210 to search for personal music stations with certain criteria (e.g., for personal music stations of female contacts and/or for the personal music station of a particular contact) and/or to search for music (e.g., music compositions by a particular artist and/or that relate to a particular theme and/or music type) that is included in the personal music stations of user 210's contacts and/or a specified subset of user 210's contacts. Returning to FIG. 1, at step 130, one or more of the systems described herein may receive user input selecting one of the personal music stations. For example, an input module 224 may receive user input selecting a personal music station 226 from among personal music stations 220. Turning to FIG. 5 as a specific example, input module 224 may receive user input selecting personal music station 226 corresponding to Penelope Witherspoon. Input module 224 may receive the user input in a variety of ways. In some embodiments, input module 224 may receive the user input via an auxiliary device, such as a keyboard and/or a digital mouse. Additionally or alternatively, input module 224 may receive the user input via a touchscreen. In response to receiving the user input, one or more of the systems described herein may play music content from the selected personal music station (step 140 in FIG. 1). For example, a music player module 230 may play music content from personal music station 226 in response to receiving the user input selecting personal music station 226. In some examples, music player module 230 may, in response to receiving the user input, display a personal station interface that (1) displays one or more music compositions from personal music station 226 and (2) provides a music player for playing the displayed music compositions. FIG. 6 provides a specific example of a personal station interface 600 that may be provided in response to receiving user input to personal music station 226 via music consumption interface 500 depicted in FIG. 5. In some examples, a personal station interface may provide a variety of digital means for socially connecting via personal music station 226. For example, as shown in FIG. 6, personal station interface 600 may include a “like” push button 602 for digitally liking a music composition being played, a “bookmark” push button for bookmarking a music composition being played, and/or a “more” push button 604 that may navigate to additional music-response options. As shown in FIG. 7, “more” push button 604 may navigate to an additional interface 700 with additional options (e.g., to add a music composition from personal music station 226 to a digital story composition, to digitally share a music composition from personal music station 226 in a private message, to send a digital message to the user to whom personal music station 226 is dedicated, etc.). FIG. 8 provides an exemplary messaging interface 800 of a digital message that may be initiated using a “message” push button illustrated in FIG. 7. By creating and maintaining public-facing personal music stations as described above, the disclosed systems and methods may provide an interesting and searchable structure for organizing music. This may enable a form of music discovery that promotes digital social engagement through music. The disclosed systems and methods may enable social engagement via music in a variety of additional ways, in addition to enabling social engagement by providing personal music stations. 
In one embodiment, station module 222 may create a poly-user station for multiple users who are contacts within social media platform 208 (e.g., user 210 and one or more additional users). In this embodiment, station module 222 may (1) identify an overlap in the users' music preferences and (2) add music compositions to the poly-user station that correspond to the identified overlap. Station module 222 may identify the overlap in a variety of ways. For example, station module 222 may (1) identify a set of music compositions known or predicted to be of interest to user 210, (2) identify a set of music compositions known or predicted to be of interest to the additional users, and (3) identify an overlap in the set of music compositions. In one such example, station module 222 may identify the overlap by (1) scanning a database of music compositions previously played by the music station for each of the users and (2) identifying one or more common music compositions that are included in each database. Additionally or alternatively, station module 222 may (1) deduce a user music preference of each user (e.g., based on user listening history as described above in connection with step 110) and (2) identify an overlap in the deduced user preferences. In some examples, station module 222 may additionally create a most-popular playlist for user 210. In these examples, station module 222 may (1) identify one or more of user 210's contacts, (2) identify music compositions that are most popular with the contacts based on a popularity metric, and (3) add the music compositions identified as most-popular to the most-popular playlist. Station module 222 may rely on a variety of popularity metrics in determining which music compositions are most popular with user 210's contacts. For example, station module 222 may determine which music compositions have the highest number of listens by user 210's contacts, which music compositions have been listened to by more than a threshold number and/or ratio of user 210's contacts, and/or which music compositions have been listened to more than a threshold number of times by user 210's contacts. In one embodiment, a digest module may create a periodic music digest to provide to user 210, which includes music-related information relating to a current time period. In this embodiment, the digest module may provide the periodic music digest via music service 206 (e.g., within a music-dedicated interface provided by social media application 212). The periodic music digest may include a variety of information. In some examples, the periodic music digest may include information collected via social media platform 208. For example, the periodic music digest may include a list of music compositions most popular with social media platform 208's user base, such as a list of most played songs of the week. Additionally or alternatively, the periodic music digest may include a summary of digital social reactions to music compositions (e.g., social media comments associated with music compositions during the period). In some examples, the periodic music digest may include one or more new music compositions created and/or first listened to by members of the user base during the period and/or information relating to new music and/or current music events (e.g., information collected from third-party webpages). In one embodiment, an awards module may automatically create music awards. 
For example, the awards module may (1) aggregate listening behavior of its user base and (2) create an award based on the aggregated listening behavior (e.g., an award for a music composition that was most listened to by the user base, most commented on, most digitally liked, most often shared via social media platform 208, etc.). In some embodiments, the disclosed systems and methods may provide a message-sharing platform that includes a music sharing system that enables social music engagement within groups. In one example, the music sharing system (e.g., operating as part of music service 206) may enable users to start conversations relating to their music listening activities. For example, a player interface used to play music compositions may include a share-button (i.e., a selectable element) that may be used to share a music composition, a playlist, an album, a music station, and/or a collection of music compositions currently being listened to. The share button may be used to share the music compositions in a variety of digital locations (e.g., to a newsfeed composition, a personal music station, a digital story, and/or a private message). In some examples, the music sharing system may enable shared music experiences within a messaging system. For example, the music sharing system may allow members of a group chat to digitally share music. In one example, the music sharing system may create a shared music station for members of a group chat. The music sharing system may create the shared music station automatically (e.g., in response to the creation of the group chat) or in response to receiving user input initiating the creation of the shared music station. In some examples, the music sharing system may enable the members of the group chat to add music compositions to the shared music station. Additionally or alternatively, the music sharing system may automatically add music compositions to the shared music stations. In one such embodiment, the music sharing system may select music compositions that are automatically added based on (1) shared music preferences of the members of the group chat and/or (2) monitored listening behavior of one or more members of the group chat (e.g., music compositions listened to while a broadcast push button is selected). In some examples, the shared music station may automatically be created for each group chat that is created via the messaging system. In other examples, the shared music station may be created in response to affirmative user input initiating the same. FIGS. 10-12 provide a specific example of a group chat entitled “Roommates” that operates in connection with a music sharing system. In this example, a group chat interface 1000 may include a music station selectable element 1002. When selected, music station selectable element 1002 may navigate to a shared music station corresponding to the group chat (e.g., depicted within player interface 1100 in FIGS. 11A-11B). Player interface 1100 may include a variety of information (e.g., a queue of music compositions, a music composition currently being played, and/or a list of members of the group chat that are currently listening). In one embodiment, player interface 1100 may be used to synchronously play music compositions (e.g., via a music player 1102 included within player interface 1100) to each member of the group chat that is currently accessing the shared music station. 
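A rough sketch of the shared-station bookkeeping described above might look like the following; `MusicSharingSystem`, `SharedStation`, and their methods are hypothetical names, and the automatic-creation path is only one of the two embodiments mentioned.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class SharedStation:
    members: Set[str]
    queue: List[str] = field(default_factory=list)

class MusicSharingSystem:
    """One shared music station per group chat, created automatically."""

    def __init__(self) -> None:
        self.stations: Dict[str, SharedStation] = {}

    def on_group_chat_created(self, chat_id: str, members: Set[str]) -> None:
        # Created in response to the creation of the group chat itself; an
        # alternative embodiment would wait for affirmative user input.
        self.stations[chat_id] = SharedStation(members=set(members))

    def add_track(self, chat_id: str, member: str, track_id: str) -> None:
        # Any member of the group chat may add compositions to the queue.
        station = self.stations[chat_id]
        if member in station.members and track_id not in station.queue:
            station.queue.append(track_id)
```

Either the synchronous playback just described or the asynchronous playback discussed next could then read from a station's queue.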
In another embodiment, player interface 1100 may be used to asynchronously play the music compositions (e.g., members of the group chat may select any song from the shared music station to play at any time). In some examples, the player interface may be used to add music to the shared music station (e.g., via an element 1104). By creating a shared music station for members of a group chat, the disclosed music sharing system may facilitate members of a group chat to share music and discuss the shared music (e.g., using their group chat). FIG. 12 illustrates a digital messaging conversation 1200 that incorporates music from a group chat's shared music station. As described throughout the present disclosure, the disclosed systems and methods may provide systems and methods for social media music streaming. In one example, a computer-implemented method may include (1) providing a music service within a social media platform, (2) presenting, via the music service, a music consumption interface that displays personal music stations, each of which is dedicated to music content associated with a different user of the social media platform, (3) receiving user input selecting one of the personal music stations, and (4) in response to receiving the user input, playing music content from the selected personal music station. In one embodiment, the computer-implemented method may further include, prior to presenting the music consumption interface, creating each personal music station by (1) monitoring, via the music service, the listening behavior of a user to whom the personal music station is dedicated and (2) selecting music compositions for the personal music station that correspond to the monitored listening behavior. In this embodiment, monitoring the listening behavior may include identifying one or more music compositions played for the user to whom the personal music station is dedicated and selecting the music compositions for the personal music station may include adding the identified music compositions to the personal music station. Additionally or alternatively, (1) monitoring the listening behavior may include (i) providing, to the user, an interface that includes a broadcast push button and displays information relating to a music composition currently being played via the music service, (ii) receiving user input selecting an on-state for the broadcast push button, and (2) selecting the music compositions for the personal music station may include selecting the music compositions based only on listening behavior monitored while the broadcast push button is in an on-state. In some examples, providing the music service within the social media platform may include providing the music service as a passive layer within another service provided by the social media platform (e.g., a newsfeed, a digital stories service, and/or a messaging application). In one embodiment, the computer-implemented method may further include creating, for a user of the music service, a poly-user station dedicated to the user and at least one additional user by (1) identifying an overlap between a music preference of the user and a music preference of the additional user and (2) adding, to the poly-user station, one or more music compositions that correspond to the identified overlap. 
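The overlap identification summarized above, together with the popularity-metric selection recapped next, reduces to set and counting operations. In this sketch, `min_listeners` is an illustrative stand-in for the threshold-based popularity metric; the disclosure also mentions raw play counts and listener ratios as alternatives.

```python
from collections import Counter
from typing import Dict, List, Set

def poly_user_overlap(preferences: List[Set[str]]) -> Set[str]:
    """Tracks known or predicted to interest every user in the group."""
    return set.intersection(*preferences) if preferences else set()

def most_popular_tracks(contact_plays: Dict[str, List[str]],
                        min_listeners: int) -> List[str]:
    """Tracks listened to by at least min_listeners distinct contacts."""
    listeners: Counter = Counter()
    for tracks in contact_plays.values():
        for track in set(tracks):  # count each contact at most once per track
            listeners[track] += 1
    return [t for t, n in listeners.most_common() if n >= min_listeners]
```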
In some examples, the computer-implemented method may further include creating a most-popular station for a user of the music service by (1) identifying one or more contacts of the user, (2) identifying music compositions that are most popular with the user's contacts based on a popularity metric, and (3) adding the music compositions identified as most-popular to the most-popular station. In some embodiments, the computer-implemented method may further include providing a user of the music service with a periodic music digest that may include music-related information relating to a current time period. In one embodiment, the computer-implemented method may further include automatically creating a music award based on aggregated listening behavior of users of the music service. In one embodiment, a system for implementing the above-described method may include (1) a providing module, stored in memory, that provides a music service within a social media platform, (2) a presenting module, stored in memory, that presents, via the music service, a music consumption interface that displays personal music stations, each of which is dedicated to music content associated with a different user of the social media platform, (3) an input module, stored in memory, that receives user input selecting one of the personal music stations, (4) a music player module, stored in memory, that in response to the input module receiving the user input, plays music content from the selected personal music station, and (5) at least one physical processor configured to execute the providing module, the presenting module, the input module, and the music player module. In some examples, the above-described method may be encoded as computer-readable instructions on a non-transitory computer-readable medium. For example, a computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to (1) provide a music service within a social media platform, (2) present, via the music service, a music consumption interface that displays personal music stations, each of which is dedicated to music content associated with a different user of the social media platform, (3) receive user input selecting one of the personal music stations, and (4) play music content from the selected personal music station. As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor. The term “memory device” generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory. 
In addition, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor. Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks. In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device. The term “computer-readable medium” may refer to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems. The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed. The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the instant disclosure. 
The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the instant disclosure. Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.” 17114538 meta platforms, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:37AM Apr 27th, 2022 08:37AM Facebook Technology Software & Computer Services
nasdaq:fb Facebook Apr 26th, 2022 12:00AM Nov 16th, 2020 12:00AM https://www.uspto.gov?id=US11316813-20220426 Method and system for presenting a subset of messages in a modular inbox Exemplary embodiments relate to improvements in the design of a messaging inbox. The inbox may display different units or “modules” for providing a user with quick access to different inbox functionalities. After a number of recent or unread messages are shown in the inbox's initial interface, the messages end and are replaced with modules. A threshold may be defined for the number of recent/unread messages to display before transitioning to modules. The threshold may be determined dynamically based on a minimum and/or maximum number of messages to display. The determination may be based on the current time, whether there is an active conversation in a thread, whether there are unread messages in a thread, etc. The determination may vary from user to user based, for example, on whether the user is a relatively active user, and/or how the user has used the messaging application in the past. 11316813 1. A method, comprising: presenting an inbox interface for a messaging application comprising: a first portion displaying a first set of message conversations from the inbox, the first set comprising one or more of recent message conversations that have been active within a threshold period of time or unread message conversations that include messages deemed unread by the messaging application; a second portion displaying one or more functional modules; and a third portion displaying a second set of message conversations comprising remaining message conversations in the inbox not displayed in the first portion; dynamically determining a minimum number of message conversations and a maximum number of conversations for display in the first set displayed in the first portion; selecting messages for the first set based on the minimum and maximum number of conversations; and displaying the selected messages in the first portion of the inbox interface. 2. The method of claim 1 wherein: the upper threshold is set based on an activity level of a user of the inbox; the activity level is determined as being high if the user has participated in a predetermined number of message conversations within a predefined recent period of time, and low otherwise; the upper threshold is set higher when the user has a high level of activity; and the upper threshold is set lower if the user has a lower level of activity. 3. The method of claim 1 wherein: the upper threshold is set to the number of active message conversations in the inbox; and a message conversation in the inbox is active if a message in the message conversation has been received within a predetermined recent period of time. 4. The method of claim 3 wherein the upper threshold is set to the number of active message conversations in the inbox plus the number of unread messages in the inbox. 5. The method of claim 1 wherein: the upper threshold is based on a frequency with which a user interacts with the messaging application; and the upper threshold is set higher if the user interacts more frequently and lower if the user interacts less frequently. 6. The method of claim 1 wherein: the upper threshold is set based on time of day; and the upper threshold is set higher during daytime hours and lower during nighttime hours. 7. 
The method of claim 1 wherein: the upper threshold is set to a number of message conversations which have been active during a predetermined recent time period; and a message conversation in the inbox is active if a message in the message conversation has been sent or received within the predetermined recent period of time. 8. A non-transitory, computer-readable medium storing instructions configured to cause one or more processors to: receive a plurality of messages in an inbox of a messaging application; receive an instruction to present an inbox interface of the messaging application comprising: a first portion displaying a first set of message conversations from the inbox, the first set comprising one or more of recent message conversations that have been active within a threshold period of time or unread message conversations that include messages deemed unread by the messaging application; a second portion displaying one or more modules providing access to features of the messaging application different from message conversation presentation features; a third portion displaying a second set of messages or message conversations comprising remaining message conversations in the inbox not displayed in the first portion; and dynamically determine a minimum number of message conversations and a maximum number of conversations for display in the first set displayed in the first portion; select messages for the first set based on the minimum and maximum number of conversations; and display the selected messages in the first portion of the inbox interface. 9. The medium of claim 8 wherein: the upper threshold is set based on an activity level of a user of the inbox; the activity level is determined as being high if the user has participated in a predetermined number of message conversations within a predefined recent period of time, and low otherwise; the upper threshold is set higher when the user has a high level of activity; and the upper threshold is set lower if the user has a lower level of activity. 10. The medium of claim 8 wherein: the upper threshold is set to the number of active message conversations in the inbox; and a message conversation in the inbox is active if a message in the message conversation has been received within a predetermined recent period of time. 11. The medium of claim 10 wherein the upper threshold is set to the number of active message conversations in the inbox plus the number of unread messages in the inbox. 12. The medium of claim 8 wherein: the upper threshold is based on a frequency with which a user interacts with the messaging application; and the upper threshold is set higher if the user interacts more frequently and lower if the user interacts less frequently. 13. The medium of claim 8 wherein: the upper threshold is set based on time of day; and the upper threshold is set higher during daytime hours and lower during nighttime hours. 14. The medium of claim 8 wherein: the upper threshold is set to a number of message conversations which have been active during a predetermined recent time period; and a message conversation in the inbox is active if a message in the message conversation has been sent or received within the predetermined recent period of time. 15. 
An apparatus comprising: a non-transitory computer-readable medium storing a messaging application; and a processing component configured to: receive a plurality of messages in an inbox of the messaging application; receive an instruction to present an inbox interface of the messaging application comprising: a first portion displaying a first set of message conversations from the inbox, the first set comprising one or more of recent message conversations that have been active within a threshold period of time or unread message conversations that include messages deemed unread by the messaging application; a second portion displaying one or more modules providing access to features of the messaging application different from message conversation presentation features; and a third portion displaying a second set of message conversations comprising remaining message conversations in the inbox not displayed in the first portion; dynamically determine a minimum number of message conversations and a maximum number of conversations for display in the first set displayed in the first portion; select messages for the first set based on the minimum and maximum number of conversations; and display the selected messages in the first portion of the inbox interface. 16. The apparatus of claim 15 wherein: the upper threshold is set based on an activity level of a user of the inbox; the activity level is determined as being high if the user has participated in a predetermined number of message conversations within a predefined recent period of time, and low otherwise; the upper threshold is set higher when the user has a high level of activity; and the upper threshold is set lower if the user has a lower level of activity. 17. The apparatus of claim 15 wherein: the upper threshold is set to the number of active message conversations in the inbox plus the number of unread messages in the inbox; and a message conversation in the inbox is active if a message in the message conversation has been received within a predetermined recent period of time. 18. The apparatus of claim 15 wherein: the upper threshold is based on a frequency with which a user interacts with the messaging application; and the upper threshold is set higher if the user interacts more frequently and lower if the user interacts less frequently. 19. The apparatus of claim 15 wherein: the upper threshold is set based on time of day; and the upper threshold is set higher during daytime hours and lower during nighttime hours. 20. The apparatus of claim 15 wherein: the upper threshold is set to a number of message conversations which have been active during a predetermined recent time period; and a message conversation in the inbox is active if a message in the message conversation has been sent or received within the predetermined recent period of time. 20 RELATED APPLICATIONS This application is a continuation of, and claims the benefit of priority to, previously filed U.S. patent application Ser. No. 15/272,367, titled “MESSAGING AND SYSTEM FOR PRESENTING IN A MODULAR INBOX,” filed Sep. 21, 2016, which is hereby incorporated by reference in its entirety. BACKGROUND Messaging systems, such as instant messaging systems and short message service (“SMS”) systems, allow users to communicate with each other by exchanging messages. Messaging services may also provide capabilities beyond exchanging messages, but in many cases the user may not be aware of the additional capabilities or how to use them. 
In some situations, the additional capabilities may be relatively difficult to locate in a messaging application, or their use may be non-intuitive. As a result, these additional capabilities may be underutilized. Moreover, users of the messaging service who might be relatively active users if they were aware of the additional capabilities may instead become less active. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1A depicts an exemplary messaging interface including several types of individual and group messages; FIG. 1B depicts an exemplary message composition interface; FIG. 1C depicts an example of selecting a recipient of a message in a messaging interface; FIG. 1D depicts an example of selecting a group of recipients of a message in a messaging interface; FIG. 2A depicts an exemplary interface for a messaging inbox including a first portion displaying a first set of messages, a second portion displaying one or more modules, and a third portion displaying a second set of messages; FIG. 2B depicts an exemplary interface for a messaging inbox in which a second set of messages is displayed in the first portion of the interface including the first set of messages; FIG. 2C depicts an example of a top contacts module; FIG. 2D depicts an example of a people/states module; FIG. 2E depicts an example of a new behavior module; FIG. 2F depicts an example of a live videos module; FIG. 2G depicts an example of an events module; FIG. 2H depicts an example of a businesses module; FIG. 2I depicts an example of a messaging stickers module; FIG. 2J depicts an example of a wireless network finder module; FIG. 2K depicts an example of a transportation services module; FIG. 3 is a flowchart depicting an exemplary process for displaying an inbox interface including one or more modules; FIG. 4A depicts an example of a sharing module for sharing local content; FIG. 4B depicts an example of a sharing module for sharing social networking content; FIG. 4C depicts an exemplary interface for selecting content to be shared; FIG. 4D depicts an exemplary interface for selecting a group of recipients to receive the content selected in FIG. 4C; FIG. 4E depicts an exemplary interface for confirming the sending of the content to the recipients; FIG. 4F depicts an exemplary inbox after receiving the content shared in FIGS. 4C-4E; FIG. 4G is a flowchart depicting an exemplary process for sharing content from a module; FIG. 5A depicts an example of a promotional material module; FIG. 5B depicts an example of promotional material integrated into a non-promotional material module; FIG. 5C depicts an example of a message generated in response to interacting with promotional material; FIG. 5D is a flowchart depicting an exemplary process for providing promotional content in a module; FIG. 6 is a flowchart depicting an exemplary process for determining a transition point between a first group of messages and a set of one or more modules; FIG. 7A is a block diagram providing an overview of a module ranking framework; FIG. 7B is a flowchart depicting an exemplary process for determining an inter-module rank; FIG. 7C is a flowchart depicting an exemplary process for determining an intra-module rank; FIG. 8A is a block diagram providing an overview of a system including an exemplary centralized messaging service; FIG. 8B is a block diagram providing an overview of a system including an exemplary distributed messaging service; FIG. 8C depicts the social networking graph of FIGS. 8A-8B in more detail; FIG. 
9 is a block diagram depicting an example of a system for a messaging service; FIG. 10 is a block diagram illustrating an exemplary computing device suitable for use with exemplary embodiments; FIG. 11 depicts an exemplary communication architecture; and FIG. 12 is a block diagram depicting an exemplary multicarrier communications device. DETAILED DESCRIPTION Messaging applications may provide an inbox that allows a user to read and send messages. However, a messaging application may provide additional functionality beyond reading and sending messages, such as playing games, sending money to a friend, viewing which other users are online to spur a conversation, etc. If these additional functions are hidden in menus or accessed through special commands or gestures, many users will rarely or never use them. The users may not know that this functionality exists, or accessing the functionality may prove too cumbersome to encourage regular use. Because this functionality can spur increased use of the messaging application, administrators of the messaging service may wish to encourage its use. In a messaging inbox displaying messages, it is often the case that some messages are more important or valuable to a user than others. For example, it may be highly likely that a user will wish to access recent messages and unread messages, whereas relatively stale messages or messages that have been previously read are less likely to be accessed. According to exemplary embodiments, these insights are combined to provide a modular inbox that encourages use of the messaging service's full functionality, while also providing ready access to the user's most important or valuable messages. In the modular inbox, the inbox may be divided into different inbox units or modules. The modules may provide a user with quick and convenient access to different inbox functionalities that the user might not otherwise be aware of (or inclined to use on a regular basis). After a number of messages are shown in the inbox's initial display, the messages end and are replaced with modules. The number of messages to display before switching to modules (referred to herein as a messaging cliff) is determined based on a minimum threshold and a dynamic maximum threshold that can differ from user to user and with context (e.g., the time of day). The point where the inbox transitions from the recent messages to the modules (the messaging cliff) may be determined statically or dynamically. A static determination may involve transitioning to the modules after a predetermined number of recent messages. A dynamic determination may be made based on, for example, selecting a number of messages to display that falls within a minimum number of messages and a maximum number of messages. The minimum may be, for example, 4-8 messages and may be based on a predetermined minimum message threshold. The maximum number of messages may be dynamic, and may be based on the current time of day, the number of message threads in which the user is participating in an active conversation, or the number of unread messages in the user's inbox (e.g., the cliff may be set to display all unread messages within a certain time frame, which may include gaps where read messages have been filtered out). The maximum number of messages displayed may be different for different users. For example, a power user may receive a large number of messages over a short time frame. 
Such a user might see messages only from the very recent past but might have a higher threshold for the number of messages to display. An infrequent user, on the other hand, might see fewer messages over a longer time frame. User activity may be determined based on historical usage of the messaging application or messaging service. The modules section of the inbox differs from the message display section of the inbox in that the modules are primarily configured for functionality other than the displaying of messages or message threads. Modules may provide different kinds of functionality, such as showing active users, suggesting new activities, or making it easy to share content from a device (e.g., through the device's camera roll or photo album), a social networking service, or another source. The modules may include modules for sharable articles/videos/pictures that allow a user to select content to be provided to other messaging service users. Modules can also include, or can be, advertisements. Further embodiments provide modules relating to the sharing of content from a social networking service associated with the messaging service. For example, some modules may allow a user to share articles, videos, or pictures from the social networking service. Exemplary interfaces simplify the sharing procedure by providing content recommendations, which may be retrieved from the social networking service based on consumption information. Alternatively or in addition, the content may be retrieved from a location outside the social networking service, or from multiple sites. Content items within a module can be ranked to determine the order in which to display them within the module. The content may be ranked based on a number of metrics, such as recency of access, interaction time, and/or an enjoyment metric personalized to a given user. The intra-module content ranking scheme may be defined on a module-by-module basis. Content may be displayed in the sharing module based on the rank. In addition to (or instead of) ranking the content within a module, the modules may also be ranked against each other to determine the order in which to show the modules. Inter-module ranking may be determined based on ranking metrics, such as the user's estimated interest in the module, the estimated interest in the module among a user base of the messaging service, and a value of displaying the module to the messaging service or an associate of the messaging service. Inter-module ranking may be determined at a server communicating with a client device, although the inter-module ranking scheme may also be extensible with offline models. The intra-module content ranking may be used to affect the inter-module ranking. For example, if a particular content item in a low-ranked module is determined to be particularly pertinent or exciting (e.g., an article is currently being viewed by a large number of people on a social network), then this may cause the content item's module to be elevated in the inter-module ranking. In some circumstances, the module may even be elevated above the messages displayed in the first section of the inbox. When sharing content, the module may suggest a group of recommended recipients. The group may include members selected based on the content (e.g., who is considered the most likely to enjoy the content) and/or metrics based on users with whom the sharing user has historically shared similar content. 
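By way of illustration only, the following minimal sketch (in Python) shows one way a recommended recipient group might be scored from these two signals. Every name, weight, and data shape below is a hypothetical editorial assumption; the embodiments describe the behavior, not any particular model.

    from dataclasses import dataclass

    @dataclass
    class Contact:
        name: str
        content_affinity: float  # hypothetical: predicted interest of this contact in the selected content (0.0-1.0)
        similar_shares: int      # hypothetical: times the sender previously shared similar content with this contact

    def recommend_recipients(contacts, max_suggestions=5, affinity_weight=0.7, history_weight=0.3):
        # Blend predicted content affinity with the sender's sharing history and
        # return the highest-scoring contacts as the suggested recipient group.
        if not contacts:
            return []
        max_shares = max(c.similar_shares for c in contacts) or 1
        def score(c):
            history = c.similar_shares / max_shares  # normalize history to 0.0-1.0
            return affinity_weight * c.content_affinity + history_weight * history
        return sorted(contacts, key=score, reverse=True)[:max_suggestions]

A deployed system would presumably derive both signals from the social graph and messaging history discussed in connection with FIGS. 8A-8C, rather than from fixed weights as above.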
When sharing local or social networking media, exemplary embodiments provide simple low-friction ways to share the material. In general, the system may provide an interface displaying content recommendations and a list of suggested users with whom the content may be shared. Upon selecting the content and a list of target users, the original user may simply press send, and the messaging service will automatically generate suitable messages and/or message threads to share the content. Still further, some modules may be used to deliver promotional materials. Promotional material may be in the form of promotional content items, such as individual offers or advertisements. The promotional content items may be presented in a dedicated module (e.g., a promotional materials module), and/or may be integrated into other modules (e.g., providing a promotional content item among the shareable articles in an articles module). In some embodiments, a business may purchase a higher ranking for their promotional content item to allow the promotional content item to be displayed earlier in the list of modules or within a given module. Interacting with the promotional content items may cause a new message or thread to be delivered to the user's inbox. Such a message may include a code that offers a discount when scanned in a retail location. The messages and/or promotional content items may be generated based on proximity. In one example, promotional content items may be displayed based on a user's affinity, and interacting with a promotional content item may allow a user to claim an offer from a provider. When the user's device is identified at a location proximate to a retail location for the provider, the user may be sent a message prompting the user to enter the retail location and scan the code to receive a discount. In some embodiments, sponsored promotional content items (e.g., advertisements) may be distinguished from discount offers, which are generally perceived to be purely beneficial and thus may be better tolerated by users in certain circumstances. Different types of promotional content items may be presented in different ways, and interacting with different types of promotional content items may produce different results. For example, purely sponsored promotional content items may be displayed in an advertising-specific module, whereas discount items may be presented among other content items in a different module. In some cases, purely sponsored promotional content items may be displayed among other content items (e.g., when the user is determined to be in a location proximate to a provider of the promotional content item). In another example, interacting with a discount offer may cause an interface to be presented allowing the user to share the discount offer with their friends (e.g., to encourage the friends to go to a coffee shop together), whereas interacting with a promotional content item containing an advertisement might open a message thread with the sponsor of the promotional content item. After scrolling through the recent messages and reaching the cliff, the user may scroll through the modules. When the available modules are exhausted (or after displaying a predetermined or dynamically determined number of modules), the inbox may transition back to older unread messages. Alternatively or in addition, older or unread threads may be collapsed into the top section, before the cliff. 
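To make the overall layout concrete, the following minimal sketch (in Python) assembles the three inbox portions around the messaging cliff described above. The flat thread list, the cliff index, and the collapsed-header representation are illustrative assumptions; the embodiments do not prescribe a data model.

    def build_inbox_layout(threads, modules, cliff_index, collapse_remainder=False):
        # Split the ordered thread list at the messaging cliff and place the
        # module section between the two message sections. If collapse_remainder
        # is set, the post-cliff threads are instead folded into the first
        # portion behind a collapsible header, as in the alternative above.
        layout = {
            "first_portion": list(threads[:cliff_index]),  # recent/unread threads, before the cliff
            "second_portion": list(modules),               # non-message modules
            "third_portion": list(threads[cliff_index:]),  # remaining threads
        }
        if collapse_remainder:
            layout["first_portion"].append(("collapsed_header", layout["third_portion"]))
            layout["third_portion"] = []
        return layout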
As an aid to understanding, a series of examples will first be presented before detailed descriptions of the underlying implementations are described. It is noted that these examples are intended to be illustrative only and that the present invention is not limited to the embodiments shown. Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. However, the novel embodiments can be practiced without these specific details. In other instances, well known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives consistent with the claimed subject matter. In the Figures and the accompanying description, the designations “a” and “b” and “c” (and similar designators) are intended to be variables representing any positive integer. Thus, for example, if an implementation sets a value for a=5, then a complete set of components 122 illustrated as components 122-1 through 122-a may include components 122-1, 122-2, 122-3, 122-4, and 122-5. The embodiments are not limited in this context. Messaging Overview A general overview of messaging techniques is now described. Users may interact with a messaging system through a client application. FIG. 1A depicts an example of a client application displaying a messaging interface 100. The messaging interface 100 of FIG. 1A shows an exemplary summary screen that provides an overview of messages recently sent to (or by) the user of the client application. Messaging systems may support a variety of different types of messages. For example, the messaging interface 100 includes a summary of a one-to-one (or individual) message 102. A one-to-one message is a message exchanged between two entities, so that only the two entities can see and participate in the conversation. For example, in the one-to-one message 102, the current user (Jack Doe) recently received a message from his wife, Jane Doe. The other participant in the conversation is indicated in the interface 100 using an identifier 104 (including a name and profile picture, in this example). Only Jack and Jane participate in the conversation, and only Jack and Jane can view the conversation. Another message type supported by the messaging system is a group conversation. In a group conversation, multiple users see and participate in the conversation. FIG. 1A depicts an exemplary summary of a group conversation 106. In the summary of the group conversation 106, each of the other users participating in the conversation is indicated by respective identifiers 108. In this case, the identifiers include the names or handles of the other users participating in the group conversation, and an icon to indicate that the conversation is a group conversation. For example, in this case the current user (Jack) is participating in a conversation with his friends Ben and Alex. Jack, Ben, and Alex can each see all of the messages in the conversation (regardless of who sent the message) and can send messages to the group. Another type of message supported by the messaging system is a message between one or more users and an organization (such as a business) or event. For example, FIG. 
1A shows an event message 110 sent by the current user (Jack) to the page of an event being organized through a social network. The identifier 112 identifies the name of the event, and an icon is presented identifying that this particular event is a concert. In an event message 110, all participants in the event (as a participant is defined, e.g., by the event's social networking page) can view and send event messages 110. Participants may include, for example, people attending the event, fans of the event that have signed up with the event's page to receive messages about the event, event organizers, etc. By selecting an existing message summary 102, 106, 110, the user can view messages in an existing conversation and add new messages to the conversation. Moreover, the interface 100 includes interface elements 114 allowing the user to create a new message. For example, FIG. 1B depicts an interface 116 displayed by the messaging client application in response to receiving a selection of the "compose" interface element 114. A "new message" window is displayed in the interface 116. The new message window includes a recipient field 118 for allowing the user to manually enter identifiers for one or more recipients. If available, the user's contacts list 120 may also be displayed in the interface 116 in order to simplify the selection of the recipients. In the example of FIG. 1C, the user has entered the identifier of a recipient in the recipient field 118. In order to indicate the recipient's inclusion in the recipients list, a selection indication 122 is displayed on the recipient's icon in the contacts list 120. It is possible to select more than one recipient in the interface 116 in order to create a group message, e.g., by manually adding multiple recipients in the recipient field 118, selecting multiple contacts in the contacts list 120, or a combination of methods. FIG. 1D depicts an example of such a group selection. This brief summary is intended to serve as a non-limiting introduction to the concepts discussed in more detail below, in connection with FIGS. 2-8. However, before discussing further exemplary embodiments, a brief note on data privacy is first provided. A more detailed description of privacy settings and authentication will be provided in connection with the following Figures. A Note on Data Privacy Some embodiments described herein make use of training data or metrics that may include information voluntarily provided by one or more users. In such embodiments, data privacy may be protected in a number of ways. For example, the user may be required to opt in to any data collection before user data is collected or used. The user may also be provided with the opportunity to opt out of any data collection. Before opting in to data collection, the user may be provided with a description of the ways in which the data will be used, how long the data will be retained, and the safeguards that are in place to protect the data from disclosure. Any information identifying the user from which the data was collected may be purged or disassociated from the data. In the event that any identifying information needs to be retained (e.g., to meet regulatory requirements), the user may be informed of the collection of the identifying information, the uses that will be made of the identifying information, and the amount of time that the identifying information will be retained. 
Information specifically identifying the user may be removed and may be replaced with, for example, a generic identification number or other non-specific form of identification. Once collected, the data may be stored in a secure data storage location that includes safeguards to prevent unauthorized access to the data. The data may be stored in an encrypted format. Identifying information and/or non-identifying information may be purged from the data storage after a predetermined period of time. Although particular privacy protection techniques are described herein for purposes of illustration, one of ordinary skill in the art will recognize that privacy may be protected in other manners as well. Further details regarding data privacy are discussed below in the section describing network embodiments. Assuming a user's privacy conditions are met, exemplary embodiments may be deployed in a wide variety of messaging systems, including messaging in a social network or on a mobile device (e.g., through a messaging client application or via short message service), among other possibilities. An overview of a messaging system is now provided. Modular Inbox Overview FIG. 2A depicts an exemplary interface 200 for a messaging inbox. The interface includes a first portion 202 for displaying a first set of messages 204, a second portion 206 displaying one or more modules 208, and a third portion 210 displaying a second set of messages 212. Although FIG. 2A depicts each of the first portion 202, the second portion 206, and the third portion 210 together on a single screen of the device, any or all of the interface portions may extend to or beyond a single device screen. The user may navigate through the interface 200, for example by scrolling. The first set of messages 204 may be the most relevant or recent messages, whereas the second set of messages 212 may be less relevant or recent messages. The transition point between the first set of messages 204 and the second set of messages 212 is referred to herein as a message cliff. Techniques for determining which messages to include in the first set of messages 204, and which to include in the second set of messages 212, are covered in detail in the section below addressing The Cliff. Scrolling through the interface 200 may cause the interface 200 to expose or display the first portion 202, the second portion 206, and the third portion 210, in turn. For example, the display may first render the first portion 202 of the interface 200. If the messages 204 of the first portion 202 do not occupy all of the display space available to the interface 200, then at least a part of the second portion 206 may also be displayed (as much as will fit in the space left over after the first portion 202 is displayed). If the messages 204 of the first portion 202 occupy more space than is immediately available to the interface 200, then scrolling through the interface 200 may cause additional messages in the first set of messages 204 to be displayed, until the user reaches the messaging cliff. If the user continues to scroll after reaching the messaging cliff, then the second portion 206 including the modules 208 may be displayed. The user may continue to scroll through the modules 208 until the available modules are exhausted. At this point, the interface 200 may begin to display the third portion 210 including the second set of messages 212. In the example of FIG. 
2A, the messaging inbox displays messages up to a certain point (the first set of messages 204), then modules 208, and then the second set of the user's messages 212 (in the third portion 210 of the interface 200). As an alternative to, or in addition to, this approach, some or all of the second set of messages 212 (messages falling after the messaging cliff) may be collapsed into the first portion 202 of the interface 200. For example, FIG. 2B depicts an exemplary interface 214 for a messaging inbox in which a second set of messages is displayable in the first portion 202 of the interface 200, which also includes the first set of messages 204. In the interface 214, the second set of messages may be initially hidden behind a collapsible heading 216 having an expandable interface element 218. Interacting with the expandable interface element 218 may cause the second set of messages to be displayed. As further depicted in FIG. 2B, in addition to scrolling through the interface 200 as a whole, individual modules 208 may also be scrollable. The modules 208 may present a predetermined amount of content or options. For example, a videos module 220 may display a number of videos from a social networking service that the system considers the current user most likely to share with friends. If the system determines that the amount of content can fit on the display of the user's device (e.g., based on the screen width and resolution available to the interface 200), then the content may be entirely displayed within the width of the interface, in a section reserved for the module 208. On the other hand, if the amount of content does not entirely fit on the display (or if the module 208 allows for a potentially unlimited amount of content), then the system may provide options for allowing the user to scroll through the videos module 220. The system may present an option to scroll horizontally through the content (e.g., by allowing a user to gesture on a touch display with a left- or right-swipe), vertically through the content (e.g., by allowing a user to gesture on the touch display with an up- or down-swipe within an area reserved for the module), or both. In the depicted example, a horizontal scroll bar 222 shows the user's progress through the available content. Exemplary Modules The second portion 206 of the interface 200 may include multiple different sections providing different types of modules 208. Exemplary modules 208 are discussed below, although it is contemplated that additional types of modules may also be used. Exemplary embodiments may display some or all of the different types of modules 208 in the second portion 206 of the interface 200. The modules 208 to be displayed may be selected based on the user's sharing history or interactions with a social networking service; those modules that the social networking service considers most likely to be useful to the user may be selected for display. FIG. 2C depicts an example of a top contacts module 224. The top contacts module 224 may include a list of top contacts 226 from the messaging inbox owner's contacts list. 
The top contacts 226 may be selected from the contacts list based on, for example, the inbox owner's messaging history (e.g., which users the inbox owner has messaged most frequently, most recently, at certain times of the day, etc.), and/or other metrics such as an affinity of the inbox owner for the contact, a proximity of the contact to the inbox owner, or context-sensitive information such as a current or future change in location. For example, if an out-of-town contact is traveling to the inbox owner's location, then the out-of-town contact may be displayed in the top contacts module 224, even if they would not otherwise be included based on other metrics. In another embodiment, the contacts may be selected based on properties of the contact (such as whether it is the contact's birthday). In yet another example, the system may add contacts to the list if the contact has not been messaged by the inbox owner for more than a predetermined period of time, particularly if the contact was someone that the inbox owner previously contacted often (e.g., to encourage the inbox owner to message the contact). Explanatory information (e.g., "User X will be visiting City Y from Date A to Date B") may be displayed in the top contacts module 224. A predetermined number of top contacts 226 may be selected for display in the top contacts module 224. Optionally, the predetermined number may be user-configurable so that the user may specify how many top contacts to display. If the number of contacts displayed in the top contacts module 224 is too great to fit within the space reserved for the top contacts module 224 on the display, then the top contacts module 224 may be scrollable (e.g., with a horizontal scroll). FIG. 2D depicts an example of a people/states module 228. The people/states module may display a list of the user's contacts who are in one or more selected states. For example, the people/states module 228 may include a list of contacts 230 that are currently online, or who have recently participated in a conversation. The people/states module 228 may include an indicator 232 that indicates the current state of the associated contact 230. In the depicted example, the indicator 232 is a color-coded dot that changes color to reflect whether the contact 230 is online or away. The list of contacts in the people/states module 228 may be dynamically updated as contacts in the user's contacts list change their states, with contacts being added or removed from the people/states module. The people/states module may be configurable to allow the user to select how many contacts should be displayed in the people/states module, and in which state(s) the user is interested. If the people/states module 228 is configured to display a specific number of contacts that is less than the total number of contacts that match the selected states, then the people/states module 228 may apply one or more relevancy metrics to determine which contacts to display (e.g., contacts most recently messaged, contacts messaged at the highest frequency, etc.). The relevancy metrics may include the metrics described above in connection with the top contacts module. FIG. 2E depicts an example of a new behavior module 236. The new behavior module 236 presents a list of behaviors or activities that the inbox owner could engage in on the messaging service, but in which the inbox owner has not (or has not recently) engaged. 
In the depicted example, the user is presented with the content option 238-1 to chat with a bot representing a company that the user likes or is predicted to like. The new behavior module 236 may also or alternatively suggest behaviors in which the user does engage in some contexts, and in which the messaging service determines the user would be likely to engage if presented with the opportunity. For example, the user may have previously joined an interest group (e.g., "Rock Climbers of Springfield"), and the messaging service may determine that the user would be likely to want to join a related group. In the depicted example, the user is presented with a content option 238-2 to join the "Rock Breakers" group, representing another group of local rock climbers. In a further example, the user may have previously participated in hangouts or online gatherings; when a celebrity or public figure begins a new hangout or gathering, the user may be presented with an option 238-3 to join the gathering in the new behavior module 236. In another example, if new functionality is added to the messaging service, then the new behavior module 236 may suggest that the inbox owner try the new functionality. One example of a new behavior is engaging in a video conversation: if the inbox owner has not previously engaged in a video conversation, but instead has always engaged in text messages, then the new behavior module 236 may suggest that the inbox owner initiate a video call with another user. FIG. 2F depicts an example of a live videos module 240. The live videos module 240 may present content options 242-1, 242-2 representing live video streams currently being transmitted by other users of the messaging service (or an associated social networking service). The live videos module 240 may also present a content option 242-3 for allowing the inbox owner to create a live video stream from their device. FIG. 2G depicts an example of an events module 244. The events module 244 may display a list of upcoming events 246 that the inbox owner and/or the inbox owner's contacts are scheduled to attend. Alternatively or in addition, the events module 244 may display a list of events 246 in which the messaging service or social networking service has determined the inbox owner may be interested (e.g., based on the inbox owner's interests as indicated through the inbox owner's interactions on a social network, or based on the interests of the inbox owner's contacts). FIG. 2H depicts an example of a businesses module 248. The businesses module 248 may display a list of businesses 250 with which the inbox owner and/or the inbox owner's contacts have previously interacted. Alternatively or in addition, the businesses module 248 may display a list of businesses 250 in which the messaging service or social networking service has determined the inbox owner may be interested (e.g., based on the inbox owner's interests as indicated through the inbox owner's interactions on a social network, or based on the interests of the inbox owner's contacts). FIG. 2I depicts an example of a messaging stickers module 252. The messaging service may allow the inbox owner to add graphics referred to as stickers to a message. These stickers may be downloaded from the messaging service, a social networking service, or another site. The stickers module 252 may display a list of stickers 254 in which the inbox owner may have an interest. 
The inbox owner may interact with the stickers module 252 in order to download or flag stickers for use in future messages. Upon selecting one or more of the stickers 254-i, the selected stickers may be downloaded and added to the user's local library for future use. FIG. 2J depicts an example of a wireless network finder module 256. The wireless network finder module 256 may display a list of available wireless networks 258 in proximity to the device of the inbox owner. The wireless network finder module 256 may also display information about the networks, such as the network name, the entity that provides or manages the network, wireless signal strength, and whether and how the network is secured. The inbox owner may select one of the wireless networks through the wireless network finder module 256 in order to connect to the wireless network. FIG. 2K depicts an example of a transportation services module 260. The transportation services module 260 presents options for allowing the inbox owner to secure transportation, either immediately or at a scheduled time. The transportation services module 260 may display a list of transportation services 262 that are available in an area proximate to the inbox owner (as determined, for example, by the location of the inbox owner's mobile device). The transportation services may include, for example, taxi services, ride sharing services, public transportation, etc. The transportation services module 260 may connect to an application on the user's device, or to an internet site, for securing the transportation services. The transportation services module 260 may connect to a social networking page associated with a transportation service and may allow the user to communicate with the transportation service's page (e.g., through bot interaction). The transportation services module 260 may, alternatively or in addition, display a calendar allowing transportation services to be scheduled for a future time. Any or all of the above-described modules may be displayed in the inbox interface (as well as additional modules described below). FIG. 3 is a flowchart depicting an exemplary process 300 for displaying an inbox interface including one or more modules. At block 302, a messaging application may receive an instruction to display an inbox interface for a messaging service associated with the messaging application. Block 302 may occur, for example, in response to starting up the messaging application or in response to receiving an instruction to access a home screen or inbox screen in the messaging application. At block 304, the inbox interface may be generated and a first set of messages may be displayed in a first portion of the inbox interface. The first portion of the inbox interface may provide thread display functionality, in which message threads are displayed. The message threads may be summarized in the first portion of the inbox interface (e.g., by displaying a thread's most recent message, or a representative message of the thread, and/or a list of participants in the thread). Interacting with one of the message threads may cause the message thread to be expanded so that an exchange between two or more thread participants may be viewed. In some embodiments, the first portion of the interface may be provided as a module specifically dedicated to message or thread display functionality. 
The first set of messages displayed in the first portion of the inbox interface may include a subset of the totality of the threads or messages available to the inbox owner, as determined based on a messaging cliff (discussed in more detail below). For example, the first set of messages may include a set of unread messages or a set of recent messages received within a predetermined amount of time. In some embodiments, messages that are not selected for inclusion in the first set of messages may be collapsed into a message header and presented, e.g., at the end of the first set of messages. At block 306, the inbox interface may receive an instruction to navigate past the first portion of the inbox interface. For example, the messaging application may register a gesture on a touch screen corresponding to a scrolling gesture, where scrolling the interface in accordance with the gesture would cause the interface to scroll beyond the final message or message thread in the first set of messages. Scrolling or navigation may be achieved in other ways as well, such as by interacting with a pointing device (e.g., a computer mouse), voice commands, etc. At block 308, the messaging application may cause a second portion of the inbox interface to be displayed. The second portion may include one or more modules, where the modules of the second portion provide access to functionality that is different from the message or thread display functionality of the first portion of the inbox interface. As the inbox interface is scrolled through, the second portion of the inbox interface may incrementally or immediately replace the first portion of the inbox interface as the inbox interface is scrolled. A list of the modules to display may be retrieved from a messaging server. The list may include a set of identifiers associated with each module to be displayed. Optionally, an entry in the list associated with each module may include further information, such as a type of the module, metadata such as a name of the module to be displayed, and optionally may include content items to be displayed. The content items returned with the list of the modules may be a null (empty) set, in which case the module may determine which content to display, either based on local content on the client device or remote content on a server associated with the module. The messaging server may determine a subset of available modules to assign to the user (e.g., based on the user's predicted affinity for the modules) and may provide the list of modules to a client device running the inbox owner's messaging application. The ordering of the modules in the second portion of the inbox interface may be determined by an inter-module ranking, as discussed in more detail below. In some cases, a module in the set of modules may be determined to be highly relevant, which may cause the module to be elevated in the inter-module ranking. In some embodiments, if it is determined that one of the modules in the set of modules is particularly relevant (e.g., above a certain relevancy threshold), then the module may be elevated even above the first portion of the interface (e.g., so that the module is displayed before the messages or message threads). For example, if a particular live video is being viewed by a significant number of the contacts of the inbox owner, then the live videos module may be elevated above the messages of the first portion of the interface. 
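As an illustrative aside, the elevation behavior just described might be sketched as follows (Python). The tuple shape, scores, and relevancy threshold are assumptions made for illustration; the actual ranking metrics are discussed in connection with FIGS. 7A-7C.

    def order_modules(modules, relevancy_threshold=0.9):
        # modules: iterable of (module_id, rank_score, relevancy) tuples (assumed shape).
        # Sort by inter-module rank, then separate out any module whose relevancy
        # clears the threshold so it can be displayed above the message portion.
        ordered = sorted(modules, key=lambda m: m[1], reverse=True)
        elevated = [m for m in ordered if m[2] >= relevancy_threshold]
        remaining = [m for m in ordered if m[2] < relevancy_threshold]
        return elevated, remaining

For example, order_modules([("live_videos", 0.4, 0.95), ("stickers", 0.8, 0.1)]) would elevate the live videos module above the first (message) portion, while the stickers module would lead the ordinary module section.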
The content within the modules may be retrieved from the messaging server associated with the messaging service. Alternatively or in addition, the content may be retrieved from separate servers associated with each module (e.g., each module may independently define and fetch its own content). At block 310, the inbox interface may receive an instruction to navigate past the second portion of the inbox interface. For example, the messaging application may register a gesture on a touch screen corresponding to a scrolling gesture, where scrolling the interface in accordance with the gesture would cause the interface to scroll beyond a final module of the set of modules in the second portion of the interface. Scrolling or navigation may be achieved in other ways as well, such as by interacting with a pointing device (e.g., a computer mouse), voice commands, etc. At block 312, the messaging application may optionally cause a second set of messages or message threads to be displayed in a third portion of the inbox interface. The third portion may be displayed after the second portion, and may incrementally or immediately replace the second portion of the inbox interface as the inbox interface is scrolled. The second set may include messages or message threads that were not flagged for inclusion in the first set of messages. Alternatively or in addition, some or all of the second set of messages or message threads may be collapsed in the first interface portion, as described above. Sharables Modules A number of module types have been discussed above. In addition to these modules, a particular type of module configured to share local or social networking content may also be provided. Sharables modules are described in detail with reference to FIGS. 4A-4G. FIG. 4A depicts an example of a local sharing module 402 for sharing local content. A number of local content items 404 may be displayed in the sharing module 402. The local content items 404 may be, for example, photos or videos from a local device on which the messaging application is running. The local sharing module 402 may display a predetermined or dynamically determined number of local content items 404 on the display. More local content items 404 may be accessible, for example using a horizontal scrolling technique or through an additional menu 406. Activation of the menu 406 may, for example, cause the local sharing module 402 to present an interface into the local storage of the device. The messaging application may, for example, access the photo album of the local device and suggest photos or videos that the inbox owner may be interested in sharing with their contacts. FIG. 4B depicts an example of a social media sharing module 408 for sharing social networking content. The social media sharing module 408 may retrieve content items 410 associated with the inbox owner from a social networking service. For example, the content items 410 may include media that has been uploaded by the inbox owner to the social networking service, media that the inbox owner has interacted with through the social networking service (e.g., content on which the inbox owner has commented, liked, etc.), or media that the social networking service determines that the inbox owner is likely to appreciate or enjoy. The content items 410 may include, for example, videos, pictures (e.g., GIFs), articles, etc. that have been uploaded to, or otherwise accessed from, the social networking service. 
Additional content items 410 may be available through a horizontal scrolling technique. Alternatively or in addition, a menu 412 may be provided for displaying additional content items 410. Activation of the menu 412 may, for example, cause the social media sharing module 408 to present an interface into the social networking service and display additional content items 410 available through the social networking service. FIG. 4C depicts an exemplary interface 414 for selecting content to be shared. The content may be local content as shown in the local sharing module 402 or content from a social networking service as shown in the social media sharing module 408. In the depicted example, the content items 416 in the interface 414 represent articles available on a social networking service. As shown in FIG. 4C, one or more of the content items 416 may be selected. The selected content items may be identified using an identifier 418, such as a checkmark in this example. Once one or more of the content items 416 are selected, a recipients interface may be displayed, as shown in FIG. 4D. FIG. 4D depicts an exemplary interface 420 for selecting a group of recipients 422 to receive the content selected in FIG. 4C. The recipients 422 displayed in the interface 420 may include users connected to or associated with the inbox owner through a social networking service or through the messaging service. For example, the recipients 422 may include recipients with whom the inbox owner has recently shared content items, or whom the inbox owner has recently messaged. The recipients 422 may be selected, at least in part, based on an identity of the content items 416 selected for sharing. For example, the social networking service may identify a subset of the inbox owner's contacts or friends on the social networking service who (based on their own content interaction history) the social networking service determines are likely to enjoy or appreciate the content items 416. To accomplish this, the social networking service may consult a social graph, as described in more detail below. Upon selecting one or more of the recipients 422, an indication 424 (such as a check box, in this example) may be displayed to indicate which recipients 422 have been selected. The messaging application may then display an interface for confirming the sending of the content to the recipients, such as the exemplary interface 426 depicted in FIG. 4E. The interface 426 may display the selected content items 416 and the selected recipients 422 from the interfaces 414, 420. Thus, the inbox owner may review the content to be distributed and the users with whom the content will be shared. Optionally, a prompt may be provided for allowing the inbox owner to add explanatory text when the content is sent. When the inbox owner is satisfied, a confirmation indicator 428 may be selected to confirm the transmission of the content. When the indicator 428 is selected, the messaging application may transmit the selected content items 416 to the selected recipients 422. FIG. 4F depicts an exemplary inbox interface 430 for one of the recipients 422, after receiving the content 416 shared in FIGS. 4C-4E. A new message thread or inbox item 432 may be created in the first section of the modular inbox (e.g., the section containing message or thread content). The inbox item 432 includes an identification 434 of the sender of the content, along with any explanatory text 436 added during the sending process. 
The shared content item 416 may be displayed in the message, and interacting with the shared content item 416 may cause the inbox interface 430 to display a larger version of the content item 416 (e.g., replacing a thumbnail of the content item 416 with a larger version), or to navigate to a location of the content item (e.g., taking the recipient to the recipient's social networking page, in the case of social networking content, or to a web site containing the content). FIG. 4G is a flowchart depicting an exemplary process 438 for sharing content from a module. At block 440, the messaging application may display a sharing module. The sharing module may be displayed in the second portion of the modular inbox, dedicated to non-message or non-message-thread display. In some embodiments, the module may be configured to share content from a social networking service with users of the messaging service. In other embodiments, the module may be configured to share local content from the local device with users of the messaging service. The module may be distinct from a portion of the inbox interface that provides message or message thread display functionality. At block 442, the messaging application may identify recommended content items for display. The sharing module may be a module for sharing a particular type of content, such as articles, videos, or pictures, and the messaging application may identify content of the type associated with the sharing module. The sharing module may define where content items for the sharing module may be found (e.g., a photo album or roll on the local device for a photos module, a social networking service for a videos module, etc.). The messaging application may retrieve a number of the content items (e.g., a number defined by the sharing module) from the identified location, and may select a recommended subset of the retrieved items for display. The recommended subset may be determined based on one or more metrics, which may be defined by the sharing module. For example, in the case of a sharing module for sharing photos, the recommended items may be the most recently captured photographs on the local device. In the case of a sharing module for sharing content from a social network, the metrics may be based on consumption information for the content items. For instance, the most consumed or interacted-with content items on the social network may be selected as recommended content items. The recommendation may also be based on a determination of which content items the inbox owner is most likely to wish to share (e.g., which content the inbox owner has interacted with recently, or is likely to enjoy based on the inbox owner's interaction history through a social network). At block 444, the content items identified in block 442 may be ranked. The content items may be ranked, for example, based on the recency of the content item, a predicted likelihood that the inbox owner would enjoy the content item, an amount of time spent watching the content by the inbox owner or users associated with the inbox owner, etc. One or more ranking metrics may be defined by the sharing module. After the content items have been ranked, the sharing module may display the content items in ranked order. At block 446, the sharing module may receive a selection of one or more content items to be shared. For example, the messaging application may register a touch at a location on a touch-sensitive display corresponding to the content item. 
The messaging application may register a selection in other ways, such as through a pointing device, voice commands, etc. The messaging application may update the display to show an indication (e.g., a check box) on the selected content item. At block 448, a recommended list of recipients may be identified and displayed. The recommended list of recipients may be a subset of the inbox owner's contacts through the messaging service or social networking service. The recommended list of recipients may be selected based at least in part on an identity of the content item. For example, a social networking service associated with the messaging service may be consulted to determine which users are most likely to enjoy the content item (e.g., based on the users' consumption history through the social network and/or based on the users' likes and dislikes as indicated through the social graph). In some embodiments, the content item may be associated with one or more users, who may be identified as recommended recipients. For example, a photograph may include the inbox owner and may also include another member of the messaging service or a social networking service. The other users in the photograph may be selected as recommended recipients. Alternatively or in addition, the messaging application may present an option for selecting a set of recipients not in the list of recommended recipients. For example, a menu item may be presented allowing the inbox owner to access their contacts list, and recipients may be selected from the contacts list. At block 450, the sharing module may register a selection of one or more recipients presented in block 448. For example, the messaging application may register a touch at a location on a touch-sensitive display corresponding to the recipient. The messaging application may register a selection in other ways, such as through a pointing device, voice commands, etc. The messaging application may update the display to show an indication (e.g., a check box) on the selected recipient. At block 452, the messaging application may present a prompt asking the sender to confirm transmission of the selected content item(s) to the selected recipient(s). The messaging application may optionally allow the sender to add a message to the content item for transmission to the recipients. Upon receiving confirmation of the sender's intent, at block 454 the messaging application may share the identified content with the identified recipients. For example, the messaging application may generate a message and/or message thread including the content item and the sender's message (if any). The messaging application may transmit the message to the recipients identified at block 450 using the messaging service. Promotional Material Delivery Another type of module and/or content item that may be employed in the modular inbox is a promotional material module or content item. FIGS. 5A-5D depict various examples of promotional material delivery. FIG. 5A depicts an example of a promotional material module 502. The promotional material module 502 may be a module dedicated to providing promotional material, such as sponsored items. Promotional content items 504 such as advertisements, offers, discounts, etc. may be displayed in the promotional material module. 
The promotional content items 504 may be selected dynamically, such as by determining a location of the client device and selecting promotional content items 504 associated with stores or shops located in close proximity to the client device. The exemplary promotional material module 502 of FIG. 5A includes two different types of promotional material. A first promotional content item 504-1 is a sponsored advertisement prompting a user to purchase a good or service. A second promotional content item 504-2 is a discount offer that provides the user with an opportunity to receive goods or services for free or at a discount. The messaging application may treat these different types of promotional content differently, such as by surfacing the promotional content items 504 in different ways depending on whether the content item is an advertisement or a discount offer. For example, advertisement content may be surfaced only when a user is in close proximity to the goods or services being advertised, whereas a discount offer may be displayed at any time. A user may interact with the promotional content item 504 associated with the discount offer to claim the discount offer and generate a message related to the discount offer. At a later time, the user can interact with the message in order to activate the discount at a retail location. In another embodiment, advertisements may be relegated to a dedicated promotions module 502, whereas discount offers may be interspersed among the content of other modules. For example, FIG. 5B depicts an example of promotional material integrated into a non-promotional material module 506. In this example, the module 506 is a sharing module for sharing articles from a social networking service. The content items of the sharing module include various sharable content items 508 in the form of articles. A promotional content item 504-2 in the form of a discount offer is provided among the sharable content items 508. The intra-module rank of the promotional content item 504-2 (defining where in the module the item 504-2 will appear) may depend on a level of sponsorship by the promoter of the promotional content item 504-2. In some embodiments, advertisements may also be interspersed in non-promotional material modules. For example, depending on an amount of sponsorship by the promoter, the promotional content items may be displayed higher in the intra-module ranked order, or may be displayed in a non-promotional content module. In another embodiment, the module in which sponsored content is provided may be elevated in an inter-module ranked order depending on the level of sponsorship. In some embodiments, a user may be presented with an option to share a discount offer with their friends, e.g., to encourage the user and their friends to gather together at a given location in order to claim the offer. As noted above, in some embodiments a user may interact with the promotional content item 504 associated with the discount offer to claim the discount offer and generate a message related to the discount offer. FIG. 5C depicts an example of a message 510 generated in response to interacting with promotional material. The message 510 may appear as any other message among the user's message threads, and may be treated as any other message content for purposes of the modular inbox. 
As noted above, in some embodiments a user may interact with the promotional content item 504 associated with the discount offer to claim the discount offer and generate a message related to the discount offer. FIG. 5C depicts an example of a message 510 generated in response to interacting with promotional material. The message 510 may appear as any other message among the user's message threads, and may be treated as any other message content for purposes of the modular inbox. The message 510 includes message text 512, generated by a promoter of the promotional content item 504, which describes the offer being sent. Optionally, the message 510 may also include a scannable code 514, such as a bar code or QR code, which a user may present at a retail location in order to claim the offer or discount. In the depicted example, scanning the code 514 at a retail location results in the user being able to claim a free coffee. FIG. 5D is a flowchart depicting an exemplary process 516 for providing promotional content in a module. At block 518, the messaging application may display a first set of messages in a first portion of an inbox interface for a messaging service. The first portion of the inbox interface may be dedicated to, or may primarily provide, message or thread display functionality. The first portion of the inbox may end at a messaging cliff, as described in more detail below. At block 520, the messaging application may receive an instruction to navigate past the first portion of the inbox interface. For example, the messaging application may register a gesture on a touch screen corresponding to a scrolling gesture, where scrolling the interface in accordance with the gesture would cause the interface to scroll beyond the final message or message thread in the first set of messages. Scrolling or navigation may be achieved in other ways as well, such as by interacting with a pointing device (e.g., a computer mouse), voice commands, etc. At block 522, the messaging application may select promotional material for display. For example, the messaging service may maintain a database or other storage of promotional material, and the messaging application may access the database in order to retrieve a number of candidate promotional materials. To determine which of the candidate promotional materials to display, the messaging application and/or a server of the messaging service may apply one or more metrics, such as: a history of interaction between the user and the promoter on the messaging service, a social networking service, or elsewhere; a determined likelihood that the user will find an offer from the promoter desirable; or a proximity of the user to a retail location associated with a promotion, among other possibilities. In an embodiment employing a proximity metric, block 522 may involve detecting a location of a mobile client of a user of the messaging service, and selecting the promotional material based on a proximity of the mobile client to a location associated with a source of the promotional material, such as when the location is determined to be within a predetermined distance from the source. According to some embodiments, block 522 may involve determining whether the candidate promotional material is an advertisement or a discount offer, and selecting a presentation technique for the promotional material accordingly, as described above.
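The proximity metric of block 522 amounts to a distance test between the mobile client and each promotion's source location. A minimal sketch follows; the great-circle (haversine) distance and the 1 km cutoff are illustrative choices, since the disclosure states only that the location must be within a predetermined distance from the source.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points, in meters."""
        r = 6371000.0  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def nearby_candidates(client_lat, client_lon, candidates, max_distance_m=1000.0):
        """Keep candidate promotions whose source is within the predetermined distance."""
        return [c for c in candidates
                if haversine_m(client_lat, client_lon, c["lat"], c["lon"]) <= max_distance_m]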
At block 524, the messaging application may display a module in a second portion of the inbox interface. The module may include some or all of the promotional material selected at block 522. The module may be distinct from the first portion of the inbox interface that provides the message or message thread display functionality. In some embodiments, the module may be configured to exclusively share promotional material with a user of the messaging service. In other embodiments, the module may be dedicated to other messaging service functionality, where the functionality is distinct from functionality for displaying promotional material. In other words, the promotional material may be integrated into other, non-promotional modules, with the promotional material being presented alongside other content of the module. For example, in the "People/States" module, the inbox interface may present a number of friends (e.g., the user's mother, the user's friend, etc.) alongside a promoted content item (e.g., a phone company's customer service representative, which could be a bot). The module containing the promotional content may be one among several modules that are presented in a ranked order. Depending on a level of sponsorship of the module containing the promotional content, the ranked order may be altered. For example, if the sponsors who provided promotional content in the module provide a sufficiently high level of sponsorship (above a threshold amount), the module may be elevated higher than other modules in the modular inbox. At block 526, the messaging application may receive a selection of promotional material. Depending on the actions defined by the promoter who provided the promotional material, a number of different actions may be taken. For example, interacting with the promotional material may take the user to a web page or social networking site associated with the promoter. In some embodiments, interacting with the promotional material may cause a message to be generated (block 528). Based on instructions provided with or associated with the promotional content item selected, message content (potentially including a scannable code) may be automatically generated and transmitted to the inbox owner through the messaging service. The inbox owner may be provided with an option to share the promotional content and/or the generated message with other users.
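The sponsorship-based elevation described above can be sketched as a small post-processing pass over an already-ranked module list. This is one plausible reading; the module fields and the threshold value are assumptions, and the disclosure does not state how far an elevated module moves.

    def order_modules(modules, sponsorship_threshold=10000.0):
        """modules: dicts with 'name', 'rank_score', and 'sponsorship' keys."""
        ranked = sorted(modules, key=lambda m: m["rank_score"], reverse=True)
        # Modules whose sponsorship exceeds the threshold jump ahead of the
        # rest, preserving relative order within each group.
        elevated = [m for m in ranked if m["sponsorship"] >= sponsorship_threshold]
        others = [m for m in ranked if m["sponsorship"] < sponsorship_threshold]
        return elevated + others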
The Cliff
The above-described modules may be displayed after a first subset of the inbox owner's messages in the modular inbox. The cut-off location after which the inbox transitions to modules, rather than messages or message threads, is referred to herein as the cliff. FIG. 6 is a flowchart depicting an exemplary process 600 for determining a transition point (cliff) between a first group of messages and a set of one or more modules. At block 602, the messaging application may receive an instruction to display an inbox interface for a messaging service. Block 602 may occur, for example, in response to starting up the messaging application or in response to receiving an instruction to access a home screen or inbox screen in the messaging application. At block 604, the messaging application may determine a number of messages to be displayed in a first portion of the inbox interface containing a subset of the messages. The subset may be a number of messages that is smaller than the totality of all of the messages available in the user's inbox. In some embodiments, the number may fall between a lower threshold and an upper threshold for the number of message threads. The lower threshold may be a predetermined minimum number of messages/threads. For example, it may be undesirable to display too few messages before non-message content is displayed. Thus, the predetermined minimum may be set to a value (e.g., 2-6 message threads) so that at least a certain number of message threads are displayed prior to non-message content. In some embodiments, the minimum number of messages/threads may be dynamically determined using criteria similar to those discussed below. The upper threshold may be predetermined, or may be dynamically determined. The upper threshold (and the criteria used to determine the upper threshold) may vary from user to user. For example, at block 606, the messaging application may dynamically select an upper threshold based on one or more criteria. The criteria may include, for example, whether the user has participated in at least a predetermined number of conversations over a predetermined period of time. For example, if the user is not particularly active, the maximum number may be set relatively low (as the user likely only needs to see a few of the most recent conversations). On the other hand, if the user is highly active, the user may have a number of message threads in which they are currently participating, and is likely to wish to see these threads at the forefront in the messaging application. Another criterion may be the number of message threads in which there is a currently active conversation (e.g., a conversation in which the most recent message has been received in less than a predetermined threshold amount of time), and/or the number of message threads having unread messages. In some embodiments, the upper threshold may be set such that the first portion of the inbox interface shows all of the threads that include an active conversation and any threads that have unread messages. The last message to meet either of these criteria may be selected to define the upper threshold. Another example of a criterion is the user's history of interactions with the messaging application. For example, a user who utilizes the messaging application multiple times per day may wish to see more conversations than a user who utilizes the messaging application on a limited basis. Another example of a criterion is the current time of day. If the current time is during the night hours, the user is likely asleep and not messaging particularly actively. Thus, the system may select a relatively small upper threshold. On the other hand, during the day the threshold may be increased since the user is likely to be relatively more active. Yet another example of a criterion is the amount of time since the last active conversation. For example, the messaging application may determine the last thread in which a user had an active conversation in the previous n number of hours (e.g., n=6, 8, 24, etc.). The upper threshold may be set to encompass the number of active message threads within the time window. The criteria may be combined with each other. For example, the messaging application may use the historical interaction information for the messaging application to alter the time window for the amount of time since the last active conversation. If the user is highly active with the messaging application, the time window may be set relatively short (e.g., 6 hours or even less), which would likely still result in a large number of conversations for an active user. On the other hand, an inactive user might have their time window set relatively long (e.g., one week), because the longer list of conversations may remind the user to follow up with people from several days ago, thus prompting the user to higher levels of activity. In another example, if the current time of day is during the user's work hours, the time window may be increased or decreased depending on whether the user actively uses the messaging service for work or not. A user actively using the messaging service for work may wish to see a relatively small number of the most recent messages, whereas a user that does not use the messaging service for work may wish to see a larger number of messages encompassing those which the user missed while at work. In another embodiment, the number of unread messages and/or active conversations may be combined with the above-described time window. For example, the messaging application may determine the number of unread messages/active conversations within the previous n hours. The upper threshold may be set to encompass these messages and conversations. If there are no unread messages or active conversations within the time period, then the lower threshold determined at block 604 may be utilized.
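Taken together, the criteria above suggest a small decision procedure for the upper threshold. The sketch below combines three of them (active conversations, unread threads, and time of day); the Thread fields, the eight-hour activity window, and the night/day caps are illustrative assumptions rather than values from the disclosure.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Thread:
        last_message_at: datetime
        has_unread: bool

    def upper_threshold(threads, now, active_window_h=8.0, night_cap=5, day_cap=20):
        """Pick the upper threshold for the first portion of the inbox."""
        def is_active(t):
            return (now - t.last_message_at).total_seconds() / 3600.0 <= active_window_h
        # Count threads in an active conversation or carrying unread messages.
        relevant = sum(1 for t in threads if is_active(t) or t.has_unread)
        # During night hours, show fewer threads; the caller falls back to the
        # lower threshold when no threads qualify, per block 604.
        cap = night_cap if (now.hour >= 23 or now.hour < 7) else day_cap
        return min(relevant, cap)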
At block 608, the messaging application may display the first set of messages (i.e., the most-recent messages and/or threads, up to the greater of the lower threshold and the upper threshold) in a first portion of the inbox interface. Optionally, the messages of the first set of messages may be displayed, but previously-read messages and/or messages without an active conversation may be filtered out and moved to the second subset of messages. At block 610, the messaging application may display non-message content in a second portion of the inbox interface. The second portion may include a number of modules related to functionality of the messaging service that is not directly related to message or thread display functionality, such as the modules described in connection with FIGS. 2A-5D. At block 612, the messaging application may display the second subset of messages. The second subset of messages may be displayed as a result of an instruction to navigate past the modules displayed in block 610. Following these modules, an additional message inbox may be displayed including the previously undisplayed messages/threads. In another embodiment, the messages of the second subset may be collapsed into a menu in the first portion of the inbox, as previously described.
Module Ranking
When two or more modules are displayed, an inter-module order may be established to define the display order for the modules (e.g., should the Top Contacts Module be displayed before the Photos Module?). If, for example, the user's mother recently came online, then this information may be particularly pertinent and the People/States Module may be a good candidate for an early spot among the modules. On the other hand, if a particular article is being widely shared on the user's social networking service, then a sharing module for sharing articles may be elevated to a top spot. It is noted that the first portion of the display, including the first subset of the messages/message threads, may be treated as a module and may be ranked among the other modules for display. In some embodiments, the messages/threads module may be locked to the top of the inbox interface, although in other embodiments it may be allowed to float among the other modules depending on its rank. In some embodiments, the messages/threads module may be locked to the top slot unless another module is determined to be extremely relevant (e.g., the probability of interaction described in connection with FIGS.
7A and 7B is above a predetermined relatively high threshold), in which case the extremely relevant module may be elevated above the messages/threads module. The inter-module order may be determined based on a dynamically calculated inter-module ranking. FIGS. 7A-7B describe how the inter-module ranking is determined, while FIG. 7C describes how content within a module may be ranked. FIG. 7A is a block diagram providing an overview of a module ranking framework. Generally speaking, the module ranking framework proceeds in two stages. First, the framework determines how likely a person is to interact with the module in question. Second, the framework determines a value of providing the module to the messaging service. The probability is combined with the value to determine the module's rank score. The scores of different modules are compared to each other in order to determine the modules' ordering. In some embodiments, only modules above a predetermined threshold rank score are presented to a user in the second portion of the inbox interface. The framework may evaluate a number of aggregated features 702 and a number of per-user features 704. The aggregated features 702 may use a user base (e.g., the messaging service's or a social networking service's user base) as a proxy for the currently-evaluated user, in order to evaluate the module's general popularity. The aggregated features 702 may include data regarding module usage as aggregated over the user base. For example, the aggregated features 702 may include the number of impressions of a given module in a predetermined period of time (e.g., 30 days) among the user base, the number of times the user base has interacted with the module in a predetermined period of time (e.g., 30 days), etc. The per-user features 704 may include similar information, but may be specific to the user under evaluation (or may otherwise be generated on a per-user basis). For example, the per-user features 704 may include a number of impressions of a module for a given user-module pair, a number of interactions with a module for a given user-module pair, etc. The aggregated features 702 and the per-user features 704 may be retrieved by a feature fetcher 706 for evaluation. The feature fetcher 706 may be, for example, a component of a server that retrieves the features from the messaging service and any applicable module providers. The feature fetcher 706 may aggregate the features and provide the features to a scorer 708. The scorer 708 accepts the features as input and determines a probability of interaction 712 for the modules described by the features. The probability of interaction 712 may represent a likelihood that a given user (or a user in general) will use the module. The scorer 708 may be supplemented by one or more offline trained models 710 that may improve the predictions of the scorer 708 in certain contexts. For example, the offline trained models 710 may account for variables such as a user's age group, a user's gender, the user's recent posts to a social networking service, etc. The scorer may determine the probability of interaction 712 according to a formula, such as the one described below in Equation 1:

Probability of Interaction = λ * (# unit clicks by user + 1) / (# unit impressions by user + 2) + (1 - λ) * (# unit clicks + 1) / (# unit impressions + 2)   (Equation 1)

where λ is a personalization multiplier which balances the user's behavior and the user base's aggregated behavior.
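Equation 1 is a smoothed click-through rate for the individual user, blended with the user base's aggregate rate. A direct transcription in code (the default λ of 0.5 is an illustrative choice, not a value from the disclosure):

    def probability_of_interaction(user_clicks, user_impressions,
                                   all_clicks, all_impressions, lam=0.5):
        """Blend the user's smoothed click rate with the user base's (Equation 1)."""
        personal = (user_clicks + 1) / (user_impressions + 2)
        aggregate = (all_clicks + 1) / (all_impressions + 2)
        return lam * personal + (1 - lam) * aggregate

The +1/+2 terms act as add-one smoothing: a user-module pair with no history contributes a neutral prior of 0.5 rather than a division by zero, so new modules are neither favored nor buried.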
One of ordinary skill in the art will recognize that Equation 1 is exemplary only, and that other suitable formulae may be used to estimate the probability that a user will interact with a module. The determined probability of interaction 712 may be provided to a value model estimator 714. The value model estimator 714 may determine a value of the module to the messaging service or an associated social networking service. For example, it may be more valuable to the messaging service for the current user to start a new thread with another user (thus encouraging both users to become more active) than for the user to share an article with an already-active user, or for the user to view an advertisement. Thus, the messaging application may elevate the People/States Module above a Sharable Articles Module or a Promotional Materials Module. To this end, the value model estimator 714 may consult a value model definition 716 that provides values for each of the modules accessible through the messaging service. The values provided by the value model definition 716 may be combined with the probability of interaction 712 to determine a final score for ranking 718 for each module. The scores for the modules may be compared to each other to determine relative ranks for the modules. The above-described framework may be employed in an exemplary process 720 for determining an inter-module rank. FIG. 7B depicts an example of such a process 720. At block 722, the messaging service may identify a first module and a second module available for display by a client's messaging application. The first module and the second module may provide access to functionality of the messaging service that is distinct from message or thread display functionality. At block 724, the messaging service may determine probabilities of interaction for the first module and the second module, such as the probability of interaction 712 discussed in connection with FIG. 7A. The probabilities of interaction may represent a likelihood that the inbox owner will use the first module and the second module, respectively. As an alternative or in addition to the probability of interaction 712 as calculated in FIG. 7A, each module may provide its own estimation of the likelihood that the inbox owner will use the module. For example, each module may provide a number that reflects the quality of content currently available through the module (e.g., the quality of the content today as compared to the average day). The probability of interaction may be augmented or may take into account other criteria, such as recency. For example, if a user takes a photo with their mobile device and then immediately accesses the messaging application, there may be an elevated likelihood that the user wishes to share the recently captured photograph with their messaging contacts. In another embodiment, the messaging application (or a related social networking service) may identify one or more other users in the photo, and may determine if the current user and the other users are highly connected or are highly likely to message each other. Such a coefficient may be used to elevate the priority of a module for sharing the photo, if other highly connected users are available through the messaging service. At block 726, the messaging service may determine a value of the first module and the second module to the messaging service or a related social networking service. Block 726 may be carried out by the value model estimator 714 of FIG. 7A. At block 728, the messaging service may determine a ranked order for the first module and the second module by combining the probability determined at block 724 with the value determined at block 726. For example, the probability of interaction may be multiplied by (or otherwise combined with) the value in order to determine a ranking score, and the modules may be ranked in the order of their respective ranking scores.
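Blocks 724-728 then reduce to scoring and sorting. A minimal sketch, using multiplication as the combination (one option the text names); the probability and value figures below are made up for illustration:

    def rank_modules(modules):
        """modules: list of (name, probability_of_interaction, value) tuples."""
        scored = [(name, prob * value) for name, prob, value in modules]
        return sorted(scored, key=lambda entry: entry[1], reverse=True)

    # A People/States module with moderate click probability but high value to
    # the service can outrank a promotions module with higher click probability.
    print(rank_modules([("people_states", 0.30, 5.0),
                        ("sharable_articles", 0.40, 2.0),
                        ("promotions", 0.50, 1.0)]))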
At block 730, the messaging service may determine intra-module rankings for the content of the first and second modules. This may involve arranging the content within the first module and/or the second module in an intra-module ranked order, as described in more detail in connection with FIG. 7C. If certain content in a module is particularly relevant, then at block 732 the inter-module ranked order as determined at block 728 may be altered (e.g., to elevate a module having the particularly relevant content). At block 734, the first module and the second module may be displayed in the ranked order after a first subset of messages in the inbox interface. FIG. 7C is a flowchart depicting an exemplary process performed by block 730, for determining an intra-module rank, in more detail. In general, the messaging service may be agnostic to the intra-module ranking, thus allowing each module to define its own intra-module ranking for content. At block 738, the messaging service may identify a first content item and a second content item for a given module. The module may provide or define a location from which content items for the module may be retrieved. At block 740, the messaging service may access ranking rules as provided by the module. Depending on the content, for example, the module may designate different criteria to be applied to rank the content. Exemplary criteria for ranking content within a module include a recency of the content, an importance of the content to a user, an affinity for the content by the user, etc. At block 742, the messaging service may determine a recency of the content, or any other information required by the criteria. The information about the criteria may be retrieved, for example, from a social networking service, the messaging service, a client device, or a remote location associated with the module. At block 744, the system may determine a user affinity for the content items. The user affinity may be represented by a user affinity score 746 that indicates a likelihood that the inbox owner will enjoy the content. Alternatively or in addition, the user affinity may be represented by an associate affinity score 748 that indicates a likelihood that associates of the inbox owner (e.g., the inbox owner's friends through a social networking service or contacts through the messaging service) will enjoy the content. At block 750, the messaging service may rank the first and second content items based on the affinity score(s) determined at block 744. Optionally, if one or more of the affinity scores exceeds a predetermined threshold (e.g., indicating that a content item is particularly relevant), then at block 752 the messaging service may flag the module containing the content for adjustment in the inter-module rankings. At block 756, the messaging service may display the first and second content items within the module in the ranked order.
Messaging System Overview
These examples may be implemented by a messaging system that is provided either locally, at a client device, or remotely (e.g., at a remote server). FIGS.
8A-8C depict various examples of messaging systems, and are discussed in more detail below. FIG. 8A depicts an exemplary centralized messaging system 800, in which functionality for organizing messages asynchronously and/or using threads is integrated into a messaging server. The centralized system 800 may implement some or all of the structure and/or operations of a messaging service in a single computing entity, such as entirely within a single centralized server device 826. The messaging system 800 may include a computer-implemented system having software applications that include one or more components. Although the messaging system 800 shown in FIG. 8A has a limited number of elements in a certain topology, the messaging system 800 may include more or fewer elements in alternate topologies. The messaging service 800 may be generally arranged to receive, store, and deliver messages. The messaging service 800 may store messages while messaging clients 820, such as may execute on client devices 810, are offline and deliver the messages once the messaging clients are available. A client device 810 may transmit messages addressed to a recipient user, user account, or other identifier resolving to a receiving client device 810. In exemplary embodiments, each of the client devices 810 and their respective messaging clients 820 are associated with a particular user or users of the messaging service 800. In some embodiments, the client devices 810 may be cellular devices such as smartphones and may be identified to the messaging service 800 based on a phone number associated with each of the client devices 810. In some embodiments, each messaging client may be associated with a user account registered with the messaging service 800. In general, each messaging client may be addressed through various techniques for the reception of messages. While in some embodiments the client devices 810 may be cellular devices, in other embodiments one or more of the client devices 810 may be personal computers, tablet devices, or any other form of computing device. The client 810 may include one or more input devices 812 and one or more output devices 818. The input devices 812 may include, for example, microphones, keyboards, cameras, electronic pens, touch screens, and other devices for receiving inputs including message data, requests, commands, user interface interactions, selections, and other types of input. The output devices 818 may include a speaker, a display device such as a monitor or touch screen, and other devices for presenting an interface to the messaging system 800. The client 810 may include a memory 819, which may be a non-transitory computer readable storage medium, such as one or a combination of a hard drive, solid state drive, flash storage, read only memory, or random access memory. The memory 819 may store a representation of an input 814 and/or a representation of an output 816, as well as one or more applications. For example, the memory 819 may store a messaging client 820 and/or a social networking client that allows a user to interact with a social networking service. The input 814 may be textual, such as in the case where the input device 812 is a keyboard. Alternatively, the input 814 may be an audio recording, such as in the case where the input device 812 is a microphone. Accordingly, the input 814 may be subjected to automatic speech recognition (ASR) logic in order to transform the audio recording to text that is processable by the messaging system 800.
The ASR logic may be located at the client device 810 (so that the audio recording is processed locally by the client 810 and corresponding text is transmitted to the messaging server 826), or may be located remotely at the messaging server 826 (in which case, the audio recording may be transmitted to the messaging server 826 and the messaging server 826 may process the audio into text). Other combinations are also possible; for example, if the input device 812 is a touch pad or electronic pen, the input 814 may be in the form of handwriting, which may be subjected to handwriting or optical character recognition analysis logic in order to transform the input 814 into processable text. The client 810 may be provided with a network interface 822 for communicating with a network 824, such as the Internet. The network interface 822 may transmit the input 814 in a format and/or using a protocol compatible with the network 824 and may receive a corresponding output 816 from the network 824. The network interface 822 may communicate through the network 824 to a messaging server 826. The messaging server 826 may be operative to receive, store, and forward messages between messaging clients. The messaging server 826 may include a network interface 822, messaging preferences 828, and messaging inbox logic 830. The messaging preferences 828 may include one or more privacy settings for one or more users and/or message threads. For example, the messaging preferences 828 may include a setting that indicates whether to display messages synchronously or asynchronously. Furthermore, the messaging preferences 828 may include one or more settings, including default settings, for the logic described herein. The messaging inbox logic 830 may include the logic for generating and maintaining a modular inbox as described above. For example, the messaging inbox logic 830 may include module logic 832 that is operable to generate modules for the inbox and manage interactions with the modules. The module logic 832 may include, for example, logic similar to that described in connection with FIGS. 3, 4G, 5D, and 6. The messaging inbox logic 830 may further include ranking logic 834 that is operable to perform inter-module and intra-module ranking, such as the logic described in connection with FIGS. 7B-7C. The network interface 822 of the client 810 and/or the messaging server 826 may also be used to communicate through the network 824 with a social networking server 836. The social networking server 836 may include or may interact with a social networking graph 838 that defines connections in a social network. Furthermore, the messaging server 826 may connect to the social networking server 836 for various purposes, such as retrieving connection information, messaging history, event details, etc. from the social network. A user of the client 810 may be an individual (human user), an entity (e.g., an enterprise, business, or third-party application), or a group (e.g., of individuals or entities) that interacts or communicates with or over the social networking server 836. The social-networking server 836 may be a network-addressable computing system hosting an online social network. The social networking server 836 may generate, store, receive, and send social-networking data, such as, for example, user-profile data, concept-profile data, social-graph information, or other suitable data related to the online social network.
The social networking server 836 may be accessed by the other components of the network environment either directly or via the network 824. The social networking server 836 may include an authorization server (or other suitable component(s)) that allows users to opt in to or opt out of having their actions logged by social-networking server 836 or shared with other systems (e.g., third-party systems, such as the messaging server 826), for example, by setting appropriate privacy settings. A privacy setting of a user may determine what information associated with the user may be logged, how information associated with the user may be logged, when information associated with the user may be logged, who may log information associated with the user, whom information associated with the user may be shared with, and for what purposes information associated with the user may be logged or shared. Authorization servers may be used to enforce one or more privacy settings of the users of social-networking server 836 through blocking, data hashing, anonymization, or other suitable techniques as appropriate. More specifically, one or more of the content objects of the online social network may be associated with a privacy setting. The privacy settings (or "access settings") for an object may be stored in any suitable manner, such as, for example, in association with the object, in an index on an authorization server, in another suitable manner, or any combination thereof. A privacy setting of an object may specify how the object (or particular information associated with an object) can be accessed (e.g., viewed or shared) using the online social network. Where the privacy settings for an object allow a particular user to access that object, the object may be described as being "visible" with respect to that user. As an example and not by way of limitation, a user of the online social network may specify privacy settings for a user-profile page that identify a set of users that may access the work experience information on the user-profile page, thus excluding other users from accessing the information. In particular embodiments, the privacy settings may specify a "blocked list" of users that should not be allowed to access certain information associated with the object. In other words, the blocked list may specify one or more users or entities for which an object is not visible. As an example and not by way of limitation, a user may specify a set of users that may not access photo albums associated with the user, thus excluding those users from accessing the photo albums (while also possibly allowing certain users not within the set of users to access the photo albums). In particular embodiments, privacy settings may be associated with particular elements of the social networking graph 838. Privacy settings of a social-graph element, such as a node or an edge, may specify how the social-graph element, information associated with the social-graph element, or content objects associated with the social-graph element can be accessed using the online social network. As an example and not by way of limitation, a particular concept node corresponding to a particular photo may have a privacy setting specifying that the photo may only be accessed by users tagged in the photo and their friends. In particular embodiments, privacy settings may allow users to opt in or opt out of having their actions logged by social networking server 836 or shared with other systems.
In particular embodiments, the privacy settings associated with an object may specify any suitable granularity of permitted access or denial of access. As an example and not by way of limitation, access or denial of access may be specified for particular users (e.g., only me, my roommates, and my boss), users within a particular degree of separation (e.g., friends, or friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of a particular university), all users ("public"), no users ("private"), users of third-party systems, particular applications (e.g., third-party applications, external websites), other suitable users or entities, or any combination thereof. Although this disclosure describes using particular privacy settings in a particular manner, this disclosure contemplates using any suitable privacy settings in any suitable manner. In response to a request from a user (or other entity) for a particular object stored in a data store, the social networking server 836 may send a request to the data store for the object. The request may identify the user associated with the request. The requested data object may only be sent to the user (or a client system 810 of the user) if the authorization server determines that the user is authorized to access the object based on the privacy settings associated with the object. If the requesting user is not authorized to access the object, the authorization server may prevent the requested object from being retrieved from the data store, or may prevent the requested object from being sent to the user. In the search query context, an object may only be generated as a search result if the querying user is authorized to access the object. In other words, the object must be visible to the querying user; if the object has a visibility that is not visible to the user, the object may be excluded from the search results. In some embodiments, targeting criteria may be used to identify users of the social network for various purposes. Targeting criteria used to identify and target users may include explicit, stated user interests on social-networking server 836 or explicit connections of a user to a node, object, entity, brand, or page on social networking server 836. In addition, or as an alternative, such targeting criteria may include implicit or inferred user interests or connections (which may include analyzing a user's history, demographic, social or other activities, friends' social or other activities, subscriptions, or any of the preceding attributes of other users similar to the user (based, e.g., on shared interests, connections, or events)). Particular embodiments may utilize platform targeting, which may involve platform and "like" impression data; contextual signals (e.g., "Who is viewing now or has viewed recently the page for COCA-COLA?"); light-weight connections (e.g., "check-ins"); connection lookalikes; fans; extracted keywords; EMU advertising; inferential advertising; coefficients, affinities, or other social-graph information; friends-of-friends connections; pinning or boosting; deals; polls; household income, social clusters or groups; products detected in images or other media; social- or open-graph edge types; geo-prediction; views of profile or pages; status updates or other user posts (analysis of which may involve natural-language processing or keyword extraction); events information; or collaborative filtering.
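The authorization flow described above (blocked lists, visibility settings, and filtering of search results) can be sketched compactly. The setting names ("public", "friends", "private") and object fields below are illustrative assumptions; the disclosure leaves the storage format open.

    def is_visible(obj, user_id, friends_of):
        """Apply blocked-list and privacy-setting checks for one object."""
        if user_id in obj.get("blocked", set()):
            return False  # blocked users never see the object
        setting = obj.get("privacy", "private")
        if setting == "public":
            return True
        if setting == "friends":
            return user_id == obj["owner"] or user_id in friends_of(obj["owner"])
        return user_id == obj["owner"]  # "private": owner only

    def authorized_results(objects, user_id, friends_of):
        """Search results include only objects visible to the querying user."""
        return [o for o in objects if is_visible(o, user_id, friends_of)]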
Identifying and targeting users may also implicate privacy settings (such as user opt-outs), data hashing, or data anonymization, as appropriate. The centralized embodiment depicted in FIG. 8A may be well-suited to deployment as a new system or as an upgrade to an existing system, because the logic for generating and maintaining the modular inbox (e.g., the module logic 832 and/or the ranking logic 834) is incorporated into the messaging server 826. In contrast, FIG. 8B depicts an exemplary distributed messaging system 880, in which the functionality for generating and maintaining the modular inbox is distributed and remotely accessible from the messaging server. Examples of a distributed system 880 include a client-server architecture, a 3-tier architecture, an N-tier architecture, a tightly-coupled or clustered architecture, a peer-to-peer architecture, a master-slave architecture, a shared database architecture, and other types of distributed systems. Many of the components depicted in FIG. 8B are identical to those in FIG. 8A, and a description of these elements is not repeated here for the sake of brevity. The primary difference between the centralized embodiment and the distributed embodiment is the addition of a separate inbox server 882, which hosts the module logic 832 and the ranking logic 834. The inbox server 882 may be distinct from the messaging server 826 but may communicate with the messaging server 826, either directly or through the network 824, to provide the functionality of the module logic 832 and the ranking logic 834 to the messaging server 826. The embodiment depicted in FIG. 8B may be particularly well suited to allow exemplary embodiments to be deployed alongside existing messaging systems, for example when it is difficult or undesirable to replace an existing messaging server. Additionally, in some cases the messaging server 826 may have limited resources (e.g., processing or memory resources) that limit or preclude the addition of the modular inbox functionality. In such situations, the capabilities described herein may still be provided through the separate inbox server 882. FIG. 8C illustrates an example of a social networking graph 838. In exemplary embodiments, a social networking service may store one or more social graphs 838 in one or more data stores as a social graph data structure via the social networking service. The social graph 838 may include multiple nodes, such as user nodes 854 and concept nodes 856. The social graph 838 may furthermore include edges 858 connecting the nodes. The nodes and edges of social graph 838 may be stored as data objects, for example, in a data store (such as a social-graph database). Such a data store may include one or more searchable or queryable indexes of nodes or edges of social graph 838. The social graph 838 may be accessed by a social-networking server 836, client system 810, third-party system, or any other approved system or device for suitable applications. A user node 854 may correspond to a user of the social-networking system. A user may be an individual (human user), an entity (e.g., an enterprise, business, or third-party application), or a group (e.g., of individuals or entities) that interacts or communicates with or over the social-networking system.
In exemplary embodiments, when a user registers for an account with the social-networking system, the social-networking system may create a user node 854 corresponding to the user, and store the user node 854 in one or more data stores. Users and user nodes 854 described herein may, where appropriate, refer to registered users and user nodes 854 associated with registered users. In addition or as an alternative, users and user nodes 854 described herein may, where appropriate, refer to users that have not registered with the social-networking system. In particular embodiments, a user node 854 may be associated with information provided by a user or information gathered by various systems, including the social-networking system. As an example and not by way of limitation, a user may provide their name, profile picture, contact information, birth date, sex, marital status, family status, employment, education background, preferences, interests, or other demographic information. In particular embodiments, a user node 854 may be associated with one or more data objects corresponding to information associated with a user. In particular embodiments, a user node 854 may correspond to one or more webpages. A user node 854 may be associated with a unique user identifier for the user in the social-networking system. In particular embodiments, a concept node 856 may correspond to a concept. As an example and not by way of limitation, a concept may correspond to a place (such as, for example, a movie theater, restaurant, landmark, or city); a website (such as, for example, a website associated with the social-network service or a third-party website associated with a web-application server); an entity (such as, for example, a person, business, group, sports team, or celebrity); a resource (such as, for example, an audio file, video file, digital photo, text file, structured document, or application) which may be located within the social-networking system or on an external server, such as a web-application server; real or intellectual property (such as, for example, a sculpture, painting, movie, game, song, idea, photograph, or written work); a game; an activity; an idea or theory; another suitable concept; or two or more such concepts. A concept node 856 may be associated with information of a concept provided by a user or information gathered by various systems, including the social-networking system. As an example and not by way of limitation, information of a concept may include a name or a title; one or more images (e.g., an image of the cover page of a book); a location (e.g., an address or a geographical location); a website (which may be associated with a URL); contact information (e.g., a phone number or an email address); other suitable concept information; or any suitable combination of such information. In particular embodiments, a concept node 856 may be associated with one or more data objects corresponding to information associated with concept node 856. In particular embodiments, a concept node 856 may correspond to one or more webpages. In particular embodiments, a node in social graph 838 may represent or be represented by a webpage (which may be referred to as a "profile page"). Profile pages may be hosted by or accessible to the social-networking system. Profile pages may also be hosted on third-party websites associated with a third-party server.
As an example and not by way of limitation, a profile page corresponding to a particular external webpage may be the particular external webpage and the profile page may correspond to a particular concept node 856. Profile pages may be viewable by all or a selected subset of other users. As an example and not by way of limitation, a user node 854 may have a corresponding user-profile page in which the corresponding user may add content, make declarations, or otherwise express himself or herself. A business page may comprise a user-profile page for a commerce entity. As another example and not by way of limitation, a concept node 856 may have a corresponding concept-profile page in which one or more users may add content, make declarations, or express themselves, particularly in relation to the concept corresponding to concept node 856. In particular embodiments, a concept node 856 may represent a third-party webpage or resource hosted by a third-party system. The third-party webpage or resource may include, among other elements, content, a selectable or other icon, or other interactable object (which may be implemented, for example, in JavaScript, AJAX, or PHP code) representing an action or activity. As an example and not by way of limitation, a third-party webpage may include a selectable icon such as "like," "check in," "eat," "recommend," or another suitable action or activity. A user viewing the third-party webpage may perform an action by selecting one of the icons (e.g., "eat"), causing a client system to send to the social-networking system a message indicating the user's action. In response to the message, the social-networking system may create an edge (e.g., an "eat" edge) between a user node 854 corresponding to the user and a concept node 856 corresponding to the third-party webpage or resource and store edge 858 in one or more data stores. In particular embodiments, a pair of nodes in social graph 838 may be connected to each other by one or more edges 858. An edge 858 connecting a pair of nodes may represent a relationship between the pair of nodes. In particular embodiments, an edge 858 may include or represent one or more data objects or attributes corresponding to the relationship between a pair of nodes. As an example and not by way of limitation, a first user may indicate that a second user is a "friend" of the first user. In response to this indication, the social-networking system may send a "friend request" to the second user. If the second user confirms the "friend request," the social-networking system may create an edge 858 connecting the first user's user node 854 to the second user's user node 854 in social graph 838 and store edge 858 as social-graph information in one or more data stores. In the example of FIG. 8C, social graph 838 includes an edge 858 indicating a friend relation between user nodes 854 of user "Amanda" and user "Dorothy." Although this disclosure describes or illustrates particular edges 858 with particular attributes connecting particular user nodes 854, this disclosure contemplates any suitable edges 858 with any suitable attributes connecting user nodes 854.
As an example and not by way of limitation, an edge 858 may represent a friendship, family relationship, business or employment relationship, fan relationship, follower relationship, visitor relationship, subscriber relationship, superior/subordinate relationship, reciprocal relationship, non-reciprocal relationship, another suitable type of relationship, or two or more such relationships. Moreover, although this disclosure generally describes nodes as being connected, this disclosure also describes users or concepts as being connected. Herein, references to users or concepts being connected may, where appropriate, refer to the nodes corresponding to those users or concepts being connected in social graph 838 by one or more edges 858. In particular embodiments, an edge 858 between a user node 854 and a concept node 856 may represent a particular action or activity performed by a user associated with user node 854 toward a concept associated with a concept node 856. As an example and not by way of limitation, as illustrated in FIG. 8C, a user may "like," "attended," "played," "listened," "cooked," "worked at," or "watched" a concept, each of which may correspond to an edge type or subtype. A concept-profile page corresponding to a concept node 856 may include, for example, a selectable "check in" icon (such as, for example, a clickable "check in" icon) or a selectable "add to favorites" icon. Similarly, after a user clicks one of these icons, the social-networking system may create a "favorite" edge or a "check in" edge corresponding to the respective action. As another example and not by way of limitation, a user (user "Carla") may listen to a particular song ("Across the Sea") using a particular application (SPOTIFY, which is an online music application). In this case, the social-networking system may create a "listened" edge 858 and a "used" edge (as illustrated in FIG. 8C) between user nodes 854 corresponding to the user and concept nodes 856 corresponding to the song and application to indicate that the user listened to the song and used the application. Moreover, the social-networking system may create a "played" edge 858 (as illustrated in FIG. 8C) between concept nodes 856 corresponding to the song and the application to indicate that the particular song was played by the particular application. In this case, "played" edge 858 corresponds to an action performed by an external application (SPOTIFY) on an external audio file (the song "Across the Sea"). Although this disclosure describes particular edges 858 with particular attributes connecting user nodes 854 and concept nodes 856, this disclosure contemplates any suitable edges 858 with any suitable attributes connecting user nodes 854 and concept nodes 856. Moreover, although this disclosure describes edges between a user node 854 and a concept node 856 representing a single relationship, this disclosure contemplates edges between a user node 854 and a concept node 856 representing one or more relationships. As an example and not by way of limitation, an edge 858 may represent both that a user likes and has used a particular concept. Alternatively, another edge 858 may represent each type of relationship (or multiples of a single relationship) between a user node 854 and a concept node 856 (as illustrated in FIG. 8C between user node 854 for user "Edwin" and concept node 856 for "SPOTIFY").
In particular embodiments, the social-networking system may create an edge 858 between a user node 854 and a concept node 856 in social graph 838. As an example and not by way of limitation, a user viewing a concept-profile page (such as, for example, by using a web browser or a special-purpose application hosted by the user's client system) may indicate that he or she likes the concept represented by the concept node 856 by clicking or selecting a "Like" icon, which may cause the user's client system to send to the social-networking system a message indicating the user's liking of the concept associated with the concept-profile page. In response to the message, the social-networking system may create an edge 858 between user node 854 associated with the user and concept node 856, as illustrated by "like" edge 858 between the user and concept node 856. In particular embodiments, the social-networking system may store an edge 858 in one or more data stores. In particular embodiments, an edge 858 may be automatically formed by the social-networking system in response to a particular user action. As an example and not by way of limitation, if a first user uploads a picture, watches a movie, or listens to a song, an edge 858 may be formed between user node 854 corresponding to the first user and concept nodes 856 corresponding to those concepts. Although this disclosure describes forming particular edges 858 in particular manners, this disclosure contemplates forming any suitable edges 858 in any suitable manner. The social graph 838 may further comprise a plurality of product nodes. Product nodes may represent particular products that may be associated with a particular business. A business may provide a product catalog to a consumer-to-business service and the consumer-to-business service may therefore represent each of the products within the product catalog in the social graph 838, with each product being in a distinct product node. A product node may comprise information relating to the product, such as pricing information, descriptive information, manufacturer information, availability information, and other relevant information. For example, each of the items on a menu for a restaurant may be represented within the social graph 838 with a product node describing each of the items. A product node may be linked by an edge to the business providing the product. Where multiple businesses provide a product, each business may have a distinct product node associated with its providing of the product or may each link to the same product node. A product node may be linked by an edge to each user that has purchased, rated, owns, recommended, or viewed the product, with the edge describing the nature of the relationship (e.g., purchased, rated, owns, recommended, viewed, or other relationship). Each of the product nodes may be associated with a graph id and an associated merchant id by virtue of the linked merchant business. Products available from a business may therefore be communicated to a user by retrieving the available product nodes linked to the user node for the business within the social graph 838. The information for a product node may be manipulated by the social-networking system as a product object that encapsulates information regarding the referenced product. As such, the social graph 838 may be used to infer shared interests, shared experiences, or other shared or common attributes of two or more users of a social-networking system. For instance, two or more users each having an edge to a common business, product, media item, institution, or other entity represented in the social graph 838 may indicate a shared relationship with that entity, which may be used to suggest customization of a use of a social-networking system, including a messaging system, for one or more users.
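The node-and-typed-edge structure of FIG. 8C lends itself to a compact sketch. The storage layout below is an illustrative assumption (a production social graph would carry edge attributes, direction, and indexes); the edge types and the Amanda/Dorothy/SPOTIFY example follow the text.

    from collections import defaultdict

    class SocialGraph:
        """Toy store for user/concept/product nodes and typed edges."""
        def __init__(self):
            self.nodes = {}                # node_id -> {"kind": ..., **attrs}
            self.edges = defaultdict(set)  # node_id -> {(edge_type, other_id)}

        def add_node(self, node_id, kind, **attrs):
            self.nodes[node_id] = {"kind": kind, **attrs}

        def add_edge(self, src, edge_type, dst):
            # Edges are recorded on both endpoints here for simple lookups.
            self.edges[src].add((edge_type, dst))
            self.edges[dst].add((edge_type, src))

        def shared_neighbors(self, a, b):
            """Entities both users connect to (shared interests/experiences)."""
            return {n for _, n in self.edges[a]} & {n for _, n in self.edges[b]}

    graph = SocialGraph()
    graph.add_node("amanda", "user")
    graph.add_node("dorothy", "user")
    graph.add_node("spotify", "concept", name="SPOTIFY")
    graph.add_edge("amanda", "friend", "dorothy")
    graph.add_edge("amanda", "used", "spotify")
    graph.add_edge("dorothy", "used", "spotify")
    print(graph.shared_neighbors("amanda", "dorothy"))  # {'spotify'}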
The embodiments described above may be performed by a messaging architecture, an example of which is next described with reference to FIG. 9.
Messaging Architecture
FIG. 9 illustrates an embodiment of a plurality of servers implementing various functions of a messaging service 900. It will be appreciated that different distributions of work and functions may be used in various embodiments of a messaging service 900. The messaging service 900 may comprise a domain name front end 902. The domain name front end 902 may be assigned one or more domain names associated with the messaging service 900 in a domain name system (DNS). The domain name front end 902 may receive incoming connections and distribute the connections to servers providing various messaging services. The messaging service 900 may comprise one or more chat servers 904. The chat servers 904 may comprise front-end servers for receiving and transmitting user-to-user messaging updates such as chat messages. Incoming connections may be assigned to the chat servers 904 by the domain name front end 902 based on workload balancing. The messaging service 900 may comprise backend servers 908. The backend servers 908 may perform specialized tasks in the support of the chat operations of the front-end chat servers 904. A plurality of different types of backend servers 908 may be used. It will be appreciated that the assignment of types of tasks to different backend servers 908 may vary in different embodiments. In some embodiments, some of the back-end services that are provided by dedicated servers in the embodiment described herein may be combined onto a single server or a set of servers each performing multiple tasks. Similarly, in some embodiments tasks of some of the dedicated back-end servers described herein may be divided between different servers of different server groups. The messaging service 900 may comprise one or more offline storage servers 910. The one or more offline storage servers 910 may store messaging content for currently-offline messaging clients and hold it for when the messaging clients reconnect. The messaging service 900 may comprise one or more session servers 912. The one or more session servers 912 may maintain session state of connected messaging clients. The messaging service 900 may comprise one or more presence servers 914. The one or more presence servers 914 may maintain presence information for the messaging service 900. Presence information may correspond to user-specific information indicating whether or not a given user has an online messaging client and is available for chatting, has an online messaging client but is currently away from it, does not have an online messaging client, and any other presence state. The messaging service 900 may comprise one or more push storage servers 916. The one or more push storage servers 916 may cache push requests and transmit the push requests to messaging clients. Push requests may be used to wake messaging clients, to notify messaging clients that a messaging update is available, and to otherwise perform server-side-driven interactions with messaging clients.
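The store-and-forward behavior attributed to the offline storage servers 910 above follows a simple pattern: queue messages for offline recipients and drain the queue on reconnect. A minimal sketch; the class and callback shapes are illustrative assumptions, not the service's actual interfaces.

    from collections import defaultdict

    class OfflineStore:
        """Queue messages for offline recipients; flush on reconnect."""
        def __init__(self):
            self.online = {}                  # user_id -> delivery callback
            self.pending = defaultdict(list)  # user_id -> held messages

        def send(self, recipient_id, message):
            if recipient_id in self.online:
                self.online[recipient_id](message)          # deliver immediately
            else:
                self.pending[recipient_id].append(message)  # hold for later

        def connect(self, user_id, deliver):
            self.online[user_id] = deliver
            for message in self.pending.pop(user_id, []):
                deliver(message)  # drain everything held while offline

        def disconnect(self, user_id):
            self.online.pop(user_id, None)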
The messaging service 900 may comprise backend servers 908. The backend servers 908 may perform specialized tasks in the support of the chat operations of the front-end chat servers 904. A plurality of different types of backend servers 908 may be used. It will be appreciated that the assignment of types of tasks to different backend servers 908 may vary in different embodiments. In some embodiments, some of the back-end services described herein as provided by dedicated servers may be combined onto a single server or a set of servers, each performing multiple tasks. Similarly, in some embodiments tasks of some of the dedicated back-end servers described herein may be divided between different servers of different server groups. The messaging service 900 may comprise one or more offline storage servers 910. The one or more offline storage servers 910 may store messaging content for currently-offline messaging clients, holding it for when the messaging clients reconnect. The messaging service 900 may comprise one or more session servers 912. The one or more session servers 912 may maintain session state of connected messaging clients. The messaging service 900 may comprise one or more presence servers 914. The one or more presence servers 914 may maintain presence information for the messaging service 900. Presence information may correspond to user-specific information indicating whether or not a given user has an online messaging client and is available for chatting, has an online messaging client but is currently away from it, does not have an online messaging client, or is in any other presence state. The messaging service 900 may comprise one or more push storage servers 916. The one or more push storage servers 916 may cache push requests and transmit the push requests to messaging clients. Push requests may be used to wake messaging clients, to notify messaging clients that a messaging update is available, and to otherwise perform server-side-driven interactions with messaging clients. The messaging service 900 may comprise one or more group servers 918. The one or more group servers 918 may maintain lists of groups, add users to groups, remove users from groups, and perform the reception, caching, and forwarding of group chat messages. The messaging service 900 may comprise one or more block list servers 920. The one or more block list servers 920 may maintain user-specific incoming-block lists indicating for each user the one or more other users that are forbidden from transmitting messages to that user. Alternatively or additionally, the one or more block list servers 920 may maintain user-specific outgoing-block lists indicating for each user the one or more other users to whom that user is forbidden from transmitting messages. It will be appreciated that incoming-block lists and outgoing-block lists may be stored in combination in, for example, a database, with the incoming-block lists and outgoing-block lists representing different views of the same repository of block information. The messaging service 900 may comprise one or more last seen information servers 922. The one or more last seen information servers 922 may receive, store, and maintain information indicating the last seen location, status, messaging client, and other elements of a user's last seen connection to the messaging service 900. The messaging service 900 may comprise one or more key servers 924. The one or more key servers 924 may host public keys for public/private key encrypted communication. The messaging service 900 may comprise one or more profile photo servers 926. The one or more profile photo servers 926 may store and make available for retrieval profile photos for the plurality of users of the messaging service 900. The messaging service 900 may comprise one or more spam logging servers 928. The one or more spam logging servers 928 may log known and suspected spam (e.g., unwanted messages, particularly those of a promotional nature). The one or more spam logging servers 928 may be operative to analyze messages to determine whether they are spam and to perform punitive measures, in some embodiments, against suspected spammers (users that send spam messages). The messaging service 900 may comprise one or more statistics servers 930. The one or more statistics servers 930 may compile and store statistics information related to the operation of the messaging service 900 and the behavior of the users of the messaging service 900. The messaging service 900 may comprise one or more web servers 932. The one or more web servers 932 may engage in hypertext transport protocol (HTTP) and hypertext transport protocol secure (HTTPS) connections with web browsers. The messaging service 900 may comprise one or more chat activity monitoring servers 934. The one or more chat activity monitoring servers 934 may monitor the chats of users to determine unauthorized or discouraged behavior by the users of the messaging service 900. The one or more chat activity monitoring servers 934 may work in cooperation with the spam logging servers 928 and block list servers 920, with the one or more chat activity monitoring servers 934 identifying spam or other discouraged behavior and providing spam information to the spam logging servers 928 and blocking information, where appropriate, to the block list servers 920.
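As an illustration and not as part of the claimed disclosure, the observation above that incoming-block lists and outgoing-block lists may represent different views of the same repository can be sketched directly. The pair encoding below is an assumption for the example:

# A single repository of block information: each (sender, recipient)
# pair records that sender is forbidden from messaging recipient.
blocks = {("mallory", "alice")}

def incoming_block_list(user):
    # View 1: who is forbidden from transmitting messages to this user.
    return {s for (s, r) in blocks if r == user}

def outgoing_block_list(user):
    # View 2: whom this user is forbidden from transmitting messages to.
    return {r for (s, r) in blocks if s == user}

def may_deliver(sender, recipient):
    return (sender, recipient) not in blocks

print(incoming_block_list("alice"))     # {'mallory'}
print(outgoing_block_list("mallory"))   # {'alice'}
print(may_deliver("mallory", "alice"))  # False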
The messaging service 900 may comprise one or more sync servers 936. The one or more sync servers 936 may sync the messaging system 500 with contact information from a messaging client, such as an address book on a mobile phone, to determine contacts for a user in the messaging service 900. The messaging service 900 may comprise one or more multimedia servers 938. The one or more multimedia servers 938 may store multimedia (e.g., images, video, audio) in transit between messaging clients and multimedia cached for offline endpoints, and may perform transcoding of multimedia. The messaging service 900 may comprise one or more payment servers 940. The one or more payment servers 940 may process payments from users. The one or more payment servers 940 may connect to external third-party servers for the performance of payments. The messaging service 900 may comprise one or more registration servers 942. The one or more registration servers 942 may register new users of the messaging service 900. The messaging service 900 may comprise one or more voice relay servers 944. The one or more voice relay servers 944 may relay voice-over-Internet-protocol (VoIP) voice communication between messaging clients for the performance of VoIP calls. The above-described methods may be embodied as instructions on a computer readable medium or as part of a computing architecture. FIG. 10 illustrates an embodiment of an exemplary computing architecture 1000 suitable for implementing various embodiments as previously described. In one embodiment, the computing architecture 1000 may comprise or be implemented as part of an electronic device, such as a computer 1001. The embodiments are not limited in this context. As used in this application, the terms “system” and “component” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 1000. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to various signal lines. In such allocations, each message is a signal. Further embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections. Exemplary connections include parallel interfaces, serial interfaces, and bus interfaces. The computing architecture 1000 includes various common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, and so forth.
The embodiments, however, are not limited to implementation by the computing architecture 1000. As shown in FIG. 10, the computing architecture 1000 comprises a processing unit 1002, a system memory 1004 and a system bus 1006. The processing unit 1002 can be any of various commercially available processors, including without limitation AMD® Athlon®, Duron® and Opteron® processors; ARM® application, embedded and secure processors; IBM® and Motorola® DragonBall® and PowerPC® processors; IBM and Sony® Cell processors; Intel® Celeron®, Core (2) Duo®, Itanium®, Pentium®, Xeon®, and XScale® processors; and similar processors. Dual microprocessors, multi-core processors, and other multi-processor architectures may also be employed as the processing unit 1002. The system bus 1006 provides an interface for system components including, but not limited to, the system memory 1004 to the processing unit 1002. The system bus 1006 can be any of several types of bus structures that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. Interface adapters may connect to the system bus 1006 via a slot architecture. Example slot architectures may include without limitation Accelerated Graphics Port (AGP), Card Bus, (Extended) Industry Standard Architecture ((E)ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI(X)), PCI Express, Personal Computer Memory Card International Association (PCMCIA), and the like. The computing architecture 1000 may comprise or implement various articles of manufacture. An article of manufacture may comprise a computer-readable storage medium to store logic. Examples of a computer-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of logic may include executable computer program instructions implemented using any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. Embodiments may also be at least partly implemented as instructions contained in or on a non-transitory computer-readable medium, which may be read and executed by one or more processors to enable performance of the operations described herein. The system memory 1004 may include various types of computer-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information. In the illustrated embodiment shown in FIG. 10, the system memory 1004 can include non-volatile memory 1008 and/or volatile memory 1010.
A basic input/output system (BIOS) can be stored in the non-volatile memory 1008. The computing architecture 1000 may include various types of computer-readable storage media in the form of one or more lower speed memory units, including an internal (or external) hard disk drive (HDD) 1012, a magnetic floppy disk drive (FDD) 1014 to read from or write to a removable magnetic disk 1016, and an optical disk drive 1018 to read from or write to a removable optical disk 1020 (e.g., a CD-ROM or DVD). The HDD 1012, FDD 1014 and optical disk drive 1018 can be connected to the system bus 1006 by an HDD interface 1022, an FDD interface 1024 and an optical drive interface 1026, respectively. The HDD interface 1022 for external drive implementations can include at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies. The drives and associated computer-readable media provide volatile and/or nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For example, a number of program modules can be stored in the drives and memory units 1008, 1012, including an operating system 1028, one or more application programs 1030, other program modules 1032, and program data 1034. In one embodiment, the one or more application programs 1030, other program modules 1032, and program data 1034 can include, for example, the various applications and/or components of the messaging system 500. A user can enter commands and information into the computer 1001 through one or more wire/wireless input devices, for example, a keyboard 1036 and a pointing device, such as a mouse 1038. Other input devices may include microphones, infra-red (IR) remote controls, radio-frequency (RF) remote controls, game pads, stylus pens, card readers, dongles, finger print readers, gloves, graphics tablets, joysticks, keyboards, retina readers, touch screens (e.g., capacitive, resistive, etc.), trackballs, trackpads, sensors, styluses, and the like. These and other input devices are often connected to the processing unit 1002 through an input device interface 1040 that is coupled to the system bus 1006, but can be connected by other interfaces such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, and so forth. A monitor 1042 or other type of display device is also connected to the system bus 1006 via an interface, such as a video adaptor 1044. The monitor 1042 may be internal or external to the computer 1001. In addition to the monitor 1042, a computer typically includes other peripheral output devices, such as speakers, printers, and so forth. The computer 1001 may operate in a networked environment using logical connections via wire and/or wireless communications to one or more remote computers, such as a remote computer 1044. The remote computer 1044 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 1001, although, for purposes of brevity, only a memory/storage device 1046 is illustrated. The logical connections depicted include wire/wireless connectivity to a local area network (LAN) 1048 and/or larger networks, for example, a wide area network (WAN) 1050.
Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, for example, the Internet. When used in a LAN networking environment, the computer 1001 is connected to the LAN 1048 through a wire and/or wireless communication network interface or adaptor 1052. The adaptor 1052 can facilitate wire and/or wireless communications to the LAN 1048, which may also include a wireless access point disposed thereon for communicating with the wireless functionality of the adaptor 1052. When used in a WAN networking environment, the computer 1001 can include a modem 1054, be connected to a communications server on the WAN 1050, or have other means for establishing communications over the WAN 1050, such as by way of the Internet. The modem 1054, which can be internal or external and a wire and/or wireless device, connects to the system bus 1006 via the input device interface 1040. In a networked environment, program modules depicted relative to the computer 1001, or portions thereof, can be stored in the remote memory/storage device 1046. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used. The computer 1001 is operable to communicate with wire and wireless devices or entities using the IEEE 802 family of standards, such as wireless devices operatively disposed in wireless communication (e.g., IEEE 802.11 over-the-air modulation techniques). This includes at least Wi-Fi (or Wireless Fidelity), WiMax, and Bluetooth™ wireless technologies, among others. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, n, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wire networks (which use IEEE 802.3-related media and functions). FIG. 11 is a block diagram depicting an exemplary communications architecture 1100 suitable for implementing various embodiments as previously described. The communications architecture 1100 includes various common communications elements, such as a transmitter, receiver, transceiver, radio, network interface, baseband processor, antenna, amplifiers, filters, power supplies, and so forth. The embodiments, however, are not limited to implementation by the communications architecture 1100. As shown in FIG. 11, the communications architecture 1100 includes one or more clients 1102 and servers 1104. The clients 1102 may implement the client device 510. The servers 1104 may implement the server device 526. The clients 1102 and the servers 1104 are operatively connected to one or more respective client data stores 1106 and server data stores 1108 that can be employed to store information local to the respective clients 1102 and servers 1104, such as cookies and/or associated contextual information. The clients 1102 and the servers 1104 may communicate information between each other using a communication framework 1110. The communications framework 1110 may implement any well-known communications techniques and protocols.
The communications framework 1110 may be implemented as a packet-switched network (e.g., public networks such as the Internet, private networks such as an enterprise intranet, and so forth), a circuit-switched network (e.g., the public switched telephone network), or a combination of a packet-switched network and a circuit-switched network (with suitable gateways and translators). The communications framework 1110 may implement various network interfaces arranged to accept, communicate, and connect to a communications network. A network interface may be regarded as a specialized form of an input output interface. Network interfaces may employ connection protocols including without limitation direct connect, Ethernet (e.g., thick, thin, twisted pair 10/100/1000 Base T, and the like), token ring, wireless network interfaces, cellular network interfaces, IEEE 802.11a-x network interfaces, IEEE 802.16 network interfaces, IEEE 802.20 network interfaces, and the like. Further, multiple network interfaces may be used to engage with various communications network types. For example, multiple network interfaces may be employed to allow for the communication over broadcast, multicast, and unicast networks. Should processing requirements dictate a greater amount of speed and capacity, distributed network controller architectures may similarly be employed to pool, load balance, and otherwise increase the communicative bandwidth required by clients 1102 and the servers 1104. A communications network may be any one or combination of wired and/or wireless networks including without limitation a direct interconnection, a secured custom connection, a private network (e.g., an enterprise intranet), a public network (e.g., the Internet), a Personal Area Network (PAN), a Local Area Network (LAN), a Metropolitan Area Network (MAN), an Operating Missions as Nodes on the Internet (OMNI), a Wide Area Network (WAN), a wireless network, a cellular network, and other communications networks. FIG. 12 illustrates an embodiment of a device 1200 for use in a multicarrier OFDM system, such as the messaging system 500. The device 1200 may implement, for example, software components 1202 as described with reference to the messaging component logic 600, the intent determination logic 700, and the group selection logic 800. The device 1200 may also implement a logic circuit 1204. The logic circuit 1204 may include physical circuits to perform operations described for the messaging system 500. As shown in FIG. 12, device 1200 may include a radio interface 1206, baseband circuitry 1208, and a computing platform 1210, although embodiments are not limited to this configuration. The device 1200 may implement some or all of the structure and/or operations for the messaging system 500 and/or logic circuit 1204 in a single computing entity, such as entirely within a single device. Alternatively, the device 1200 may distribute portions of the structure and/or operations for the messaging system 500 and/or logic circuit 1204 across multiple computing entities using a distributed system architecture, such as a client-server architecture, a 3-tier architecture, an N-tier architecture, a tightly-coupled or clustered architecture, a peer-to-peer architecture, a master-slave architecture, a shared database architecture, and other types of distributed systems. The embodiments are not limited in this context.
In one embodiment, the radio interface 1206 may include a component or combination of components adapted for transmitting and/or receiving single carrier or multi-carrier modulated signals (e.g., including complementary code keying (CCK) and/or orthogonal frequency division multiplexing (OFDM) symbols) although the embodiments are not limited to any specific over-the-air interface or modulation scheme. The radio interface 1206 may include, for example, a receiver 1212, a transmitter 1214 and/or a frequency synthesizer 1216. The radio interface 1206 may include bias controls, a crystal oscillator and/or one or more antennas 1218. In another embodiment, the radio interface 1206 may use external voltage-controlled oscillators (VCOs), surface acoustic wave filters, intermediate frequency (IF) filters and/or RF filters, as desired. Due to the variety of potential RF interface designs an expansive description thereof is omitted. The baseband circuitry 1208 may communicate with the radio interface 1206 to process receive and/or transmit signals and may include, for example, an analog-to-digital converter 1220 for down converting received signals, and a digital-to-analog converter 1222 for up-converting signals for transmission. Further, the baseband circuitry 1208 may include a baseband or physical layer (PHY) processing circuit 1224 for PHY link layer processing of respective receive/transmit signals. The baseband circuitry 1208 may include, for example, a processing circuit 1226 for medium access control (MAC)/data link layer processing. The baseband circuitry 1208 may include a memory controller 1228 for communicating with the processing circuit 1226 and/or a computing platform 1210, for example, via one or more interfaces 1230. In some embodiments, the PHY processing circuit 1224 may include a frame construction and/or detection module, in combination with additional circuitry such as a buffer memory, to construct and/or deconstruct communication frames, such as radio frames. Alternatively or in addition, the MAC processing circuit 1226 may share processing for certain of these functions or perform these processes independent of the PHY processing circuit 1224. In some embodiments, MAC and PHY processing may be integrated into a single circuit. The computing platform 1210 may provide computing functionality for the device 1200. As shown, the computing platform 1210 may include a processing component 1232. In addition to, or alternatively of, the baseband circuitry 1208, the device 1200 may execute processing operations or logic for the messaging system 500 and logic circuit 1204 using the processing component 1232. The processing component 1232 (and/or the PHY 1224 and/or MAC 1226) may comprise various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. 
Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. The computing platform 1210 may further include other platform components 1234. Other platform components 1234 include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components (e.g., digital displays), power supplies, and so forth. Examples of memory units may include without limitation various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information. The device 1200 may be, for example, an ultra-mobile device, a mobile device, a fixed device, a machine-to-machine (M2M) device, a personal digital assistant (PDA), a mobile computing device, a smart phone, a telephone, a digital telephone, a cellular telephone, user equipment, eBook readers, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a netbook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, game devices, television, digital television, set top box, wireless access point, base station, node B, evolved node B (eNB), subscriber station, mobile subscriber center, radio network controller, router, hub, gateway, bridge, switch, machine, or combination thereof. Accordingly, functions and/or specific configurations of the device 1200 described herein may be included or omitted in various embodiments of the device 1200, as suitably desired.
In some embodiments, the device 1200 may be configured to be compatible with protocols and frequencies associated with one or more of the 3GPP LTE Specifications and/or IEEE 802.16 Standards for WMANs, and/or other broadband wireless networks, cited herein, although the embodiments are not limited in this respect. Embodiments of device 1200 may be implemented using single input single output (SISO) architectures. However, certain implementations may include multiple antennas (e.g., antennas 1218) for transmission and/or reception using adaptive antenna techniques for beamforming or spatial division multiple access (SDMA) and/or using MIMO communication techniques. The components and features of the device 1200 may be implemented using any combination of discrete circuitry, application specific integrated circuits (ASICs), logic gates and/or single chip architectures. Further, the features of the device 1200 may be implemented using microcontrollers, programmable logic arrays and/or microprocessors or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as “logic” or “circuit.” It will be appreciated that the exemplary device 1200 shown in the block diagram of FIG. 12 may represent one functionally descriptive example of many potential implementations. Accordingly, division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments. At least one computer-readable storage medium 1236 may include instructions that, when executed, cause a system to perform any of the computer-implemented methods described herein. General Notes on Terminology Some embodiments may be described using the expression “one embodiment” or “an embodiment” along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Moreover, unless otherwise noted, the features described above are recognized to be usable together in any combination. Thus, any features discussed separately may be employed in combination with each other unless it is noted that the features are incompatible with each other. With general reference to notations and nomenclature used herein, the detailed descriptions herein may be presented in terms of program procedures executed on a computer or network of computers. These procedural descriptions and representations are used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. A procedure is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. These operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities. Further, the manipulations performed are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein, which form part of one or more embodiments. Rather, the operations are machine operations. Useful machines for performing operations of various embodiments include general purpose digital computers or similar devices. Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. Various embodiments also relate to apparatus or systems for performing these operations. This apparatus may be specially constructed for the required purpose or it may comprise a general purpose computer as selectively activated or reconfigured by a computer program stored in the computer. The procedures presented herein are not inherently related to a particular computer or other apparatus. Various general purpose machines may be used with programs written in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description given. It is emphasized that the Abstract of the Disclosure is provided to allow a reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein,” respectively. Moreover, the terms “first,” “second,” “third,” and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects. What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. 
Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. 17098738 meta platforms, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:37AM Apr 27th, 2022 08:37AM Facebook Technology Software & Computer Services
nasdaq:fb Facebook Apr 19th, 2022 12:00AM Apr 9th, 2021 12:00AM https://www.uspto.gov?id=US11310315-20220419 Techniques for directive-based messaging synchronization Techniques for directive-based messaging synchronization are described. In one embodiment, an apparatus may comprise a local network component operative to receive a directive package at a messaging client on a client device; and a local database synchronization component operative to execute the directive package with a messaging-sync virtual machine to modify a local messaging database of the messaging client; and refresh a user interface component of the messaging client in response to modifying the local messaging database of the messaging client. Other embodiments are described and claimed. 11310315 1. A method, comprising: receiving, by a messaging client executing on a client device, a plurality of executable directives defined by a messaging-database-sync-specific instruction set associated with a first version of a plurality of versions of the messaging client, the plurality of executable directives derived from operations that are agnostic to the plurality of versions of the messaging client; and executing the plurality of executable directives by a messaging-sync virtual machine to modify a messaging database of the messaging client. 2. The method of claim 1, wherein modifying the messaging database comprises one or more of: (i) adding a new message to the messaging database, (ii) removing a deleted message from the messaging database, (iii) storing a read receipt in the messaging database, and (iv) modifying an element stored in the messaging database. 3. The method of claim 1, wherein executing the plurality of executable directives synchronizes the messaging database of the messaging client on the client device with a messaging database of another instance of the messaging client on a different device. 4. The method of claim 1, wherein executing the plurality of executable directives to modify the messaging database comprises: determining that a thread for a received message does not exist in the messaging database; and creating the thread for the received message in the messaging database. 5. The method of claim 1, each version of the messaging client associated with a respective messaging-database-sync-specific instruction set of a plurality of messaging-database-sync-specific instruction sets, each of the plurality of messaging-database-sync-specific instruction sets based on translations of the operations that are agnostic to the plurality of versions of the messaging client. 6. The method of claim 5, the messaging-database-sync-specific instruction set supporting conditional statements to modify the messaging database, the plurality of messaging-database-sync-specific instruction sets including the messaging-database-sync-specific instruction set. 7. The method of claim 1, further comprising: modifying a graphical interface of the messaging client based on the modified messaging database. 8. 
A non-transitory computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to: receive, by a messaging client, a plurality of executable directives defined by a messaging-database-sync-specific instruction set associated with a first version of a plurality of versions of the messaging client, the plurality of executable directives derived from operations that are agnostic to the plurality of versions of the messaging client; and execute the plurality of executable directives by a messaging-sync virtual machine to modify a messaging database of the messaging client. 9. The computer-readable storage medium of claim 8, wherein modifying the messaging database comprises one or more of: (i) adding a new message to the messaging database, (ii) removing a deleted message from the messaging database, (iii) storing a read receipt in the messaging database, and (iv) modifying an element stored in the messaging database. 10. The computer-readable storage medium of claim 8, wherein executing the plurality of executable directives synchronizes the messaging database of the messaging client on a device comprising the processor with a messaging database of another instance of the messaging client on a different device. 11. The computer-readable storage medium of claim 8, wherein the instructions to execute the plurality of executable directives to modify the messaging database comprise instructions that when executed by the processor, cause the processor to: determine that a thread for a received message does not exist in the messaging database; and create the thread for the received message in the messaging database. 12. The computer-readable storage medium of claim 8, each version of the messaging client associated with a respective messaging-database-sync-specific instruction set of a plurality of messaging-database-sync-specific instruction sets, each of the plurality of messaging-database-sync-specific instruction sets based on translations of the operations that are agnostic to the plurality of versions of the messaging client. 13. The computer-readable storage medium of claim 12, wherein the messaging-database-sync-specific instruction set supports conditional statements to modify the messaging database, the plurality of messaging-database-sync-specific instruction sets including the messaging-database-sync-specific instruction set. 14. The computer-readable storage medium of claim 8, comprising instructions that when executed by the processor, cause the processor to: modify a graphical interface of the messaging client based on the modified messaging database. 15. An apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: receive, by a messaging client executing on the processor, a plurality of executable directives defined by a messaging-database-sync-specific instruction set associated with a first version of a plurality of versions of the messaging client, the plurality of executable directives derived from operations that are agnostic to the plurality of versions of the messaging client; and execute the plurality of executable directives by a messaging-sync virtual machine to modify a messaging database of the messaging client. 16. 
The apparatus of claim 15, wherein modifying the messaging database comprises one or more of: (i) adding a new message to the messaging database, (ii) removing a deleted message from the messaging database, (iii) storing a read receipt in the messaging database, and (iv) modifying an element stored in the messaging database. 17. The apparatus of claim 15, wherein executing the plurality of executable directives synchronizes the messaging database of the messaging client on the apparatus with a messaging database of another instance of the messaging client on a different device. 18. The apparatus of claim 15, wherein the instructions to execute the plurality of executable directives to modify the messaging database comprise instructions that when executed by the processor, cause the processor to: determine that a thread for a received message does not exist in the messaging database; and create the thread for the received message in the messaging database. 19. The apparatus of claim 15, each version of the messaging client associated with a respective messaging-database-sync-specific instruction set of a plurality of messaging-database-sync-specific instruction sets, each of the plurality of messaging-database-sync-specific instruction sets based on translations of the operations that are agnostic to the plurality of versions of the messaging client. 20. The apparatus of claim 19, wherein the messaging-database-sync-specific instruction set supports conditional statements to modify the messaging database, the plurality of messaging-database-sync-specific instruction sets including the messaging-database-sync-specific instruction set. 20 RELATED APPLICATIONS This application is a continuation of, claims the benefit of and priority to previously filed U.S. patent application Ser. No. 16/237,282, titled “TECHNIQUES FOR DIRECTIVE-BASED MESSAGING SYNCHRONIZATION,” filed Dec. 31, 2018, which is hereby incorporated by reference in its entirety. This application is related to the United States Patent Application titled “Techniques for a Database-Driven Messaging User Interface,” U.S. patent application Ser. No. 16/237,273, filed on Dec. 31, 2018, which is hereby incorporated by reference in its entirety. This application is related to the United States Patent Application titled “Techniques for In-Place Directive Execution,” U.S. patent application Ser. No. 16/237,060, filed on Dec. 31, 2018, which is hereby incorporated by reference in its entirety. This application is related to the United States Patent Application titled “Techniques for Server-Side Messaging Data Providers,” U.S. patent application Ser. No. 16/237,289, filed on Dec. 31, 2018, which is hereby incorporated by reference in its entirety. This application is related to the United States Patent Application titled “Techniques for Backend-Specific Cursor Tracking,” U.S. patent application Ser. No. 16/237,297, filed on Dec. 31, 2018, which is hereby incorporated by reference in its entirety. BACKGROUND Mobile devices may run applications, commonly known as “apps,” on behalf of their users. These applications may execute as processes on a device. These applications may engage in network activity on the mobile device, such as by using wireless signals, including Wi-Fi, cellular data, and/or other technologies. Cellular carriers may provide cellular data communication to their cellular customers. For example, smart phones and other mobile devices may run web browsers that may be used while on the cellular network to retrieve web pages.
Additionally, many applications that may be pre-installed or user-installed on a mobile device may use cellular data communication to access remote data, such as resources available on the Internet. SUMMARY The following presents a simplified summary in order to provide a basic understanding of some novel embodiments described herein. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope thereof. Some concepts are presented in a simplified form as a prelude to the more detailed description that is presented later. Various embodiments are generally directed to techniques for directive-based messaging synchronization. In one embodiment, for example, an apparatus may comprise a local network component operative to receive a directive package at a messaging client on a client device; and a local database synchronization component operative to execute the directive package with a messaging-sync virtual machine to modify a local messaging database of the messaging client; and refresh a user interface component of the messaging client in response to modifying the local messaging database of the messaging client. Other embodiments are described and claimed. To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings. These aspects are indicative of the various ways in which the principles disclosed herein can be practiced and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 illustrates an embodiment of a messaging synchronization system. FIG. 2 illustrates an embodiment of a social graph. FIG. 3 illustrates an embodiment of a messaging synchronization system performing a messaging synchronization exchange. FIG. 4A illustrates an embodiment of a messaging synchronization flow for an initial messaging synchronization of a messaging synchronization system. FIG. 4B illustrates an embodiment of a messaging synchronization flow for a messaging synchronization resumption of a messaging synchronization system. FIG. 5 illustrates an embodiment of an execution of a directive package. FIG. 6 illustrates an embodiment of a logic flow for the system of FIG. 1. FIG. 7 illustrates an embodiment of a centralized system for the system of FIG. 1. FIG. 8 illustrates an embodiment of a distributed system for the system of FIG. 1. FIG. 9 illustrates an embodiment of a computing architecture. FIG. 10 illustrates an embodiment of a communications architecture. FIG. 11 illustrates an embodiment of a radio device architecture. DETAILED DESCRIPTION Users access a messaging system using a messaging client. Messaging servers of the messaging system interoperate with the messaging client to update the state of the messaging client. This may reflect the user receiving a message from another user via the messaging system, information about one or more of a user's contacts, messaging system notifications, or any other providing of information by the messaging system to the messaging client. Various techniques may be used to update the state of a messaging client. 
In some embodiments, a messaging system may use a messaging synchronization scheme based around a complex messaging client using a complex messaging synchronization protocol, where the synchronization protocol represents each of the various possible synchronization actions (e.g., providing a message, modifying contact state) via explicit synchronization commands, each associated with a particular action and predefined in the synchronization protocol and on the messaging client. This technique has multiple disadvantages. The complexity of the client results in a large messaging client in terms of application binary size and therefore increases both the storage space used for it on the client and the time used to load the messaging client into active memory on the client device. Further, a great many modifications to the operation of the messaging system involve modifications to the messaging client and therefore an application update on the client device. Alternatively, a more streamlined synchronization protocol can be used. Instead of a complex synchronization protocol with individual messaging operations individually represented in the protocol, the synchronization protocol is built around updating a local database of the messaging client on the client device. The user interface of the messaging client is built to update itself exclusively using queries to the local database. As such, the messaging client is updated by the messaging servers updating the local database on the client device and then the user interface refreshing based on the updates to the local database. This may result in a smaller binary size for the messaging client as compared to other techniques, reducing the storage space used on the client device and the time used to load the messaging client into memory, increasing responsiveness. Further, many changes to the operation of the messaging client may be implemented by exclusively updating the operation of the messaging servers without any need to update the messaging client application. Updates to the local database may be performed by sending database-update directives from the messaging servers to the messaging client. Database-update directives are commands instructing the messaging client how to update the local database. The database-update directives use a messaging-database-update command set specific to messaging database updating. A messaging-sync-specific virtual machine in the messaging client executes the database-update directives to update the local database and thereby provide messaging service to the messaging client. The directives are generated on the messaging servers for the messaging client. The messaging servers generate the directives specifically for the messaging client, based on, for instance, the version of the messaging client, as different versions of the messaging client may use different database schemas and/or rely on different formats for messaging data. The messaging servers execute messaging providers to generate the directives. A first layer of messaging providers determines high-level operations that are agnostic to the messaging client version. These high-level operations represent updates to the messaging client that are more recent than the current state of the messaging client. These high-level operations are then translated to version-specific directives by a second layer of messaging providers specific to a particular database schema used by one or more versions of the messaging client.
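As an illustration and not as part of the claimed disclosure, the two-layer provider arrangement might be sketched as follows. The operation shape, the schema variants, and the SQL text are all invented for the example; the patent does not disclose a concrete instruction set:

def high_level_ops():
    # Layer 1: version-agnostic operations representing updates more
    # recent than the client's current state (shape invented here).
    return [{"op": "add_message", "thread": "t1",
             "message_id": "m9", "body": "hello"}]

def translate(ops, client_version):
    # Layer 2: translate each operation into directives matching the
    # database schema of the given client version (schemas invented).
    directives = []
    for op in ops:
        if client_version >= 2:
            directives.append(
                ("INSERT INTO messages (id, thread_id, body) VALUES (?, ?, ?)",
                 (op["message_id"], op["thread"], op["body"])))
        else:
            directives.append(
                ("INSERT INTO msgs (id, body) VALUES (?, ?)",
                 (op["message_id"], op["body"])))
    return directives

directives = translate(high_level_ops(), client_version=2)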
These version-specific directives are then provided to the messaging client to update its local database. As such, a messaging system may be implemented with a small, efficient client that uses little storage space on the client device and loads quickly. Further, the messaging system may be more significantly updated without modification of the messaging client application as compared to other synchronization techniques. Further, the messaging provider system may reduce the complexity of supporting multiple client versions.
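Continuing the illustration, and again not as part of the claimed disclosure, a minimal "messaging-sync virtual machine" on the client could execute directives shaped like those in the previous sketch against a local SQLite database and then let the user interface refresh itself from that database:

import sqlite3

# Local messaging database on the client (in memory for the sketch).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (id TEXT PRIMARY KEY, thread_id TEXT, body TEXT)")

def refresh_ui():
    # The user interface updates itself exclusively by querying the
    # local database; here it just prints every stored message.
    for row in db.execute("SELECT id, thread_id, body FROM messages"):
        print(row)

def run_directives(directives):
    # The "virtual machine": execute each database-update directive
    # against the local messaging database, then refresh the UI.
    for sql, params in directives:
        db.execute(sql, params)
    db.commit()
    refresh_ui()

run_directives([
    ("INSERT INTO messages (id, thread_id, body) VALUES (?, ?, ?)",
     ("m9", "t1", "hello")),
])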
Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives consistent with the claimed subject matter. It is worthy to note that “a” and “b” and “c” and similar designators as used herein are intended to be variables representing any positive integer. Thus, for example, if an implementation sets a value for a=5, then a complete set of components 122 illustrated as components 122-1 through 122-a may include components 122-1, 122-2, 122-3, 122-4 and 122-5. The embodiments are not limited in this context. FIG. 1 illustrates a block diagram for a messaging synchronization system 100. In one embodiment, the messaging synchronization system 100 may comprise a computer-implemented system having software applications comprising one or more components. Although the messaging synchronization system 100 shown in FIG. 1 has a limited number of elements in a certain topology, it may be appreciated that the messaging synchronization system 100 may include more or fewer elements in alternate topologies as desired for a given implementation. The client messaging synchronization server system 110 may comprise one or more client messaging synchronization servers operated by a messaging platform as part of the messaging synchronization system 100. A client messaging synchronization server may comprise an Internet-accessible server, with the network 120 connecting the various devices of the messaging synchronization system 100 comprising, at least in part, the Internet. A user may own and operate a smartphone device 150. The smartphone device 150 may comprise an iPhone® device, an Android® device, a Blackberry® device, or any other mobile computing device conforming to a smartphone form. The smartphone device 150 may be a cellular device capable of connecting to a network 120 via a cell system 130 using cellular signals 135. In some embodiments and in some cases the smartphone device 150 may additionally or alternatively use Wi-Fi or other networking technologies to connect to the network 120. The smartphone device 150 may execute a messaging client, web browser, or other local application to access the client messaging synchronization server system 110. The same user may own and operate a tablet device 160. The tablet device 160 may comprise an iPad® device, an Android® tablet device, a Kindle Fire® device, or any other mobile computing device conforming to a tablet form. The tablet device 160 may be a Wi-Fi device capable of connecting to a network 120 via a Wi-Fi access point 140 using Wi-Fi signals 145. In some embodiments and in some cases the tablet device 160 may additionally or alternatively use cellular or other networking technologies to connect to the network 120. The tablet device 160 may execute a messaging client, web browser, or other local application to access the client messaging synchronization server system 110. The same user may own and operate a personal computer device 180. The personal computer device 180 may comprise a Mac OS® device, Windows® device, Linux® device, or other computer device running another operating system. The personal computer device 180 may be an Ethernet device capable of connecting to a network 120 via an Ethernet connection. In some embodiments and in some cases the personal computer device 180 may additionally or alternatively use cellular, Wi-Fi, or other networking technologies to connect to the network 120. The personal computer device 180 may execute a messaging client, web browser 170, or other local application to access the client messaging synchronization server system 110. A messaging client may be a dedicated messaging client. A dedicated messaging client may be specifically associated with a messaging provider administering the messaging platform including the client messaging synchronization server system 110. Alternatively, a messaging client may be a general client operative to work with a plurality of different messaging providers including the messaging provider administering the messaging platform including the client messaging synchronization server system 110. The messaging client may be a component of an application providing additional functionality. For example, a social networking service may provide a social networking application for use on a mobile device for accessing and using the social networking service. The social networking service may include messaging functionality such as may be provided by one or more elements of the client messaging synchronization server system 110. It will be appreciated that messaging servers for the client messaging synchronization server system 110 may be one component of a computing device for the social networking service, with the computing device providing additional functionality of the social networking service. Similarly, the social networking application may provide both messaging functionality and additional social networking functionality. In some cases a messaging endpoint may retain state between user sessions and in some cases a messaging endpoint may relinquish state between user sessions. A messaging endpoint may use a local store to retain the current state of a message inbox. This local store may be saved in persistent storage such that the state may be retrieved between one session and the next, including situations in which, for example, a local application is quit or otherwise removed from memory or a device is powered off and on again. Alternatively, a messaging endpoint may use a memory cache to retain the current state of a message inbox but refrain from committing the state of the message inbox to persistent storage. The messaging endpoint may use a local store that is replicated across multiple devices, which may include one or both of other client devices and server devices.
A messaging endpoint that retains the state of a message inbox may comprise a dedicated messaging application or a messaging utility integrated into another local application, such as a social networking application. A messaging endpoint that relinquishes state of a message inbox may comprise messaging access implemented within a web browser. In one embodiment, a web browser, such as web browser 170 executing on personal computer device 180, may execute HTML code that interacts with the messaging server to present messaging functionality to a user. A user may save and retrieve data from a plurality of devices, including the smartphone device 150, tablet device 160, and personal computer device 180. The user may use a first messaging application on the smartphone device 150, a second messaging application on the tablet device 160, and the web browser 170 on the personal computer device 180. The first and second messaging applications may comprise installations of the same application on both devices. The first and second messaging applications may comprise a smartphone-specific and a tablet-specific version of a common application. The first and second messaging applications may comprise distinct applications. The user may benefit from having their message inbox, application configurations, and/or other data kept consistent between their devices. A user may use their smartphone device 150 on the cell system 130 while away from their home, sending and receiving messages via the cell system 130. The user may stop by a coffee shop, or other location offering Wi-Fi, and connect their tablet device 160 to a Wi-Fi access point 140. The tablet device 160 may retrieve its existing known state for the message inbox and receive updates that have happened since the last occasion on which the tablet device 160 had access to a network, including any messages sent by the smartphone device 150 and that may have been received by the user while operating the smartphone device 150. The user may then return home and access their message inbox using a web browser 170 on a personal computer device 180. The web browser 170 may receive a snapshot of the current state of the message inbox from the client messaging synchronization server system 110 because it does not maintain, or otherwise does not have access to, an existing state for the message inbox. The web browser 170 may then retrieve incremental updates for any new changes to the state of the message inbox so long as it maintains a user session with the client messaging synchronization server system 110, discarding its known state for the message inbox at the end of the session, such as when the web browser 170 is closed by the user. Without limitation, an update may correspond to the addition of a message to a mailbox, a deletion of a message from a mailbox, or a read receipt. The client messaging synchronization server system 110 may use knowledge generated from interactions between users. The client messaging synchronization server system 110 may comprise a component of a social-networking system and may use knowledge generated from the broader interactions of the social-networking system.
As such, to protect the privacy of the users of the client messaging synchronization server system 110 and the larger social-networking system, client messaging synchronization server system 110 may include an authorization server (or other suitable component(s)) that allows users to opt in to or opt out of having their actions logged by the client messaging synchronization server system 110 or shared with other systems (e.g., third-party systems), for example, by setting appropriate privacy settings. A privacy setting of a user may determine what information associated with the user may be logged, how information associated with the user may be logged, when information associated with the user may be logged, who may log information associated with the user, whom information associated with the user may be shared with, and for what purposes information associated with the user may be logged or shared. Authorization servers or other authorization components may be used to enforce one or more privacy settings of the users of the client messaging synchronization server system 110 and other elements of a social-networking system through blocking, data hashing, anonymization, or other suitable techniques as appropriate. FIG. 2 illustrates an example of a social graph 200. In particular embodiments, a social-networking system may store one or more social graphs 200 in one or more data stores as a social graph data structure. In particular embodiments, social graph 200 may include multiple nodes, which may include multiple user nodes 202 and multiple concept nodes 204. Social graph 200 may include multiple edges 206 connecting the nodes. In particular embodiments, a social-networking system, client system, third-party system, or any other system or device may access social graph 200 and related social-graph information for suitable applications. The nodes and edges of social graph 200 may be stored as data objects, for example, in a data store (such as a social-graph database). Such a data store may include one or more searchable or queryable indexes of nodes or edges of social graph 200. In particular embodiments, a user node 202 may correspond to a user of the social-networking system. As an example and not by way of limitation, a user may be an individual (human user), an entity (e.g., an enterprise, business, or third-party application), or a group (e.g., of individuals or entities) that interacts or communicates with or over the social-networking system. In particular embodiments, when a user registers for an account with the social-networking system, the social-networking system may create a user node 202 corresponding to the user, and store the user node 202 in one or more data stores. Users and user nodes 202 described herein may, where appropriate, refer to registered users and user nodes 202 associated with registered users. In addition or as an alternative, users and user nodes 202 described herein may, where appropriate, refer to users that have not registered with the social-networking system. In particular embodiments, a user node 202 may be associated with information provided by a user or information gathered by various systems, including the social-networking system. As an example and not by way of limitation, a user may provide their name, profile picture, contact information, birth date, sex, marital status, family status, employment, education background, preferences, interests, or other demographic information. 
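As a rough sketch, and not as the disclosed implementation, the nodes and edges described above might be stored as data objects along the following lines; the type names and attribute fields are hypothetical.

```python
# Hypothetical data objects for a social graph 200: user nodes 202,
# concept nodes 204, and edges 206 held in a queryable store.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str                       # "user" or "concept"
    attributes: dict = field(default_factory=dict)

@dataclass
class Edge:
    source: str                          # node_id of one endpoint
    target: str                          # node_id of the other endpoint
    edge_type: str                       # e.g., "friend", "like", "listened"

class SocialGraph:
    def __init__(self):
        self.nodes = {}                  # node_id -> Node
        self.edges = []                  # list of Edge objects

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def add_edge(self, edge):
        self.edges.append(edge)

graph = SocialGraph()
graph.add_node(Node("u1", "user", {"name": "Amanda"}))
graph.add_node(Node("u2", "user", {"name": "Dorothy"}))
graph.add_edge(Edge("u1", "u2", "friend"))   # a friend relation edge 206
```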
In particular embodiments, a user node 202 may be associated with one or more data objects corresponding to information associated with a user. In particular embodiments, a user node 202 may correspond to one or more webpages. A user node 202 may be associated with a unique user identifier for the user in the social-networking system. In particular embodiments, a concept node 204 may correspond to a concept. As an example and not by way of limitation, a concept may correspond to a place (such as, for example, a movie theater, restaurant, landmark, or city); a website (such as, for example, a website associated with the social-network service or a third-party website associated with a web-application server); an entity (such as, for example, a person, business, group, sports team, or celebrity); a resource (such as, for example, an audio file, video file, digital photo, text file, structured document, or application) which may be located within the social-networking system or on an external server, such as a web-application server; real or intellectual property (such as, for example, a sculpture, painting, movie, game, song, idea, photograph, or written work); a game; an activity; an idea or theory; another suitable concept; or two or more such concepts. A concept node 204 may be associated with information of a concept provided by a user or information gathered by various systems, including the social-networking system. As an example and not by way of limitation, information of a concept may include a name or a title; one or more images (e.g., an image of the cover page of a book); a location (e.g., an address or a geographical location); a website (which may be associated with a URL); contact information (e.g., a phone number or an email address); other suitable concept information; or any suitable combination of such information. In particular embodiments, a concept node 204 may be associated with one or more data objects corresponding to information associated with concept node 204. In particular embodiments, a concept node 204 may correspond to one or more webpages. In particular embodiments, a node in social graph 200 may represent or be represented by a webpage (which may be referred to as a “profile page”). Profile pages may be hosted by or accessible to the social-networking system. Profile pages may also be hosted on third-party websites associated with a third-party server. As an example and not by way of limitation, a profile page corresponding to a particular external webpage may be the particular external webpage and the profile page may correspond to a particular concept node 204. Profile pages may be viewable by all or a selected subset of other users. As an example and not by way of limitation, a user node 202 may have a corresponding user-profile page in which the corresponding user may add content, make declarations, or otherwise express himself or herself. A business page such as business page 205 may comprise a user-profile page for a commerce entity. As another example and not by way of limitation, a concept node 204 may have a corresponding concept-profile page in which one or more users may add content, make declarations, or express themselves, particularly in relation to the concept corresponding to concept node 204. In particular embodiments, a concept node 204 may represent a third-party webpage or resource hosted by a third-party system. 
The third-party webpage or resource may include, among other elements, content, a selectable or other icon, or other interactable object (which may be implemented, for example, in JavaScript, AJAX, or PHP codes) representing an action or activity. As an example and not by way of limitation, a third-party webpage may include a selectable icon such as “like,” “check in,” “eat,” “recommend,” or another suitable action or activity. A user viewing the third-party webpage may perform an action by selecting one of the icons (e.g., “eat”), causing a client system to send to the social-networking system a message indicating the user's action. In response to the message, the social-networking system may create an edge (e.g., an “eat” edge) between a user node 202 corresponding to the user and a concept node 204 corresponding to the third-party webpage or resource and store edge 206 in one or more data stores. In particular embodiments, a pair of nodes in social graph 200 may be connected to each other by one or more edges 206. An edge 206 connecting a pair of nodes may represent a relationship between the pair of nodes. In particular embodiments, an edge 206 may include or represent one or more data objects or attributes corresponding to the relationship between a pair of nodes. As an example and not by way of limitation, a first user may indicate that a second user is a “friend” of the first user. In response to this indication, the social-networking system may send a “friend request” to the second user. If the second user confirms the “friend request,” the social-networking system may create an edge 206 connecting the first user's user node 202 to the second user's user node 202 in social graph 200 and store edge 206 as social-graph information in one or more data stores. In the example of FIG. 2, social graph 200 includes an edge 206 indicating a friend relation between user nodes 202 of user “Amanda” and user “Dorothy.” Although this disclosure describes or illustrates particular edges 206 with particular attributes connecting particular user nodes 202, this disclosure contemplates any suitable edges 206 with any suitable attributes connecting user nodes 202. As an example and not by way of limitation, an edge 206 may represent a friendship, family relationship, business or employment relationship, fan relationship, follower relationship, visitor relationship, subscriber relationship, superior/subordinate relationship, reciprocal relationship, non-reciprocal relationship, another suitable type of relationship, or two or more such relationships. Moreover, although this disclosure generally describes nodes as being connected, this disclosure also describes users or concepts as being connected. Herein, references to users or concepts being connected may, where appropriate, refer to the nodes corresponding to those users or concepts being connected in social graph 200 by one or more edges 206. In particular embodiments, an edge 206 between a user node 202 and a concept node 204 may represent a particular action or activity performed by a user associated with user node 202 toward a concept associated with a concept node 204. As an example and not by way of limitation, as illustrated in FIG. 2, a user may “like,” “attended,” “played,” “listened,” “cooked,” “worked at,” or “watched” a concept, each of which may correspond to an edge type or subtype.
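The mapping from a user action to an edge type can be made concrete with a short, self-contained sketch; the mapping table and function below are invented for illustration and are not taken from the disclosure.

```python
# Hypothetical mapping from user actions on a concept to edge types,
# and the formation of the corresponding edge 206.
ACTION_TO_EDGE_TYPE = {
    "like": "like",
    "check in": "checked in",
    "eat": "ate",
    "listen": "listened",
    "watch": "watched",
}

def form_edge(edges, user_node_id, concept_node_id, action):
    # edges is a list of (source, target, edge_type) triples.
    edge_type = ACTION_TO_EDGE_TYPE.get(action)
    if edge_type is not None:
        edges.append((user_node_id, concept_node_id, edge_type))

edges = []
form_edge(edges, "user:Carla", "concept:Across the Sea", "listen")
assert edges == [("user:Carla", "concept:Across the Sea", "listened")]
```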
A concept-profile page corresponding to a concept node 204 may include, for example, a selectable “check in” icon (such as, for example, a clickable “check in” icon) or a selectable “add to favorites” icon. Similarly, after a user clicks these icons, the social-networking system may create a “favorite” edge or a “check in” edge in response to a user's action corresponding to a respective action. As another example and not by way of limitation, a user (user “Carla”) may listen to a particular song (“Across the Sea”) using a particular application (SPOTIFY, which is an online music application). In this case, the social-networking system may create a “listened” edge 206 and a “used” edge (as illustrated in FIG. 2) between user nodes 202 corresponding to the user and concept nodes 204 corresponding to the song and application to indicate that the user listened to the song and used the application. Moreover, the social-networking system may create a “played” edge 206 (as illustrated in FIG. 2) between concept nodes 204 corresponding to the song and the application to indicate that the particular song was played by the particular application. In this case, “played” edge 206 corresponds to an action performed by an external application (SPOTIFY) on an external audio file (the song “Across the Sea”). Although this disclosure describes particular edges 206 with particular attributes connecting user nodes 202 and concept nodes 204, this disclosure contemplates any suitable edges 206 with any suitable attributes connecting user nodes 202 and concept nodes 204. Moreover, although this disclosure describes edges between a user node 202 and a concept node 204 representing a single relationship, this disclosure contemplates edges between a user node 202 and a concept node 204 representing one or more relationships. As an example and not by way of limitation, an edge 206 may represent both that a user likes and has used a particular concept. Alternatively, another edge 206 may represent each type of relationship (or multiples of a single relationship) between a user node 202 and a concept node 204 (as illustrated in FIG. 2 between user node 202 for user “Edwin” and concept node 204 for “SPOTIFY”). In particular embodiments, the social-networking system may create an edge 206 between a user node 202 and a concept node 204 in social graph 200. As an example and not by way of limitation, a user viewing a concept-profile page (such as, for example, by using a web browser or a special-purpose application hosted by the user's client system) may indicate that he or she likes the concept represented by the concept node 204 by clicking or selecting a “Like” icon, which may cause the user's client system to send to the social-networking system a message indicating the user's liking of the concept associated with the concept-profile page. In response to the message, the social-networking system may create an edge 206 between user node 202 associated with the user and concept node 204, as illustrated by “like” edge 206 between the user and concept node 204. In particular embodiments, the social-networking system may store an edge 206 in one or more data stores. In particular embodiments, an edge 206 may be automatically formed by the social-networking system in response to a particular user action.
As an example and not by way of limitation, if a first user uploads a picture, watches a movie, or listens to a song, an edge 206 may be formed between user node 202 corresponding to the first user and concept nodes 204 corresponding to those concepts. Although this disclosure describes forming particular edges 206 in particular manners, this disclosure contemplates forming any suitable edges 206 in any suitable manner. The social graph 200 may further comprise a plurality of product nodes. Product nodes may represent particular products that may be associated with a particular business. A business may provide a product catalog to the consumer-to-business service 110 and the consumer-to-business service 110 may therefore represent each of the products within the product catalog in the social graph 200 with each product being in a distinct product node. A product node may comprise information relating to the product, such as pricing information, descriptive information, manufacturer information, availability information, and other relevant information. For example, each of the items on a menu for a restaurant may be represented within the social graph 200 with a product node describing each of the items. A product node may be linked by an edge to the business providing the product. Where multiple businesses provide a product, each business may have a distinct product node associated with its providing of the product or may each link to the same product node. A product node may be linked by an edge to each user that has purchased, rated, owns, recommended, or viewed the product, with the edge describing the nature of the relationship (e.g., purchased, rated, owns, recommended, viewed, or other relationship). Each of the product nodes may be associated with a graph id and an associated merchant id by virtue of the linked merchant business. Products available from a business may therefore be communicated to a user by retrieving the available product nodes linked to the user node for the business within the social graph 200. The information for a product node may be manipulated by the social-networking system as a product object that encapsulates information regarding the referenced product. FIG. 3 illustrates an embodiment of a messaging synchronization system 100 performing a messaging synchronization exchange 310. The messaging synchronization system 100 may comprise a plurality of components. In some embodiments, this plurality of components may be distributed among a plurality of servers. In other embodiments, a single server may implement the plurality of components. In some embodiments, a plurality of servers may be executed by a single server device. In other embodiments, the plurality of servers may be executed by a plurality of server devices. In some embodiments, multiple instances of the various components and various servers may be executed to provide redundancy, improved scaling, and other benefits. Similarly, a client device may execute a plurality of components as part of a local client application. A client device may communicate with other devices using wireless transmissions to exchange network traffic. Exchanging network traffic, such as may be included in the exchange of messaging or database synchronization transactions, may comprise sending and receiving network traffic via a network interface controller (NIC). A NIC comprises a hardware component connecting a computer device, such as a client device, to a computer network.
The NIC may be associated with a software network interface empowering software applications to access and use the NIC. Network traffic may be received over the computer network as signals transmitted over data links. The network traffic may be received by capturing these signals and interpreting them. The NIC may receive network traffic over the computer network and send the network traffic to memory storage accessible to software applications using a network interface application programming interface (API). The network interface controller may be used for the network activities of the embodiments described herein, including the interoperation of the clients and servers through network communication. For example, a client device sending or receiving messaging synchronization information to or from a server may be interpreted as using the network interface controller for network access to a communications network for the sending or receiving of data. The messaging synchronization system 100 is operative to synchronize a local messaging database 329 with the current user state for a messaging system. The local messaging database 329 is stored on a client device 320 and used with a messaging application for the messaging system. The messaging synchronization system 100 includes a client messaging synchronization server system 110 providing transport for the delivery of messaging synchronization information to a client device 320. Providing messaging synchronization information to a client device 320 is performed in a messaging synchronization exchange 310. The client device 320 may comprise a plurality of components. The components may comprise elements of a local client application comprising a messaging client 325 executing on the client device 320. In general, the local application may comprise, without limitation, a messaging application and/or a social-networking application. In some embodiments, the messaging synchronization may be performed for a local messaging database 329 exclusively used by the messaging client 325 of which the components are an element. In other embodiments, the messaging synchronization may be performed for a local messaging database 329 used by a plurality of applications on the client device 320. The messaging client 325 on the client device 320 may comprise a local network component 326. The local network component 326 may be generally arranged to manage the messaging synchronization exchange 310 with a server-side client communication component 330, in which information is exchanged between the client device 320 and the client messaging synchronization server system 110 to provide messaging services to the client device 320. The messaging client 325 on the client device 320 may comprise a local database synchronization component 323. The local database synchronization component 323 may be generally arranged to update a local messaging database 329 based on the messaging synchronization exchange 310 between the client device 320 and the client messaging synchronization server system 110. The local database synchronization component 323 may receive directives for updating the local messaging database 329 from the client messaging synchronization server system 110 and update the local messaging database 329 by executing the directives. The messaging client 325 on the client device 320 may comprise a database access component 331.
The database access component 331 may be generally arranged to access the local messaging database 329 and to intermediate between the local messaging database 329 and the local user interface components 321. The database access component 331 executes queries on behalf of the local user interface components 321 to provide them with updated data and modifies the local messaging database 329 in response to user interactions with the local user interface components 321. The messaging client 325 on the client device 320 may comprise a plurality of local user interface components 321. The plurality of local user interface components 321 collectively comprise a user interface for the messaging client to the user of the client device 320 via the hardware components of the client device 320. The user interface may comprise visual elements, auditory elements, and/or other elements. The plurality of local user interface components 321 may be exclusively updated with the local messaging database 329 as an intermediary, such that all information used to update the user interface for the messaging client is provided to the plurality of local user interface components 321 via the modification of the local messaging database 329 by directives processed by the local database synchronization component 323. The client device 320 receives messaging services by interacting with, and exchanging information with, a server-side client communication component 330. The server-side client communication component 330 is the access point for the client device 320 to the messaging system, with communication with the server-side client communication component 330 performed via a messaging synchronization exchange 310. The server-side client communication component 330 receives requests and other information from the client device 320 and provides information to the client device 320 from the other components of the messaging synchronization system 100. A server-side client database management component 340 arranges directives for execution by the local database synchronization component 323 of a client device. The server-side client database management component 340 arranges the directives to, at least in part, deliver messages to the client device 320. Messages, as well as other information, may be retrieved from a message queue 365 via a message queue management component 360, from a messaging backend store 370, and from a client information store 350. The messaging backend store 370 may comprise a long-term message archive that may be used to retrieve archived messages, which may be all messages that are sufficiently old to have been archived by a message archival process. The client information store 350 may generally store client information other than messages. A message queue 365 may queue (store and place an ordering on) a plurality of messages. The message queue 365 may comprise a representation of messages in a strict linear order. The message queue 365 may be organized as a data unit according to a variety of techniques. The message queue 365 may be stored in semi-persistent memory, persistent storage, or a combination of the two. The message queue 365 may be organized according to a variety of data structures, including linked lists, arrays, and other techniques for organizing queues.
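A minimal sketch of one such organization, assuming an in-memory deque as the backing data structure, is given below; it also exhibits the first-in-first-out behavior elaborated in the next paragraph.

```python
# Sketch only: a per-user message queue 365 that stores and orders
# updates so that no update leaves the queue before any update
# received prior to it.
from collections import deque

class MessageQueue:
    def __init__(self):
        self._updates = deque()

    def enqueue(self, update):
        self._updates.append(update)     # newest updates join the back

    def dequeue(self):
        return self._updates.popleft()   # oldest update leaves first

queue = MessageQueue()
queue.enqueue({"type": "message", "text": "hi"})
queue.enqueue({"type": "read_receipt", "message_id": "m1"})
assert queue.dequeue()["type"] == "message"   # strict linear order
```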
The message queue 365 may generally comprise a first-in-first-out (FIFO) queue in which no update will be removed or retrieved from the queue before any updates that were received prior to it. The message queue 365 may be managed by a message queue management component 360. The message queue management component 360 is generally arranged to provide messages for distribution to client devices, such as client device 320, to a server-side client database management component 340, which thereafter arranges directives to store the messages in the local messaging database 329 for eventual display to the user of the client device 320. In some embodiments, a message queue 365 may be specifically associated with the user of client device 320, such as by being uniquely associated within the messaging synchronization system 100 with a user account for the user of client device 320. The message queue 365 may be a single queue used for all client devices used by this user. In these embodiments, each user of the messaging synchronization system 100 may have a message queue associated with their account, with this message queue used to send messages to one or more client devices for that user. FIG. 4A illustrates an embodiment of a messaging synchronization flow 400 for an initial messaging synchronization of a messaging synchronization system 100. An initial messaging synchronization is performed when a messaging client is initially configured. This may occur when a messaging client is initially installed on a client device 420. This may occur when a user first accesses or creates their account with the messaging system on the client device 420. The client device 420 performs a client initial connection 413 with a gateway 440. The gateway 440 is the entry point for the client device 420 into the client messaging synchronization server system 110. The gateway 440 corresponds to the server-side client communication component 330 described with reference to FIG. 3. The client initial connection 413 comprises a request to the gateway 440 to start synching a local messaging database for the client device 420, such as the local messaging database 329 described with reference to FIG. 3. The gateway 440 then performs a gateway sync request 416 with a broker 460 in response to the client initial connection 413. The broker 460 generates the directives used to update the local messaging database of the client device 420. The broker 460 corresponds to the server-side client database management component 340 described with reference to FIG. 3. The gateway sync request 416 requests on behalf of the client device 420 that the broker 460 generate directives to perform the initial synchronization of the client device 420. The gateway sync request 416 includes identifying information associated with the client initial connection 413, such as may include a user identifier for the user of the client device 420, a messaging client identifier for a version of the messaging client, and/or other identifying information. The broker 460 then performs a broker initial sync 419 with a data backend 480 in response to the gateway sync request 416. The broker initial sync 419 retrieves data from the data backend 480 to perform the initial sync of the client device 420. The data backend 480 may comprise the message queue management component 360, message queue 365, messaging backend store 370, and/or client information store 350 as described with reference to FIG. 3.
The data backend 480 may alternatively or additionally comprise other data storage components. The data backend 480 provides the initial sync data 423 to the broker 460. The broker 460 assembles the initial sync data 423 into directives that, when executed on the client device 420, update the local messaging database on the client device 420 to initialize the messaging state on the client device 420. These initial sync directives 426 are then provided to the gateway 440. The gateway 440 then provides them to the client device 420, which executes them to update its local messaging database. FIG. 4B illustrates an embodiment of a messaging synchronization flow 450 for a messaging synchronization resumption of a messaging synchronization system 100. A messaging synchronization resumption is performed when a messaging client resumes synchronization with the messaging system. The resumption of synchronization may correspond to a client connecting to the messaging system, via the gateway 440, after an initial synchronization has already been performed. The resumption of synchronization may generally correspond to a client requesting updates from the messaging system. The client device 420 performs a client reconnection 463 with the gateway 440. The client reconnection 463 comprises a request to the gateway 440 to restart synching of the local messaging database for the client device 420. The gateway 440 then performs a gateway sync resume 466 with the broker 460 in response to the client reconnection 463. The gateway sync resume 466 requests on behalf of the client device 420 that the broker 460 generate directives to perform a resumed synchronization of the client device 420. The gateway sync resume 466 includes identifying information associated with the client reconnection 463, such as may include a user identifier for the user of the client device 420, a messaging client identifier for a version of the messaging client, and/or other identifying information. The broker 460 then performs a broker sync session 469 with the data backend 480 in response to the gateway sync resume 466. The broker sync session 469 retrieves data from the data backend 480 to perform a sync to update the client device 420 to the current state of a user's messaging inbox and general messaging state. The data backend 480 provides the sync data 473 to the broker 460. The broker 460 assembles the sync data 473 into sync directives 476 that, when executed on the client device 420, update the local messaging database on the client device 420 to the updated messaging state for the user on the client device 420. These sync directives 476 are then provided to the gateway 440. The gateway 440 then provides the sync directives 476 to the client device 420 in a sync delivery 479; the client device 420 executes them to update its local messaging database. FIG. 5 illustrates an embodiment of an execution of a directive package 510. The client messaging synchronization server system 110 provides a directive package 510 to the messaging client 325 on a client device 320. The directive package 510 comprises a plurality of executable directives within a messaging-database-sync-specific instruction set. The messaging-database-sync-specific instruction set defines the available set of executable directives from which the plurality of executable directives of the directive package 510 is constructed.
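One possible concrete shape for such a package, sketched under the assumption of a JSON-like encoding with hypothetical opcode names, is shown below; neither the opcodes nor the field names are taken from the disclosure.

```python
# Illustrative directive package 510: an ordered list of executable
# directives drawn from a small messaging-database-sync-specific
# instruction set.
INSTRUCTION_SET = {"INSERT", "DELETE", "UPDATE", "QUERY", "IF_NOT"}

directive_package = [
    {"op": "INSERT", "table": "messages",
     "row": {"message_id": "m7", "thread_id": "t42", "text": "12:30?"}},
    {"op": "UPDATE", "table": "threads",
     "where": {"thread_id": "t42"}, "set": {"snippet": "12:30?"}},
    {"op": "DELETE", "table": "messages",
     "where": {"message_id": "m1"}},
]

# Every directive in the package must come from the instruction set.
assert all(d["op"] in INSTRUCTION_SET for d in directive_package)
```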
The local network component 326 of the messaging client 325 on the client device 320 receives the directive package 510 from the client messaging synchronization server system 110 and provides it to the local database synchronization component 323. The local database synchronization component 323 comprises a messaging-sync virtual machine 550. The messaging-sync virtual machine 550 is operative to execute directives as they are defined by the messaging-database-sync-specific instruction set. The messaging-sync virtual machine 550 may be implemented according to known virtual machine techniques for implementing a defined instruction set. The local database synchronization component 323 executes the directive package 510 with the messaging-sync virtual machine 550 to modify the local messaging database 329 of the messaging client 325. The executable directives of the directive package 510 specify database modifications 530 for the local messaging database 329. The messaging-sync virtual machine 550 performs the database modifications 530 based on the executable directives. The messaging-database-sync-specific instruction set defines directives for adding database elements to the local messaging database 329, deleting database elements from the local messaging database 329, and for modifying database elements of the local messaging database 329. The messaging-database-sync-specific instruction set supports conditional statements, wherein different branching paths of execution are selected between based on the evaluation of the conditional statements. Conditional statements may be evaluated based on queries to the local messaging database 329, such that the results of the queries to the local messaging database 329 determine the path of execution of the executable directives of the directive package 510. Once the database modifications 530 prompted by the directive package 510 are complete, the local database synchronization component 323 refreshes one or more local user interface components 321 of the messaging client 325 in response to modifying the local messaging database 329 of the messaging client 325. The local database synchronization component 323 refreshes the one or more local user interface components 321 by notifying the database access component 331 of the update to the local messaging database 329. In response to the notification, the database access component 331 interoperates with the local user interface components 321 to refresh the local user interface of the messaging client 325 such that the database modifications 530 driven by the executable directives of the directive package 510 are represented in the user interface. The directive package may comprise a received message update. A received message update comprises an incoming message for the user of the messaging client 325, such as a user-to-user message addressed to the user or a group message for a group of which the user is a member. The received message update comprises directives to perform database modifications 530 that add the incoming message to the local messaging database 329. Notifying the local user interface to refresh will then make the incoming message available to the user of the client device 320 once the refresh is performed.
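A toy interpreter, offered only as a sketch under the hypothetical opcodes introduced above, shows how such a virtual machine might apply directives to a local database modeled as dictionary-backed tables, including branching driven by local database queries.

```python
# Sketch of a messaging-sync virtual machine 550; not the disclosed
# implementation.
class MessagingSyncVM:
    def __init__(self, database):
        self.db = database        # e.g., {"threads": [], "messages": []}
        self.bindings = {}        # names bound by QUERY directives

    def _matches(self, row, where):
        return all(row.get(k) == v for k, v in where.items())

    def execute(self, directives):
        for d in directives:
            op = d["op"]
            if op == "INSERT":
                self.db.setdefault(d["table"], []).append(d["row"])
            elif op == "DELETE":
                rows = self.db.get(d["table"], [])
                self.db[d["table"]] = [
                    r for r in rows if not self._matches(r, d["where"])]
            elif op == "UPDATE":
                for r in self.db.get(d["table"], []):
                    if self._matches(r, d["where"]):
                        r.update(d["set"])
            elif op == "QUERY":
                rows = self.db.get(d["table"], [])
                self.bindings[d["bind"]] = any(
                    self._matches(r, d["where"]) for r in rows)
            elif op == "IF_NOT":
                # Branch selected by the result of a prior local query.
                if not self.bindings.get(d["cond"], False):
                    self.execute(d["then"])

db = {"threads": [], "messages": []}
MessagingSyncVM(db).execute([
    {"op": "QUERY", "table": "threads",
     "where": {"thread_id": "t42"}, "bind": "thread_exists"},
    {"op": "IF_NOT", "cond": "thread_exists", "then": [
        {"op": "INSERT", "table": "threads", "row": {"thread_id": "t42"}},
    ]},
])
assert db["threads"] == [{"thread_id": "t42"}]
```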
To control the amount of client-device-specific data stored by the client messaging synchronization server system 110, the client messaging synchronization server system 110 may be configured such that it does not maintain a record, or does not maintain a complete record, of what messaging data is stored on the client device 320. Instead, it may use conditional statements to accommodate different possible scenarios for what messaging data is stored on the client device. As such, a received message update may comprise conditional statements to handle alternative cases for the handling of an incoming message. For instance, an incoming message may be associated with a message thread not represented in the local messaging database 329 on the client device 320. The local messaging database 329 will generally contain only a subset of the messaging information associated with a user account to control the amount of storage space used on the client device 320. As such, in some instances, an incoming message may be associated with a message thread not stored on the client device 320 and the client messaging synchronization server system 110 may lack sufficient information about the state of the local messaging database 329 to know whether or not the messaging data for the message thread is stored on the client device 320. To accommodate the lack of client state information, the directive package 510 may therefore comprise conditional statements such that a single directive package 510 can produce different database modifications 530 based on the state of the local messaging database 329. A received message update may therefore comprise a conditional local thread creation portion for a message thread associated with the received message update. The local database synchronization component 323 determines whether the message thread is represented in the local messaging database 329. Determining whether the message thread is represented in the local messaging database 329 may comprise performing a query, defined by the executable directives of the directive package 510, against the local messaging database 329 that indicates whether or not the message thread is represented in the local messaging database 329. The messaging-sync virtual machine 550 then branches based on the results of the query. The local database synchronization component 323, using the messaging-sync virtual machine 550, executes the conditional local thread creation portion where the message thread is not represented in the local messaging database. The conditional local thread creation portion comprises messaging data to represent the message thread in the local messaging database 329 so that the incoming message may be displayed in-context in a message thread by the local user interface components 321 for the user of the client device 320. The conditional local thread creation portion may comprise a title of the message thread, a list of users in the message thread, and/or other message thread information. The conditional local thread creation portion may comprise a contact information bundle for the message thread, the contact information bundle comprising contact information for one or more participants in the message thread. This message thread information, such as the contact information for the one or more participants in the message thread, may then be displayed by the local user interface components 321.
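Continuing the hypothetical encoding used in the earlier sketches, a received message update with a conditional local thread creation portion might look as follows; the thread title, participant list, and contact information bundle are invented for illustration.

```python
# Sketch only: a received message update whose conditional local thread
# creation portion executes only when the thread is absent locally.
received_message_update = [
    {"op": "QUERY", "table": "threads",
     "where": {"thread_id": "t42"}, "bind": "thread_exists"},
    {"op": "IF_NOT", "cond": "thread_exists", "then": [
        # Conditional local thread creation portion.
        {"op": "INSERT", "table": "threads", "row": {
            "thread_id": "t42",
            "title": "Lunch plans",
            "participants": ["u1", "u2"],
            # Contact information bundle for the thread's participants.
            "contacts": [
                {"user_id": "u1", "name": "Amanda"},
                {"user_id": "u2", "name": "Dorothy"},
            ],
        }},
    ]},
    # Add the incoming message itself unconditionally.
    {"op": "INSERT", "table": "messages", "row": {
        "message_id": "m7", "thread_id": "t42", "text": "12:30?"}},
]
```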
The contact information may comprise, at least in part, social-networking information, such as may be represented in a social graph 200. The directive package 510 may comprise timing instructions for a next update request for the messaging client 325. The timing instructions instruct the messaging client 325 when to request further updates from the client messaging synchronization server system 110. The messaging-sync virtual machine 550 of the local database synchronization component 323 executes the timing instructions, thereby causing the messaging client 325 to wait until the timing instructions indicate that the next update request should be performed and then to perform the next update request at the indicated time. The client messaging synchronization server system 110 then sends a subsequent directive package in response to the next update request. Included herein is a set of flow charts representative of exemplary methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein, for example, in the form of a flow chart or flow diagram, are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation. FIG. 6 illustrates one embodiment of a logic flow 600. The logic flow 600 may be representative of some or all of the operations executed by one or more embodiments described herein. In the illustrated embodiment shown in FIG. 6, the logic flow 600 may receive a directive package at a messaging client on a client device at block 602. The logic flow 600 may execute the directive package with a messaging-sync virtual machine to modify a local messaging database of the messaging client at block 604. The logic flow 600 may refresh a user interface component of the messaging client in response to modifying the local messaging database of the messaging client at block 606. The embodiments are not limited to this example. FIG. 7 illustrates a block diagram of a centralized system 700. The centralized system 700 may implement some or all of the structure and/or operations for the messaging synchronization system 100 in a single computing entity, such as entirely within a single centralized server device 710. The centralized server device 710 may comprise any electronic device capable of receiving, processing, and sending information for the messaging synchronization system 100.
Examples of an electronic device may include without limitation an ultra-mobile device, a mobile device, a personal digital assistant (PDA), a mobile computing device, a smart phone, a telephone, a digital telephone, a cellular telephone, ebook readers, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a netbook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, game devices, television, digital television, set top box, wireless access point, base station, subscriber station, mobile subscriber center, radio network controller, router, hub, gateway, bridge, switch, machine, or combination thereof. The embodiments are not limited in this context. The centralized server device 710 may execute processing operations or logic for the messaging synchronization system 100 using a processing component 730. The processing component 730 may comprise various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. The centralized server device 710 may execute communications operations or logic for the messaging synchronization system 100 using communications component 740. The communications component 740 may implement any well-known communications techniques and protocols, such as techniques suitable for use with packet-switched networks (e.g., public networks such as the Internet, private networks such as an enterprise intranet, and so forth), circuit-switched networks (e.g., the public switched telephone network), or a combination of packet-switched networks and circuit-switched networks (with suitable gateways and translators). 
The communications component 740 may include various types of standard communication elements, such as one or more communications interfaces, network interfaces, network interface cards (NIC), radios, wireless transmitters/receivers (transceivers), wired and/or wireless communication media, physical connectors, and so forth. By way of example, and not limitation, communication media 712 includes wired communications media and wireless communications media. Examples of wired communications media may include a wire, cable, metal leads, printed circuit boards (PCB), backplanes, switch fabrics, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, a propagated signal, and so forth. Examples of wireless communications media may include acoustic, radio-frequency (RF) spectrum, infrared and other wireless media. The centralized server device 710 may implement the client messaging synchronization server system 110. The centralized server device 710 may communicate with a plurality of client devices 720 over a communications media 712 using communications signals 714 via the communications component 740. The client devices 720 receive messaging services from the client messaging synchronization server system 110. The signals 714 sent over media 712 may comprise directives sent from the client messaging synchronization server system 110 to client devices 720, update requests and outgoing database updates from the client devices 720 to the client messaging synchronization server system 110, and/or other messaging data. FIG. 8 illustrates a block diagram of a distributed system 800. The distributed system 800 may distribute portions of the structure and/or operations for the messaging synchronization system 100 across multiple computing entities. Examples of distributed system 800 may include without limitation a client-server architecture, a 3-tier architecture, an N-tier architecture, a tightly-coupled or clustered architecture, a peer-to-peer architecture, a master-slave architecture, a shared database architecture, and other types of distributed systems. The embodiments are not limited in this context. The distributed system 800 may comprise a plurality of server devices 810. In general, the plurality of server devices 810 may be the same or similar to the centralized server device 710 as described with reference to FIG. 7. For instance, the server devices 810 may each comprise a processing component 830 and a communications component 840 which are the same or similar to the processing component 730 and the communications component 740, respectively, as described with reference to FIG. 7. In another example, the server devices 810 may communicate over a communications media 812 using communications signals 814 via the communications components 840. The server devices 810 may each execute a client messaging synchronization server 850 corresponding to the client messaging synchronization server system 110 as described with reference to FIG. 1. The client messaging synchronization servers may execute one or more server-side client communication components, server-side client database management components, message queue management components, message queues, message backend stores, client information stores, and/or any other messaging servers. The server devices 810 may communicate with a plurality of client devices 820 over a communications media 812 using communications signals 814 via the communications component 840.
The client devices 820 receive messaging services from the client messaging synchronization servers. The signals 814 sent over media 812 may comprise directives sent from the client messaging synchronization servers to client devices 820, update requests and outgoing database updates from the client devices 820 to the client messaging synchronization servers, and/or other messaging data. FIG. 9 illustrates an embodiment of an exemplary computing architecture 900 suitable for implementing various embodiments as previously described. In one embodiment, the computing architecture 900 may comprise or be implemented as part of an electronic device. Examples of an electronic device may include those described with reference to FIG. 8, among others. The embodiments are not limited in this context. As used in this application, the terms “system” and “component” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 900. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to various signal lines. In such allocations, each message is a signal. Further embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections. Exemplary connections include parallel interfaces, serial interfaces, and bus interfaces. The computing architecture 900 includes various common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, and so forth. The embodiments, however, are not limited to implementation by the computing architecture 900. As shown in FIG. 9, the computing architecture 900 comprises a processing unit 904, a system memory 906 and a system bus 908. The processing unit 904 can be any of various commercially available processors, including without limitation an AMD® Athlon®, Duron® and Opteron® processors; ARM® application, embedded and secure processors; IBM® and Motorola® DragonBall® and PowerPC® processors; IBM and Sony® Cell processors; Intel® Celeron®, Core (2) Duo®, Itanium®, Pentium®, Xeon®, and XScale® processors; and similar processors. Dual microprocessors, multi-core processors, and other multi-processor architectures may also be employed as the processing unit 904. 
The system bus 908 provides an interface for system components including, but not limited to, the system memory 906 to the processing unit 904. The system bus 908 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. Interface adapters may connect to the system bus 908 via a slot architecture. Example slot architectures may include without limitation Accelerated Graphics Port (AGP), Card Bus, (Extended) Industry Standard Architecture ((E)ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI(X)), PCI Express, Personal Computer Memory Card International Association (PCMCIA), and the like. The computing architecture 900 may comprise or implement various articles of manufacture. An article of manufacture may comprise a computer-readable storage medium to store logic. Examples of a computer-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of logic may include executable computer program instructions implemented using any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. Embodiments may also be at least partly implemented as instructions contained in or on a non-transitory computer-readable medium, which may be read and executed by one or more processors to enable performance of the operations described herein. The system memory 906 may include various types of computer-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information. In the illustrated embodiment shown in FIG. 9, the system memory 906 can include non-volatile memory 910 and/or volatile memory 912. A basic input/output system (BIOS) can be stored in the non-volatile memory 910. The computer 902 may include various types of computer-readable storage media in the form of one or more lower speed memory units, including an internal (or external) hard disk drive (HDD) 914, a magnetic floppy disk drive (FDD) 916 to read from or write to a removable magnetic disk 918, and an optical disk drive 920 to read from or write to a removable optical disk 922 (e.g., a CD-ROM or DVD). The HDD 914, FDD 916 and optical disk drive 920 can be connected to the system bus 908 by a HDD interface 924, an FDD interface 926 and an optical drive interface 928, respectively.
The HDD interface 924 for external drive implementations can include at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies. The drives and associated computer-readable media provide volatile and/or nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For example, a number of program modules can be stored in the drives and memory units 910, 912, including an operating system 930, one or more application programs 932, other program modules 934, and program data 936. In one embodiment, the one or more application programs 932, other program modules 934, and program data 936 can include, for example, the various applications and/or components of the messaging synchronization system 100. A user can enter commands and information into the computer 902 through one or more wire/wireless input devices, for example, a keyboard 938 and a pointing device, such as a mouse 940. Other input devices may include microphones, infra-red (IR) remote controls, radio-frequency (RF) remote controls, game pads, stylus pens, card readers, dongles, fingerprint readers, gloves, graphics tablets, joysticks, keyboards, retina readers, touch screens (e.g., capacitive, resistive, etc.), trackballs, trackpads, sensors, styluses, and the like. These and other input devices are often connected to the processing unit 904 through an input device interface 942 that is coupled to the system bus 908, but can be connected by other interfaces such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, and so forth. A monitor 944 or other type of display device is also connected to the system bus 908 via an interface, such as a video adaptor 946. The monitor 944 may be internal or external to the computer 902. In addition to the monitor 944, a computer typically includes other peripheral output devices, such as speakers, printers, and so forth. The computer 902 may operate in a networked environment using logical connections via wire and/or wireless communications to one or more remote computers, such as a remote computer 948. The remote computer 948 can be a workstation, a server computer, a router, a personal computer, a portable computer, a microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 902, although, for purposes of brevity, only a memory/storage device 950 is illustrated. The logical connections depicted include wire/wireless connectivity to a local area network (LAN) 952 and/or larger networks, for example, a wide area network (WAN) 954. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, for example, the Internet. When used in a LAN networking environment, the computer 902 is connected to the LAN 952 through a wire and/or wireless communication network interface or adaptor 956. The adaptor 956 can facilitate wire and/or wireless communications to the LAN 952, which may also include a wireless access point disposed thereon for communicating with the wireless functionality of the adaptor 956. When used in a WAN networking environment, the computer 902 can include a modem 958, or is connected to a communications server on the WAN 954, or has other means for establishing communications over the WAN 954, such as by way of the Internet. 
The modem 958, which can be internal or external and a wire and/or wireless device, connects to the system bus 908 via the input device interface 942. In a networked environment, program modules depicted relative to the computer 902, or portions thereof, can be stored in the remote memory/storage device 950. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used. The computer 902 is operable to communicate with wire and wireless devices or entities using the IEEE 802 family of standards, such as wireless devices operatively disposed in wireless communication (e.g., IEEE 802.11 over-the-air modulation techniques). This includes at least Wi-Fi (or Wireless Fidelity), WiMax, and Bluetooth™ wireless technologies, among others. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, n, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wire networks (which use IEEE 802.3-related media and functions). FIG. 10 illustrates a block diagram of an exemplary communications architecture 1000 suitable for implementing various embodiments as previously described. The communications architecture 1000 includes various common communications elements, such as a transmitter, receiver, transceiver, radio, network interface, baseband processor, antenna, amplifiers, filters, power supplies, and so forth. The embodiments, however, are not limited to implementation by the communications architecture 1000. As shown in FIG. 10, the communications architecture 1000 comprises one or more clients 1002 and servers 1004. The clients 1002 may comprise messaging clients. The servers 1004 may comprise messaging servers. The clients 1002 and the servers 1004 are operatively connected to one or more respective client data stores 1008 and server data stores 1010 that can be employed to store information local to the respective clients 1002 and servers 1004, such as cookies and/or associated contextual information. The clients 1002 and the servers 1004 may communicate information between each other using a communications framework 1006. The communications framework 1006 may implement any well-known communications techniques and protocols. The communications framework 1006 may be implemented as a packet-switched network (e.g., public networks such as the Internet, private networks such as an enterprise intranet, and so forth), a circuit-switched network (e.g., the public switched telephone network), or a combination of a packet-switched network and a circuit-switched network (with suitable gateways and translators). The communications framework 1006 may implement various network interfaces arranged to accept, communicate, and connect to a communications network. A network interface may be regarded as a specialized form of an input output interface. Network interfaces may employ connection protocols including without limitation direct connect, Ethernet (e.g., thick, thin, twisted pair 10/100/1000 Base T, and the like), token ring, wireless network interfaces, cellular network interfaces, IEEE 802.11a-x network interfaces, IEEE 802.16 network interfaces, IEEE 802.20 network interfaces, and the like. 
Further, multiple network interfaces may be used to engage with various communications network types. For example, multiple network interfaces may be employed to allow for the communication over broadcast, multicast, and unicast networks. Should processing requirements dictate a greater amount of speed and capacity, distributed network controller architectures may similarly be employed to pool, load balance, and otherwise increase the communicative bandwidth required by clients 1002 and the servers 1004. A communications network may be any one or a combination of wired and/or wireless networks including without limitation a direct interconnection, a secured custom connection, a private network (e.g., an enterprise intranet), a public network (e.g., the Internet), a Personal Area Network (PAN), a Local Area Network (LAN), a Metropolitan Area Network (MAN), an Operating Missions as Nodes on the Internet (OMNI), a Wide Area Network (WAN), a wireless network, a cellular network, and other communications networks. FIG. 11 illustrates an embodiment of a device 1100 for use in a multicarrier OFDM system, such as the messaging synchronization system 100. Device 1100 may implement, for example, software components 1160 as described with reference to the messaging synchronization system 100 and/or a logic circuit 1135. The logic circuit 1135 may include physical circuits to perform operations described for the messaging synchronization system 100. As shown in FIG. 11, device 1100 may include a radio interface 1110, baseband circuitry 1120, and computing platform 1130, although embodiments are not limited to this configuration. The device 1100 may implement some or all of the structure and/or operations for the messaging synchronization system 100 and/or logic circuit 1135 in a single computing entity, such as entirely within a single device. Alternatively, the device 1100 may distribute portions of the structure and/or operations for the messaging synchronization system 100 and/or logic circuit 1135 across multiple computing entities using a distributed system architecture, such as a client-server architecture, a 3-tier architecture, an N-tier architecture, a tightly-coupled or clustered architecture, a peer-to-peer architecture, a master-slave architecture, a shared database architecture, and other types of distributed systems. The embodiments are not limited in this context. In one embodiment, radio interface 1110 may include a component or combination of components adapted for transmitting and/or receiving single carrier or multi-carrier modulated signals (e.g., including complementary code keying (CCK) and/or orthogonal frequency division multiplexing (OFDM) symbols), although the embodiments are not limited to any specific over-the-air interface or modulation scheme. Radio interface 1110 may include, for example, a receiver 1112, a transmitter 1116 and/or a frequency synthesizer 1114. Radio interface 1110 may include bias controls, a crystal oscillator and/or one or more antennas 1118. In another embodiment, radio interface 1110 may use external voltage-controlled oscillators (VCOs), surface acoustic wave filters, intermediate frequency (IF) filters and/or RF filters, as desired. Due to the variety of potential RF interface designs, an expansive description thereof is omitted. 
Baseband circuitry 1120 may communicate with radio interface 1110 to process receive and/or transmit signals and may include, for example, an analog-to-digital converter 1122 for down converting received signals and a digital-to-analog converter 1124 for up converting signals for transmission. Further, baseband circuitry 1120 may include a baseband or physical layer (PHY) processing circuit 1126 for PHY link layer processing of respective receive/transmit signals. Baseband circuitry 1120 may include, for example, a processing circuit 1128 for medium access control (MAC)/data link layer processing. Baseband circuitry 1120 may include a memory controller 1132 for communicating with processing circuit 1128 and/or a computing platform 1130, for example, via one or more interfaces 1134. In some embodiments, PHY processing circuit 1126 may include a frame construction and/or detection module, in combination with additional circuitry such as a buffer memory, to construct and/or deconstruct communication frames, such as radio frames. Alternatively or in addition, MAC processing circuit 1128 may share processing for certain of these functions or perform these processes independent of PHY processing circuit 1126. In some embodiments, MAC and PHY processing may be integrated into a single circuit. The computing platform 1130 may provide computing functionality for the device 1100. As shown, the computing platform 1130 may include a processing component 1140. In addition to, or alternatively to, the baseband circuitry 1120, the device 1100 may execute processing operations or logic for the messaging synchronization system 100 and logic circuit 1135 using the processing component 1140. The processing component 1140 (and/or PHY 1126 and/or MAC 1128) may comprise various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. The computing platform 1130 may further include other platform components 1150. 
Other platform components 1150 include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components (e.g., digital displays), power supplies, and so forth. Examples of memory units may include without limitation various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information. Device 1100 may be, for example, an ultra-mobile device, a mobile device, a fixed device, a machine-to-machine (M2M) device, a personal digital assistant (PDA), a mobile computing device, a smart phone, a telephone, a digital telephone, a cellular telephone, user equipment, an eBook reader, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a netbook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, consumer electronics, programmable consumer electronics, a game device, a television, a digital television, a set top box, a wireless access point, a base station, a node B, an evolved node B (eNB), a subscriber station, a mobile subscriber center, a radio network controller, a router, a hub, a gateway, a bridge, a switch, a machine, or a combination thereof. Accordingly, functions and/or specific configurations of device 1100 described herein may be included or omitted in various embodiments of device 1100, as suitably desired. In some embodiments, device 1100 may be configured to be compatible with protocols and frequencies associated with one or more of the 3GPP LTE Specifications and/or IEEE 802.16 Standards for WMANs, and/or other broadband wireless networks, cited herein, although the embodiments are not limited in this respect. Embodiments of device 1100 may be implemented using single input single output (SISO) architectures. However, certain implementations may include multiple antennas (e.g., antennas 1118) for transmission and/or reception using adaptive antenna techniques for beamforming or spatial division multiple access (SDMA) and/or using MIMO communication techniques. The components and features of device 1100 may be implemented using any combination of discrete circuitry, application specific integrated circuits (ASICs), logic gates and/or single chip architectures. 
Further, the features of device 1100 may be implemented using microcontrollers, programmable logic arrays and/or microprocessors, or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as “logic” or “circuit.” It should be appreciated that the exemplary device 1100 shown in the block diagram of FIG. 11 may represent one functionally descriptive example of many potential implementations. Accordingly, division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments. At least one computer-readable storage medium may comprise instructions that, when executed, cause a system to perform any of the computer-implemented methods described herein. Some embodiments may be described using the expression “one embodiment” or “an embodiment” along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Further, some embodiments may be described using the expressions “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. With general reference to notations and nomenclature used herein, the detailed descriptions herein may be presented in terms of program procedures executed on a computer or network of computers. These procedural descriptions and representations are used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. A procedure is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. These operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities. Further, the manipulations performed are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein which form part of one or more embodiments. Rather, the operations are machine operations. 
Useful machines for performing operations of various embodiments include general purpose digital computers or similar devices. Various embodiments also relate to apparatus or systems for performing these operations. This apparatus may be specially constructed for the required purpose or it may comprise a general purpose computer as selectively activated or reconfigured by a computer program stored in the computer. The procedures presented herein are not inherently related to a particular computer or other apparatus. Various general purpose machines may be used with programs written in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description given. It is emphasized that the Abstract of the Disclosure is provided to allow a reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein,” respectively. Moreover, the terms “first,” “second,” “third,” and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects. What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. 17226227 meta platforms, inc. USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 20th, 2022 02:25PM Apr 20th, 2022 02:25PM Facebook Technology Software & Computer Services
nasdaq:fb Facebook Apr 19th, 2022 12:00AM Apr 6th, 2020 12:00AM https://www.uspto.gov?id=US11310240-20220419 Generating and utilizing digital visual codes to grant privileges via a networking system One or more embodiments of the disclosure include systems and methods that generate and utilize digital visual codes. In particular, in one or more embodiments, the disclosed systems and methods generate digital visual codes comprising a plurality of digital visual code points arranged in concentric circles, a plurality of anchor points, and an orientation anchor surrounding a digital media item. In addition, the disclosed systems and methods embed information in the digital visual code points regarding an account of a first user of a networking system. In one or more embodiments, the disclosed systems and methods display the digital visual codes via a computing device of the first user, scan the digital visual codes via a second computing device, and provide privileges to the second computing device in relation to the account of the first user in the networking system based on the scanned digital visual code. 11310240 1. A computer-implemented method comprising: providing, for display via a first computing device of a first user of a social networking system, a digital visual code encoding an action identifier and a user identifier for the first user of the social networking system; receiving, from a second computing device associated with a second user, the user identifier and the action identifier, wherein the user identifier and the action identifier are obtained by the second computing device by scanning the digital visual code displayed on the first computing device; and based on receiving the user identifier and the action identifier from the second computing device: accessing credentials for the first user of the social networking system; and utilizing the credentials for the first user to perform an action corresponding to the action identifier. 2. The computer-implemented method of claim 1, further comprising: generating the digital visual code by embedding the action identifier and the user identifier into a digital array comprising a plurality of digital visual code points. 3. The computer-implemented method of claim 2, wherein embedding the action identifier and the user identifier comprises: generating one or more hashes based on the action identifier and the user identifier; transforming the one or more hashes to one or more binary codes comprising a plurality of bits; and embedding the action identifier and the user identifier into the digital array by affirmatively marking digital visual code points from the plurality of digital visual code points based on the plurality of bits relative to one or more anchor points in the digital array. 4. The computer-implemented method of claim 1, further comprising: detecting a digital visual code modification event comprising at least one of: passage of a threshold period of time or scanning of the digital visual code a threshold number of times; and in response to detecting the digital visual code modification event, generating a modified digital visual code such that the digital visual code is no longer operable to cause performance of the action. 5. The computer-implemented method of claim 1, wherein the digital visual code is a single-use digital visual code. 6. 
The computer-implemented method of claim 5, further comprising: receiving a request to perform the action from a third computing device associated with a third user, the request comprising the user identifier and the action identifier; and in response to determining that the digital visual code is a single-use digital visual code and in response to determining that the user identifier and the action identifier have already been received from the second computing device, rejecting the request to perform the action from the third computing device. 7. The computer-implemented method of claim 1, further comprising: receiving, from the second computing device associated with the second user, an additional user identifier of a third user of the social networking system, wherein the additional user identifier of the third user is obtained by the second computing device by scanning a third digital visual code displayed on a third computing device of the third user; and based on receiving the user identifier and the additional user identifier, performing the action with regard to the first user and the third user. 8. The computer-implemented method of claim 1, wherein the action comprises at least one of: making a payment, adding a contact to a list of contacts, initiating a communication thread, or sending a calendar event. 9. A system comprising: at least one processor; and at least one non-transitory computer readable storage medium storing instructions that, when executed by the at least one processor, cause the system to: provide, for display via a first computing device of a first user of a social networking system, a digital visual code encoding an action identifier and a user identifier for the first user of the social networking system; receive, from a second computing device associated with a second user, the user identifier and the action identifier, wherein the user identifier and the action identifier are obtained by the second computing device by scanning the digital visual code displayed on the first computing device; and based on receiving the user identifier and the action identifier from the second computing device: access credentials for the first user of the social networking system; and utilize the credentials for the first user to perform an action corresponding to the action identifier. 10. The system of claim 9, further comprising instructions that, when executed by the at least one processor, cause the system to: detect a digital visual code modification event comprising at least one of: passage of a threshold period of time or scanning of the digital visual code a threshold number of times; and in response to detecting the digital visual code modification event, generate a modified digital visual code such that the digital visual code is no longer operable to cause performance of the action. 11. The system of claim 9, wherein the digital visual code is a single-use digital visual code. 12. 
The system of claim 11, further comprising instructions that, when executed by the at least one processor, cause the system to: receive a request to perform the action from a third computing device associated with a third user, the request comprising the user identifier and the action identifier; and in response to determining that the digital visual code is a single-use digital visual code and in response to determining that the user identifier and the action identifier have already been received from the second computing device, reject the request to perform the action from the third computing device. 13. The system of claim 9, further comprising instructions that, when executed by the at least one processor, cause the system to: receive, from the second computing device associated with the second user, an additional user identifier of a third user of the social networking system, wherein the additional user identifier of the third user is obtained by the second computing device by scanning a third digital visual code displayed on a third computing device of the third user; and based on receiving the user identifier and the additional user identifier, perform the action with regard to the first user and the third user. 14. The system of claim 9, wherein the action comprises at least one of: making a payment, adding a contact to a list of contacts, initiating a communication thread, or sending a calendar event. 15. A non-transitory computer readable medium storing instructions thereon that, when executed by at least one processor, cause a computer system to: provide, for display via a first computing device of a first user of a social networking system, a digital visual code encoding an action identifier and a user identifier for the first user of the social networking system; receive, from a second computing device associated with a second user, the user identifier and the action identifier, wherein the user identifier and the action identifier are obtained by the second computing device by scanning the digital visual code displayed on the first computing device; and based on receiving the user identifier and the action identifier from the second computing device: access credentials for the first user of the social networking system; and utilize the credentials for the first user to perform an action corresponding to the action identifier. 16. The non-transitory computer readable medium of claim 15, further comprising instructions that, when executed by the at least one processor, cause the computer system to: detect a digital visual code modification event comprising at least one of: passage of a threshold period of time or scanning of the digital visual code a threshold number of times; and in response to detecting the digital visual code modification event, generate a modified digital visual code such that the digital visual code is no longer operable to cause performance of the action. 17. The non-transitory computer readable medium of claim 15, wherein the digital visual code is a single-use digital visual code. 18. 
The non-transitory computer readable medium of claim 17, further comprising instructions that, when executed by the at least one processor, cause the computer system to: receive a request to perform the action from a third computing device associated with a third user, the request comprising the user identifier and the action identifier; and in response to determining that the digital visual code is a single-use digital visual code and in response to determining that the user identifier and the action identifier have already been received from the second computing device, reject the request to perform the action from the third computing device. 19. The non-transitory computer readable medium of claim 15, further comprising instructions that, when executed by the at least one processor, cause the computer system to: receive, from the second computing device associated with the second user, an additional user identifier of a third user of the social networking system, wherein the additional user identifier of the third user is obtained by the second computing device by scanning a third digital visual code displayed on a third computing device of the third user; and based on receiving the user identifier and the additional user identifier, perform the action with regard to the first user and the third user. 20. The non-transitory computer readable medium of claim 15, wherein the action comprises at least one of: making a payment, adding a contact to a list of contacts, initiating a communication thread, or sending a calendar event. 20 CROSS-REFERENCE TO RELATED APPLICATIONS The present application is a continuation of U.S. application Ser. No. 16/264,800, filed on Feb. 1, 2019, which is a continuation of U.S. application Ser. No. 15/237,071, filed Aug. 15, 2016, issued as U.S. Pat. No. 10,237,277. The aforementioned applications are hereby incorporated by reference in their entirety. BACKGROUND In recent years, individuals and businesses have increasingly turned to mobile computing devices to interact with others. For example, individuals routinely utilize mobile computing devices to send and receive electronic communications, create and coordinate digital calendaring events, or facilitate payment transactions. Although computing devices and corresponding digital systems allow users to interact in a variety of ways, conventional digital systems still have a variety of problems. For example, in order to interact with others, many conventional digital systems require users to first identify and/or digitally connect with a third party. For instance, in order to send a digital message or payment to another individual, conventional digital systems require a user to somehow identify information corresponding to the other individual. This typically involves asking for, and manually entering, identifying information (e.g., a phone number, e-mail address, or bank account information) or searching through a list of users provided by the digital system. Users often express frustration with such conventional digital systems. Indeed, the process of searching for, or otherwise trying to obtain, an identifier corresponding to other users of a digital system often requires an extensive amount of time and leads to user frustration. For example, upon meeting a new person, users often express frustration with the process of exchanging and manually entering personal information into mobile devices. 
Similarly, although some conventional digital systems provide searchable user lists, identifying other parties utilizing such lists is often unreliable (e.g., users often incorrectly select a different user with a similar name) and inconvenient (e.g., users often take a significant amount of time to identify the correct user). One will appreciate that such problems are exacerbated when searching among millions or billions of users, many with the same or similar names or other identifying information. SUMMARY One or more embodiments described below provide benefits and/or solve one or more of the foregoing or other problems in the art with systems and methods for generating and utilizing digital visual codes to identify individuals or businesses within a networking system. In particular, in one or more embodiments, the disclosed systems and methods generate digital visual codes utilizing a digital array comprising a plurality of digital visual code points arranged in patterns. For instance, in one or more embodiments, the disclosed systems utilize a digital array to generate a digital visual code by embedding user identifiers in the digital visual code points of the digital array. Users can then utilize the digital visual code to identify users and gain privileges with a networking application. For instance, a user of a first device can display a digital visual code, a second device can scan the digital visual code, and the second device can use the scanned code to identify the first user. For example, in one or more embodiments, the disclosed systems and methods generate a digital visual code by embedding an identifier of an account of a first user with a networking system into a digital array comprising a plurality of digital visual code points and one or more anchor points. In particular, the disclosed systems and methods affirmatively mark digital visual code points from the plurality of digital visual code points in accordance with the identifier of the first user and connect adjacent affirmative digital visual code points in the digital visual code. In addition, the disclosed systems and methods provide the digital visual code to a first remote client device of the first user. Moreover, the disclosed systems and methods receive, from a second remote client device of a second user, the identifier of the first user obtained by scanning and decoding the digital visual code. Furthermore, in response to receiving the identifier from the second remote client device of the second user, the disclosed systems and methods identify the account of the first user with the networking system and grant one or more privileges to the second remote client device of the second user in relation to the account of the first user with the networking system. By utilizing digital visual codes, the disclosed systems and methods assist users in quickly, accurately, and securely identifying (and interacting with) other individuals, groups, and/or businesses via a networking system. For example, the disclosed systems and methods allow a first user to quickly (i.e., without manually exchanging identifying information or searching through user lists) identify a second user by scanning a digital visual code displayed on a computing device of the second user. Moreover, utilizing the digital visual code, the disclosed systems and methods allow a first user to easily interact with a second user. 
Indeed, in one or more embodiments the disclosed systems and methods operate in conjunction with a networking system (e.g., a digital communication system and/or digital social networking system) that allows users to utilize digital visual codes to quickly and efficiently interact via the networking system. For example, a first user can access information regarding an account of a second user (e.g., a user profile), add the second user as a contact (e.g., add as a “friend” in the digital social networking system), initiate payment transactions with the second user, invite the second user to an event, etc. Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of such exemplary embodiments. The features and advantages of such embodiments may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features will become more fully apparent from the following description and appended claims, or may be learned by the practice of such exemplary embodiments as set forth hereinafter. BRIEF DESCRIPTION OF THE DRAWINGS In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings. It should be noted that the figures are not drawn to scale, and that elements of similar structure or function are generally represented by like reference numerals for illustrative purposes throughout the figures. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings. FIGS. 1A-1E illustrate a representation of generating a digital visual code embedding an identifier of an account of a user in accordance with one or more embodiments. FIGS. 2A-2C illustrate sequence diagrams of a plurality of steps in methods of generating and utilizing a digital visual code in accordance with one or more embodiments. FIGS. 3A-3E illustrate a user interface of a computing device for displaying and scanning digital visual codes in accordance with one or more embodiments. FIG. 4 illustrates a schematic diagram of a digital identification system in accordance with one or more embodiments. FIG. 5 illustrates a schematic diagram of a network environment in which the methods and systems disclosed herein may be implemented in accordance with one or more embodiments. FIG. 6 illustrates a flow chart of a method of generating and utilizing a digital visual code in accordance with one or more embodiments. FIG. 7 illustrates another flow chart of a method of utilizing a digital visual code in accordance with one or more embodiments. FIG. 8 illustrates a block diagram of an exemplary computing device in accordance with one or more embodiments. FIG. 9 illustrates a network environment of a social networking system according to one or more embodiments. FIG. 10 illustrates an example social graph of a social networking system in accordance with one or more embodiments. 
DETAILED DESCRIPTION One or more embodiments of the present invention include a digital identification system that generates and utilizes digital visual codes. In particular, in one or more embodiments, the digital identification system utilizes personalized digital visual codes to quickly, efficiently, and securely identify a third party. More specifically, in one or more embodiments, the digital identification system generates a digital visual code that allows a first user to identify and interact with a second user. To illustrate, in one or more embodiments, the digital identification system generates a digital visual code comprising a plurality of digital visual code points arranged in a pattern that embeds an identifier of a first user. A first computing device of the first user can display the digital visual code, a second computing device of a second user can scan and decode the digital visual code, and the digital identification system can identify the first user based on the digital visual code. For example, in one or more embodiments, the digital identification system generates a digital visual code by embedding an identifier of an account of a first user with a networking system into a digital array comprising a plurality of digital visual code points and one or more anchor points. In particular, the digital identification system affirmatively marks digital visual code points from the plurality of digital visual code points in accordance with the identifier of the first user and connects adjacent affirmative digital visual code points. In addition, the digital identification system provides the digital visual code to a first remote client device of the first user. Moreover, the digital identification system receives, from a second remote client device of a second user, the identifier of the first user obtained by scanning and decoding the digital visual code. Furthermore, in response to receiving the identifier from the second remote client device of the second user, the digital identification system identifies the account of the first user with the networking system and grants one or more privileges to the second remote client device of the second user in relation to the account of the first user with the networking system. By utilizing digital visual codes, the digital identification system quickly, efficiently, and securely identifies users and provides them privileges so they can more easily interact utilizing computing devices. Indeed, rather than searching through lists of other users or manually exchanging information, the digital identification system utilizes digital visual codes to allow users to conveniently and reliably identify and interact with other users. As mentioned above, the digital identification system can utilize digital visual codes to provide one or more privileges. For example, in one or more embodiments, the digital identification system is implemented in conjunction with a digital communication application. Accordingly, upon receiving an identifier corresponding to a first user of a first computing device from a second computing device of a second user, the digital identification system can initiate an electronic communication thread between the first user and the second user. Similarly, the digital identification system can utilize the digital communication application to initiate a payment transaction between the first user and the second user. 
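To make the privilege-granting flow just described concrete, the following minimal Python sketch routes a scanned identifier and action to a networking-system operation. The account store, the action names, and the grant_privilege function are hypothetical illustrations of the embodiments described above, not an API disclosed in the patent.

ACCOUNTS = {"joe.smoe1466": {"name": "Joe Smoe"}}

def grant_privilege(identifier: str, action_id: str, requester: str) -> str:
    # Look up the account encoded in a scanned code, then perform the action.
    account = ACCOUNTS.get(identifier)
    if account is None:
        raise KeyError(f"no account for identifier {identifier!r}")
    if action_id == "message":
        return f"opened a communication thread between {requester} and {account['name']}"
    if action_id == "payment":
        return f"initiated a payment transaction between {requester} and {account['name']}"
    if action_id == "add_contact":
        return f"added {account['name']} to the contacts of {requester}"
    raise ValueError(f"unsupported action {action_id!r}")

print(grant_privilege("joe.smoe1466", "message", "jane.doe1001"))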
Moreover, in one or more embodiments, the digital identification system is implemented in conjunction with a social networking system (and corresponding social networking application). Accordingly, the digital identification system can identify the second user and establish a connection between the first user and the second user (e.g., add the first user as a “friend” of the second user). Similarly, the digital identification system can send an invitation from the first user to the second user corresponding to a digital event. As mentioned above, in one or more embodiments, the digital identification system generates digital visual codes. In particular, the digital identification system can embed an identifier of an account of a user with a networking system in a digital visual code by encoding the identifier in a plurality of digital visual code points in a digital array. For instance, the digital identification system can generate a digital array of digital code points arranged in a plurality of concentric circles. The digital identification system can convert a username, user ID, or hash to a binary code and affirmatively mark the digital visual code points in the plurality of concentric circles in accordance with the bits of the binary code (a minimal sketch of this marking scheme appears below). Furthermore, in one or more embodiments, the digital identification system also connects adjacent digital visual code points in the digital array. For instance, in one or more embodiments, the digital identification system connects adjacent digital visual code points to aid in more efficient capture of the digital visual code by a scanning device and to improve the appearance of the resulting digital visual code. Indeed, by connecting adjacent digital visual code points, users often find digital visual codes of the digital identification system more contiguous and visually appealing than those of other conventional systems. In addition, in one or more embodiments, the digital identification system generates a digital visual code with one or more anchor points and orientation anchors. In particular, in one or more embodiments, the digital identification system generates a digital visual code with anchor points that assist in identifying a relative location of digital visual code points in the digital visual code and make scanning and decoding digital visual codes more efficient. Similarly, the digital identification system can utilize orientation anchors to determine a proper orientation or alignment in decoding the digital visual code. For example, experimenters have determined that utilizing an orientation anchor as described herein can increase the speed of recognizing and decoding digital visual codes by a factor of four. Furthermore, in one or more embodiments, the digital identification system provides digital visual codes in conjunction with one or more digital media items. For example, the digital identification system can provide a digital visual code with digital visual code points in concentric circles surrounding a user profile picture or video. In this manner, users can more easily recognize the digital visual code as an object encoding an identifier. Moreover, utilizing a digital media item can further increase the security of digital visual codes by allowing users to visually confirm that a digital visual code corresponds to a desired account or action. In addition to an identifier of an account of a user, the digital identification system can also embed other information into a digital visual code. 
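As referenced above, the bit-marking and point-connecting steps can be sketched as follows in Python. The SHA-256 hash, the per-ring point counts, and the run-length arc representation are illustrative assumptions; the patent specifies neither a particular hash algorithm nor exact point counts.

import hashlib

RING_SIZES = [24, 30, 36, 42]  # assumed points per concentric circle

def identifier_to_bits(user_id: str, n_bits: int) -> list[int]:
    # Hash the identifier and take the first n_bits of the digest.
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bits = [(byte >> (7 - i)) & 1 for byte in digest for i in range(8)]
    return bits[:n_bits]

def mark_rings(user_id: str) -> list[list[bool]]:
    # Affirmatively mark code points ring by ring according to the bits.
    bits = iter(identifier_to_bits(user_id, sum(RING_SIZES)))
    return [[next(bits) == 1 for _ in range(size)] for size in RING_SIZES]

def connect_adjacent(ring: list[bool]) -> list[tuple[int, int]]:
    # Group adjacent marked points into (start, length) runs; each run would
    # render as one connected arc. Wrap-around at the ring seam is ignored
    # here for simplicity.
    runs, start = [], None
    for i, marked in enumerate(ring + [False]):  # sentinel closes a final run
        if marked and start is None:
            start = i
        elif not marked and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

rings = mark_rings("joe.smoe1466")
arcs = [connect_adjacent(ring) for ring in rings]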
For example, the digital identification system can embed an action (i.e., an action identifier) in a digital visual code. To illustrate, the digital identification system can generate a digital visual code that embeds an action identifier indicating that a first user wishes to make a payment to a second user. The second user can scan the digital visual code and automatically initiate a payment transaction between the first user and the second user. In addition to an action identifier corresponding to a payment transaction, the digital identification system can embed a variety of other action identifiers. For instance, the digital identification system can embed action identifiers corresponding to sending an event invitation, initiating a communication thread, or connecting via a social networking system. Moreover, the digital identification system can embed other information, such as a payment amount, a coupon or discount, an expiration time, or calendar event information into a digital visual code. The digital identification system can also modify digital visual codes. For example, after a certain amount of time (or a certain number of uses), the digital identification system can modify a digital visual code corresponding to a user and/or action. In this manner, the digital identification system can ensure that digital visual codes are not improperly utilized or otherwise compromised. The digital identification system can also modify digital visual codes to improve the efficiency of granting privileges. For example, in one or more embodiments, the digital identification system generates a single-use, verified digital visual code. In particular, the digital identification system can generate a single-use, verified digital visual code that, when scanned, causes the digital identification system to grant privileges to a second user without additional express verification from a first user. In this manner, the digital identification system can allow users to automatically connect, complete payment transactions, and/or send invitations to events without any additional user interaction, while also ensuring that verified digital visual codes are not improperly exploited (a sketch of this expiration and single-use logic appears following this passage). Turning now to FIGS. 1A-1E, additional detail will be provided regarding generating and utilizing digital visual codes in accordance with one or more embodiments of the digital identification system. In particular, FIG. 1A illustrates a digital array 100 comprising a plurality of digital visual code points 102a-102n surrounding a digital media item area 104. Furthermore, the digital array 100 includes four anchor points 106a-106d and an orientation anchor 108. As outlined below, the digital identification system can encode an identifier in the digital array 100 to create a digital visual code. As used herein, the term “digital visual code” refers to a machine-readable matrix. In particular, the term “digital visual code” includes a plurality of modules (e.g., dots, blocks, or other shapes) encoded in a digital array that can be read by an imaging (e.g., scanning) device. For example, in one or more embodiments, a digital visual code comprises a plurality of marked digital visual code points arranged in a digital array of concentric circles surrounding a digital media item. In addition to a plurality of concentric circles, digital visual codes can also include other designs or shapes (e.g., a rounded circle or other shape). 
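As referenced above, here is a rough Python sketch of the modification-event and single-use logic. The one-hour lifetime and the one-scan limit are assumed thresholds; the description speaks only of a threshold period of time and a threshold number of scans.

import time
import uuid

class CodeRegistry:
    # Illustrative tracker for digital visual code modification events.
    def __init__(self, ttl_seconds: float = 3600.0, max_scans: int = 1):
        self.ttl = ttl_seconds
        self.max_scans = max_scans
        self.codes: dict[str, dict] = {}

    def issue(self, user_id: str, action_id: str) -> str:
        # The token stands in for the payload encoded in the visual code.
        token = uuid.uuid4().hex
        self.codes[token] = {"user": user_id, "action": action_id,
                             "issued": time.time(), "scans": 0}
        return token

    def redeem(self, token: str) -> dict:
        record = self.codes.get(token)
        if record is None:
            raise KeyError("unknown or already modified code")
        expired = time.time() - record["issued"] > self.ttl
        if expired or record["scans"] >= self.max_scans:
            # Modification event: the old code is voided and must be regenerated.
            del self.codes[token]
            raise PermissionError("code expired or scan limit reached")
        record["scans"] += 1
        return {"user": record["user"], "action": record["action"]}

registry = CodeRegistry(max_scans=1)               # a single-use, verified code
token = registry.issue("joe.smoe1466", "payment")
print(registry.redeem(token))                      # first scan succeeds; a second would be rejected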
In addition to digital visual code points, as described in greater detail below, a digital visual code can also include one or more anchor points and/or orientation anchors. As used herein, the term “digital array” refers to a matrix of digital visual code points. In particular, the term “digital array” includes a plurality of digital visual code points arranged in a pattern or shape. For example, as illustrated in FIG. 1A, the digital array 100 comprises the plurality of digital visual code points 102a-102n arranged in four concentric circles around the digital media item area 104. As used herein, the term “digital visual code points” refers to entries or digital items in a digital array. In particular, the term “digital visual code points” refers to digital items in a digital array that can be toggled (e.g., on or off, marked or unmarked). Indeed, in relation to the embodiment of FIG. 1A, the digital visual code points 102a-102n can be marked to encode one or more identifiers. As shown in FIG. 1A, a digital array can also include one or more anchor points and/or one or more orientation anchors. As used herein, the term “anchor points” refers to a digital item embedded in a digital visual code that provides a visual reference for the location of digital visual code points. In particular, the term “anchor points” includes a circle, square, dot, or other shape in a digital visual code that provides a visual reference to assist an imaging device to decode a digital visual code and/or digital visual code points. Specifically, in relation to FIG. 1A, the digital array 100 comprises the four anchor points 106a-106d, which are represented as a dot encompassed by a larger circle. As described in greater detail below, the digital identification system utilizes the anchor points 106a-106d to accurately identify the location of digital visual code points in decoding a digital visual code. Similarly, as used herein, the term “orientation anchor” refers to a digital item embedded in a digital visual code that provides a visual reference for orientation of a digital visual code and/or digital visual code points. In particular, the term “orientation anchor” refers to a standard image portraying a shape, symbol, mark, or brand with a particular orientation (e.g., rotation in relation to the remainder of the digital visual code). To illustrate, with regard to the embodiment of FIG. 1A, the digital array 100 comprises the orientation anchor 108, which is represented as a circle with a brand symbol (i.e., a trademark for the FACEBOOK® MESSENGER® software program). As described in greater detail below, the digital identification system can utilize the orientation anchor 108 to orient a digital visual code. In one or more embodiments, the digital identification system generates digital visual code points, anchor points, orientation anchors, and/or the digital media item utilizing particular dimensions. For example, in one or more embodiments, the digital identification system increases the speed and efficiency of scanning and decoding a digital visual code by utilizing defined dimensions amongst the digital visual code points, the anchor points, orientation anchors, and/or the digital media item area. To illustrate, with regard to FIG. 1A, the digital visual code points 102a-102n each have a diameter of one unit. The anchor points 106a-106d and the orientation anchor 108 each have a diameter of five units (relative to the one unit of the digital visual code points 102a-102n). 
In addition, although FIG. 1A illustrates a particular shape of a digital array with a particular arrangement of digital visual code points (i.e., digital visual code points in four concentric circles), anchor points, and an orientation anchor, it will be appreciated that the digital identification system can utilize a digital array with a different shape, with a different arrangement of digital visual code points, a different number and arrangement of anchor points, and a different number and arrangement of orientation anchors. For example, rather than utilizing four anchor points aligned with horizontal and vertical axes of the digital array, the digital identification system can utilize a different number (e.g., two, five, at least three, or some other number) of anchor points arranged at some other location in relation to the digital array (e.g., above, below, outside, or some other location). Similarly, rather than arranging digital visual code points in concentric circles, the digital identification system can generate a digital array with digital visual code points in a variety of other shapes and arrangements (e.g., rows and columns in a rectangle, triangle, rounded square, or squircle).

Moreover, although FIG. 1A illustrates an orientation anchor with a particular brand symbol, it will be appreciated that the digital identification system can generate an orientation anchor utilizing a different symbol. For example, in one or more embodiments, the digital identification system allows a user to generate an orientation anchor with a particular symbol or mark. To illustrate, the digital identification system can provide a user interface that permits a business to select an orientation anchor that incorporates a trademark or brand specific to the business. Similarly, the digital identification system can provide a user interface that permits an individual to select an orientation anchor with a design, symbol, or character of their choosing.

As mentioned previously, in one or more embodiments, the digital identification system embeds one or more identifiers into a digital array to generate a digital visual code. As used herein, the term “identifier” refers to a sequence of characters used to identify a person or thing. In particular, the term “identifier” includes a username, user ID, hash, or binary code indicating one or more users (and/or one or more accounts of one or more users). Depending on the embodiment, the digital identification system can utilize a username, user ID, hash, or binary code corresponding to a user account as an identifier and encode the identifier into a digital visual code.

As used herein, the term “account” refers to a collection of data associated with one or more users.
In particular, the term “account” includes protected information regarding a user of a networking system that can be accessed by users and/or computing devices having one or more privileges. For example, the term “account” can include user contact information, user history, user communications, user posts, user comments, digital media items, purchases, interactions, contacts (e.g., friends), digital events corresponding to digital calendars, demographic information, payment transactions, and/or payment information.

As used herein, the term “networking system” refers to a plurality of computing devices connected via a network to transfer information between the computing devices. For example, a networking system includes a social networking system (as described in greater detail below). Similarly, a networking system includes a digital communication system (i.e., a plurality of computing devices for sending electronic communications across a network between users).

For example, FIG. 1B illustrates a representation of generating an identifier corresponding to an account of a user of a networking system in accordance with one or more embodiments. In particular, FIG. 1B illustrates the digital identification system converting a username 110 corresponding to a user account into a binary code 112. As an example specific to an embodiment in which the digital identification system is integrated with a social networking system, a user may have a username, a user ID, and a hash associated with their profile/account with the social networking system. The username can comprise a string of characters by which the user chooses to identify themselves on the social networking system. The username may not be unique to the user. For example, the user may select Joe Smoe as the username. There may be tens, hundreds, or even thousands of other users that selected Joe Smoe as a username. The social networking system can associate one or more of a user ID or a unique identifier with the user to uniquely identify the user. For example, a user ID for the user may comprise a unique string of characters based on the username. In particular, in one or more embodiments, the user ID can comprise the username with a random string, such as joe.smoe1466, where 1466 is a randomly generated string. The user ID can allow the networking system to identify and link to an account/data of the user. In still further embodiments, the networking system can utilize a hash. For example, the hash for the user can be a string of numbers or alphanumeric characters (such as 606664070) obtained by performing a hashing algorithm on the user ID.

In one or more embodiments, the digital identification system transforms the username, user ID, and/or hash into a binary code. For example, the digital identification system can transform alphanumeric characters into binary to generate a binary code corresponding to the user. Thus, as illustrated in relation to FIG. 1B, the digital identification system transforms the username 110 into the binary code 112. More specifically, the digital identification system can identify a user ID corresponding to the username, transform the user ID into a hash using a hashing algorithm, and transform the hash into the binary code 112.
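The username-to-binary pipeline of FIG. 1B can be sketched as follows. The patent does not name a hashing algorithm, so SHA-256 truncated to 64 bits stands in for it here, and every function name is hypothetical.

import hashlib

def to_user_id(username: str, suffix: str) -> str:
    """Append a randomly assigned suffix so duplicate usernames stay unique."""
    return username.lower().replace(" ", ".") + suffix

def to_hash(user_id: str, bits: int = 64) -> int:
    """Reduce the user ID to a fixed-width integer via a hashing algorithm."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def to_binary_code(user_id: str, bits: int = 64) -> str:
    """The binary code that will be embedded in the digital array."""
    return format(to_hash(user_id, bits), "0{}b".format(bits))

user_id = to_user_id("Joe Smoe", "1466")  # e.g., "joe.smoe1466"
print(to_binary_code(user_id))            # a 64-character string of 1s and 0s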
It will be appreciated that although FIG. 1B illustrates generating a binary code from a username of a single user, the digital identification system can generate a binary code that reflects a group, a plurality of users, or a business. For example, in one or more embodiments, the digital identification system operates in conjunction with a social networking system where users can create and join groups (e.g., a collection of users within the social networking system that share a common interest, such as users that like running or users that are fans of a particular sport). The digital identification system can generate a binary code corresponding to the group. Similarly, the digital identification system can generate a binary code corresponding to a plurality of users. For example, the digital identification system can generate a binary code that reflects a username, user ID, or hash of a plurality of users of a social networking system. As discussed in greater detail below, in this manner, the digital identification system can embed information regarding a plurality of users in a digital visual code.

Furthermore, in addition to identifying users, the digital identification system can also generate a binary code (or other identifier) that reflects other information. For example, the digital identification system can generate an action identifier corresponding to a particular action. As used herein, the term “action identifier” refers to digital information indicating an action. For example, an action identifier includes a code that indicates an action for a computing device to perform. An action identifier can take a variety of forms. In one or more embodiments, the digital identification system can generate a binary code that reflects a particular action. For example, the digital identification system can generate a binary code that indicates initiating a payment transaction, initiating a messaging thread, adding a user to a digital event, connecting to an individual (e.g., adding an individual as a “friend”), or some other action. An action identifier can comprise a binary code separate from (e.g., in addition to) a user identifier, or the action identifier can be part of the same binary code as a user identifier (e.g., the digital identification system can generate a binary code that identifies both a user and an action). In addition, the digital identification system can generate a binary code (or other identifier) that reflects information regarding a first user, a transaction, an event, or other information. For example, the digital identification system can generate a binary code that reflects a price, a product, a coupon, a time, a place, or other information.

As mentioned above, in one or more embodiments, the digital identification system embeds one or more identifiers into a digital array to generate a digital visual code. In particular, FIG. 1C illustrates embedding an identifier into the digital array 100. Specifically, FIG. 1C illustrates embedding the binary code 112 into the digital array 100. As shown, the digital identification system embeds the bits of the binary code into corresponding digital visual code points 102a-102n of the digital array 100. In particular, the digital identification system affirmatively marks digital visual code points based on the binary code 112 to embed the binary code 112 into the digital array 100. Thus, in one or more embodiments, bits marked as “1” in the binary code 112 are encoded in the digital array 100 by affirmatively marking a corresponding digital visual code point in the digital array 100. Specifically, in relation to FIG. 1C, the affirmatively marked digital visual code points 120a-120n correspond to “1” values in the binary code 112.
The unmarked digital visual code points 122a-122n correspond to “0” values in the binary code 112.

In addition to encoding an identifier corresponding to an account of a user, the digital identification system can also embed other information in a digital visual code. For example, as mentioned previously, in one or more embodiments, the digital identification system embeds an action identifier in a digital visual code. For example, the digital identification system can generate the binary code 112 such that the binary code 112 includes an indication to begin a message thread, connect a contact (e.g., add a “friend”), initiate a payment transaction, apply a discount to a purchase (e.g., a coupon), initiate a telephone call, share contact information, or send an invitation to an event. Moreover, the digital identification system can then encode the binary code 112 that reflects the action identifier into the digital array 100.

As discussed previously, the digital identification system can also connect affirmatively marked digital visual code points to generate a digital visual code. In particular, in one or more embodiments, the digital identification system connects adjacent digital visual code points with digital curves and removes unmarked digital visual code points. For example, FIG. 1D illustrates a digital visual code 130 based on the affirmatively marked digital visual code points 120a-120n. As shown, the digital visual code 130 includes the digital curve 132. The digital identification system generates the digital curve 132 based on the adjacent, affirmatively marked digital visual code points 120a-120e. In particular, the digital identification system determines that the digital visual code points 120a-120e are affirmatively marked and adjacent digital visual code points within a concentric circle of the digital array 100. Accordingly, the digital identification system connects the digital visual code points 120a-120e with the digital curve 132 in generating the digital visual code 130. In one or more embodiments, the digital identification system also removes digital visual code points that are not affirmatively marked. For example, the digital visual code points 122a, 122b are not affirmatively marked. Accordingly, the digital visual code 130 leaves open space corresponding to the location of the digital visual code points 122a, 122b.

Aside from toggling digital visual code points, the digital identification system can also embed information in the digital visual code in other ways. For example, in one or more embodiments, the digital identification system can vary the color or thickness of digital curves (and/or digital visual code points) to further embed additional information. For example, the digital identification system can utilize a thick digital curve to encode additional bits from a binary code or to indicate a verified digital visual code. Furthermore, in addition to connecting adjacent points within concentric circles, in one or more embodiments, the digital identification system can also connect adjacent points across concentric circles. For example, the digital identification system can connect two digital visual code points in two different concentric circles by a line or digital curve.
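A minimal sketch of the marking and curve-connection logic described above: each “1” bit affirmatively marks a code point, consecutive marked points within a ring merge into one digital curve, and unmarked points are dropped as open space. The run-merging (and the simplification that runs do not wrap around the ring) is an assumption consistent with FIG. 1D, not the patent's literal algorithm.

def mark_points(binary_code: str) -> list:
    """A '1' bit affirmatively marks the corresponding code point."""
    return [bit == "1" for bit in binary_code]

def arcs_for_ring(marked: list) -> list:
    """Merge runs of adjacent marked points in one ring into (start, end)
    arcs; unmarked points are simply removed, leaving open space."""
    arcs, start = [], None
    for i, is_marked in enumerate(marked):
        if is_marked and start is None:
            start = i
        elif not is_marked and start is not None:
            arcs.append((start, i - 1))
            start = None
    if start is not None:  # a run reaching the end of the ring
        arcs.append((start, len(marked) - 1))
    return arcs

print(arcs_for_ring(mark_points("1110010110")))  # [(0, 2), (5, 5), (7, 8)]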
As mentioned above, a digital visual code can also include a digital media item. For example, FIG. 1D illustrates the digital identification system embedding a digital media item 134 in the digital media item area 104. As used herein, the term “digital media item” refers to any digital item capable of producing a visual representation. For instance, the term “digital media item” includes digital images, digital video, digital animations, digital illustrations, etc. As used herein, the term “digital image” refers to any digital symbol, picture, icon, or illustration. For example, the term “digital image” includes digital files with the following, or other, file extensions: JPG, TIFF, BMP, PNG, RAW, or PDF. Similarly, as used herein, the term “digital video” refers to a digital sequence of images. For example, the term “digital video” includes digital files with the following, or other, file extensions: FLV, GIF, MOV, QT, AVI, WMV, MP4, MPG, MPEG, or M4V.

With regard to the embodiment of FIG. 1D, the digital media item 134 comprises a profile picture (or video) corresponding to an account of a user. Specifically, a user selects a digital image (or video) and associates the digital image with a user account (e.g., uploads the digital image to a remote server storing account information). The digital identification system can access the digital image (or video) and display the digital image (or video) as part of the digital visual code 130.

In one or more embodiments, the digital identification system selects a digital media item that relates to a particular action (e.g., an action identifier). For example, if the digital identification system embeds an action identifier corresponding to sending a payment, the digital identification system can select a digital media item based on the embedded action identifier that relates to a payment transaction. For example, the digital identification system can overlay an image of a dollar bill (or a dollar sign with a payment amount) onto a profile picture. Similarly, the digital identification system can overlay a video of money falling (or some other video related to a payment transaction). The digital identification system can select and embed similar digital media items in relation to other actions (i.e., action identifiers) embedded in a digital visual code. For instance, a digital visual code embedding an action identifier for sending an invitation to an event can include a digital media item (e.g., picture or video) related to the event (e.g., an image of a calendar or an image of the location of the event).
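Selecting a digital media item that relates to an embedded action can be modeled as a simple lookup from action identifier to overlay asset, as in this sketch; the asset file names and the mapping itself are invented for illustration.

# Hypothetical mapping from embedded action identifiers to overlay media.
ACTION_OVERLAYS = {
    "payment": "dollar_sign_overlay.png",       # or a falling-money video
    "event_invite": "calendar_overlay.png",
    "friend_request": "handshake_overlay.png",
}

def media_item_for(action: str, profile_picture: str) -> tuple:
    """Return the base profile picture plus any action-specific overlay."""
    return (profile_picture, ACTION_OVERLAYS.get(action))

print(media_item_for("payment", "joe_smoe_profile.jpg"))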
Although the embodiment of FIG. 1D illustrates the digital visual code 130 with a particular arrangement of curves, digital visual code points, anchor points, orientation anchor(s), and digital media item(s), it will be appreciated that different embodiments can utilize different arrangements. For example, FIG. 1E illustrates three digital visual codes 140, 150, and 160 with alternative arrangements of various components. For instance, the digital visual code 140 utilizes digital visual code points in concentric circles, similar to the digital visual code 130 of FIG. 1D. However, the digital visual code 140 utilizes three concentric circles of digital visual code points (and connected curves) rather than four. Moreover, the digital visual code 140 includes anchor points 142a-142d outside of the concentric circles of digital visual code points. Furthermore, unlike the digital visual code 130, the digital visual code 150 utilizes digital visual code points within a rounded square. In addition, rather than utilizing anchor points that comprise dots within a circle, the digital visual code 150 utilizes anchor points 152a-152d that comprise dots within a rounded square. Moreover, like the digital visual code 150, the digital visual code 160 comprises a plurality of digital visual code points within a rounded square. However, the digital visual code 160 connects adjacent digital visual code points within a digital array in two different dimensions (i.e., vertically and horizontally). Moreover, rather than utilizing four anchor points, the digital visual code 160 utilizes three anchor points 162a-162c.

As discussed above, the digital identification system can generate and utilize digital visual codes to provide one or more privileges to a client device. For example, FIGS. 2A-2C illustrate sequence diagrams comprising steps that a digital identification system 200 performs in generating and utilizing digital visual codes. In particular, FIG. 2A illustrates steps that the digital identification system 200 performs in generating a digital visual code and providing privileges. FIGS. 2B-2C illustrate steps in various embodiments that provide particular privileges based on the digital visual code of FIG. 2A.

As used herein, the term “privilege” refers to a right to access digital information and/or perform an action via a computing device. In particular, the term “privilege” includes a right to access information in relation to an account of a user of a networking system and/or perform an action via a computing device in relation to the user. The term “privilege” can include a digital item, such as an access token, provided to a computing device for utilization in accessing information. The term “privilege” can also include unlocking features or capabilities of a software application. To illustrate, the digital identification system can grant a user a privilege to invite another user to access information regarding a user (e.g., view a user's profile information), connect via a networking system (e.g., add another user as a “friend”), and/or obtain an event invitation. Additional detail regarding privileges available via the digital identification system is described below.

As shown in FIG. 2A, the digital identification system 200 can be implemented by a first client device 202, a second client device 204, and server device(s) 206. The digital identification system 200 can cause each of the first client device 202, the second client device 204, and the server device(s) 206 to perform a variety of steps (e.g., steps 208-258) as described below. In one or more embodiments, the first client device 202 and/or the second client device 204 comprise computing devices operably connected to an imaging (e.g., scanning) device. For example, in one or more embodiments, the first client device 202 and the second client device 204 are operably connected to one or more scanning devices capable of scanning digital visual codes. More particularly, in one or more embodiments, the first client device 202 and the second client device 204 comprise mobile devices, such as smartphones or tablets, which include scanning devices capable of scanning digital visual codes. In addition to mobile devices, the first client device 202 and the second client device 204 can comprise a variety of other types of computing devices. Additional detail regarding such computing devices is provided below (e.g., in relation to FIG. 8). As illustrated in FIG. 2A, in addition to the first client device 202 and the second client device 204, the digital identification system 200 can also be implemented in part by the server device(s) 206.
The server device(s) 206 may generate, store, receive, and/or transmit data. The server device(s) 206 can comprise a data server, a communication server, and/or a web-hosting server. Moreover, in relation to FIGS. 2A-2C, the server device(s) 206 host a networking system utilized by a first user of the first client device 202 and a second user of the second client device 204.

As shown in FIG. 2A, the digital identification system 200 can utilize the server device(s) 206 to perform the step 208 of generating a digital visual code and embedding an identifier. Indeed, as described above, the server device(s) 206 can generate a digital visual code that includes an identifier of an account associated with a user. As described above, the server device(s) 206 can also embed other information in the digital visual code (e.g., an action identifier, additional information regarding a user, an event, and/or a payment transaction).

As shown in FIG. 2A, the server device(s) 206 can also perform the step 210 of providing the digital visual code to the first client device 202. Moreover, upon receiving (and/or storing) the digital visual code, the first client device 202 can perform the step 212 of displaying the digital visual code. For instance, the first client device 202 can display the digital visual code via a display device operatively coupled to the first client device 202 (e.g., a touchscreen of a mobile phone). The first client device 202 can also perform the step 214 of providing the digital visual code to the second client device 204. For example, a user can show the digital visual code displayed on the first client device 202 to the second client device 204. To illustrate, the step 214 can comprise a user of the first client device 202 moving a display screen of the first client device 202 into view of a scanning device of the second client device 204.

As illustrated in FIG. 2A, the second client device 204 can also perform the step 216 of scanning the digital visual code. As mentioned previously, the second client device 204 can comprise a smartphone with an imaging (e.g., scanning) device. Thus, the step 216 can comprise utilizing a smartphone to scan the digital visual code. In one or more embodiments, the client devices 202, 204 include an installed software application capable of analyzing, processing, decoding, and/or interpreting digital visual codes. For example, in one or more embodiments, the server device(s) 206 provide a software application (e.g., a digital communication application or social networking application) that includes data for interpreting and/or decoding a particular digital visual code format (e.g., the same digital visual code format utilized by the server device(s) 206 in relation to step 208 to generate the digital visual code).

As shown in FIG. 2A, the second client device 204 can also perform the step 218 of determining an identifier. In particular, the second client device 204 can determine the identifier from the digital visual code. For instance, the step 218 can comprise decoding the digital visual code (e.g., utilizing an installed software application capable of processing the digital visual code). For example, upon scanning the digital visual code, the second client device 204 can decode and identify information embedded in the digital visual code. In particular, the second client device 204 can determine an identifier of an account associated with the first user of the first client device 202 embedded in the digital visual code.
Similarly, the second client device 204 can identify action identifiers or other information embedded in the digital visual code. More specifically, in one or more embodiments, determining the identifier comprises decoding a binary code from the digital visual code. Moreover, determining the identifier can also comprise identifying a hash from the binary code.

As shown in FIG. 2A, the second client device 204 can also perform the step 220 of sending the identifier to the server device(s) 206. For example, the second client device 204 can send a binary code and/or hash to the server device(s) 206. Moreover, upon receiving the identifier, the server device(s) 206 can perform the step 222 of providing privileges. For instance, in one or more embodiments, the server device(s) 206 receive the identifier and identify the first user of the first client device 202 based on the identifier. To illustrate, the server device(s) 206 can receive a binary code or hash and identify an account corresponding to the first user based on the binary code or hash. The server device(s) 206 can authenticate the binary code or hash, and then provide privileges to the second client device 204.

As discussed above, the server device(s) 206 can provide a variety of privileges. For example, in one or more embodiments, the server device(s) 206 can provide access to information in an account associated with the first user of the first client device 202. For instance, the server device(s) 206 can provide information regarding common connections (e.g., shared “friends”) between the first user of the first client device 202 and the second user of the second client device 204. Similarly, the server device(s) 206 can provide a username, user ID, contact information, location information, employment information, or other information regarding the first user stored in the first user's account.
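Steps 220 and 222 reduce to a lookup-and-grant exchange: the scanning device submits the decoded hash, and the server resolves it to an account and returns a privilege, modeled below as an access token plus shareable profile fields. The in-memory stores and field names are stand-ins assumed for the sketch, not the networking system's actual API.

import secrets

# Assumed stand-ins for the server's account and token stores.
ACCOUNTS_BY_HASH = {
    "606664070": {"user_id": "joe.smoe1466", "occupation": "Engineer",
                  "location": "Menlo Park, CA"},
}
ISSUED_TOKENS = {}

def grant_privileges(identifier_hash: str) -> dict:
    """Step 222: authenticate the received identifier, then return
    shareable account information together with an access token."""
    account = ACCOUNTS_BY_HASH.get(identifier_hash)
    if account is None:
        raise ValueError("unknown or revoked identifier")
    token = secrets.token_hex(16)
    ISSUED_TOKENS[token] = account["user_id"]
    return {"access_token": token, "profile": account}

print(grant_privileges("606664070")["profile"]["location"])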
The step 222 of providing privileges can also involve a variety of additional steps unique to particular privileges. For instance, FIGS. 2B-2C illustrate additional steps in relation to different privileges provided by the digital identification system 200. For example, FIG. 2B illustrates the digital identification system 200 performing (via the first client device 202, the second client device 204, and the server device(s) 206) steps 230-242 in completing a payment transaction between a first user of the first client device 202 and a second user of the second client device 204.

In particular, FIG. 2B shows that upon receiving an identifier from the second client device 204, the server device(s) 206 can receive a payment request. In particular, as shown, the second client device 204 can perform the step 230 of sending a payment request to the server device(s) 206. For example, in one or more embodiments, upon sending the identifier to the server device(s) 206, the server device(s) 206 can provide a privilege to the second client device 204, allowing the second client device 204 to initiate a payment transaction with the first user of the first client device 202. Specifically, the server device(s) 206 can enable functionality within a software application running on the second client device 204. For instance, the server device(s) 206 can enable one or more selectable elements via a user interface of the second client device 204 that allow the second user of the second client device 204 to request a payment transaction from the first user. Upon user interaction with the one or more selectable elements, the second client device 204 sends the payment request to the server device(s) 206 (e.g., identifying the first user, an amount, and/or a product).

As shown in FIG. 2B, upon receiving the payment request, the server device(s) 206 can perform the step 232 of sending a payment confirmation request to the first client device 202. In addition, the first client device 202 can perform the step 234 of approving the payment. For example, in one or more embodiments, the first client device 202 provides a user interface to the first user of the first client device 202 requesting approval of the payment (e.g., displaying a payment amount, a payor, a payee, a purchase item, or other information regarding the payment transaction for approval). Moreover, in one or more embodiments, the step 234 comprises obtaining and authenticating verification credentials (e.g., password, passcode, log in information, or other credentials). As illustrated in FIG. 2B, the first client device 202 also performs the step 236 of sending a payment confirmation to the server device(s) 206.

Moreover, upon receiving the payment confirmation, the server device(s) 206 can perform the step 238 of transferring payment. For example, in a payment transaction where the first user of the first client device 202 is purchasing a product from the second user of the second client device 204, the server device(s) 206 can transfer funds from a payment account of the first user to a payment account of the second user. In one or more embodiments, the server device(s) 206 operate in conjunction with a payment network to perform the step 238. For example, the server device(s) 206 can operate in conjunction with a payment network that comprises one or more systems operable to transfer funds between two or more financial accounts. For example, the payment network can comprise a payment processing system, a card network system, a sending banking system (associated with a payor financial account), and a receiving banking system (associated with a payee financial account). In addition, as shown in FIG. 2B, upon transferring payment, the server device(s) 206 can also perform the steps 240, 242 of sending payment completion messages to the second client device 204 and the first client device 202. For example, the server device(s) 206 can send payment completion messages that indicate the payment transaction is complete together with details regarding the payment transaction (e.g., payment time, payment transaction ID, payment amount, products purchased, payor and/or payee).

Although FIGS. 2A-2B illustrate particular steps in a method of completing a payment transaction utilizing the digital identification system 200, it will be appreciated that the digital identification system 200 can perform additional, fewer, or alternative steps. For example, as mentioned previously, in one or more embodiments, the digital identification system 200 embeds an action identifier or other information in a digital visual code to facilitate interaction between users. For instance, in relation to a payment transaction, the digital identification system 200 can embed information regarding the payment transaction in the digital visual code. To illustrate, in one or more embodiments, the step 208 comprises embedding an action identifier corresponding to the payment transaction in the digital visual code. Specifically, in relation to the step 208, the server device(s) 206 can receive a request from the first client device 202 for a digital visual code for initiating a payment transaction. The server device(s) 206 can generate a binary code corresponding to an identifier of an account associated with the first user and also corresponding to an action identifier for initiating a payment transaction from the first user of the first client device 202. The binary code can also include additional information regarding the payment transaction, such as a payment amount, a product, and/or a coupon.
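Embedding the action identifier and transaction details alongside the user identifier can be sketched as one fixed-width payload serialized to a single binary code, so the scanning device recovers user, action, and amount in one pass. The field widths, field order, and action codes are assumptions for illustration only.

# Hypothetical fixed-width payload: 64-bit user hash, 8-bit action code,
# and a 32-bit payment amount in cents.
ACTIONS = {"none": 0, "payment": 1, "friend_request": 2, "event_invite": 3}
ACTION_NAMES = {code: name for name, code in ACTIONS.items()}

def encode_payload(user_hash: int, action: str, amount_cents: int = 0) -> str:
    """Serialize user, action, and amount into one binary code."""
    return (format(user_hash, "064b") + format(ACTIONS[action], "08b")
            + format(amount_cents, "032b"))

def decode_payload(bits: str) -> tuple:
    """Recover (user_hash, action, amount_cents) from the binary code."""
    return (int(bits[:64], 2), ACTION_NAMES[int(bits[64:72], 2)],
            int(bits[72:104], 2))

bits = encode_payload(606664070, "payment", amount_cents=1250)  # $12.50
print(decode_payload(bits))  # (606664070, 'payment', 1250)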
By embedding a payment transaction action identifier in a digital visual code, the digital identification system 200 can further streamline the process of completing a payment transaction. For example, at the steps 216, 218, the second client device 204 can scan the digital visual code and determine the payment transaction identifier (and/or other information, such as a coupon) embedded in the digital visual code. The second client device 204 can perform the step 220 and send the payment transaction identifier to the server device(s) 206. The server device(s) 206 can then omit the step 230 of receiving a separate payment request (and the second client device 204 need not provide payment transaction information). Rather, upon receiving the payment transaction identifier, the server device(s) 206 can automatically request payment confirmation (e.g., verification credentials) from the first client device 202.

In one or more embodiments, the digital identification system 200 further streamlines the method of completing a payment transaction by utilizing a verified digital visual code. A verified digital visual code is a digital visual code that provides privileges without additional confirmation or verification. In particular, a verified digital visual code includes a digital visual code that automatically provides privileges to the second client device 204 upon scanning the digital visual code from the first client device 202. For example, in relation to FIGS. 2A-2B, the digital identification system 200 can generate a verified digital visual code at the step 208. In particular, the digital identification system 200 can embed a unique indicator in digital visual code points to indicate a verified digital visual code. The second client device 204 can identify a verified digital visual code at the step 218 and send an indication of the verified digital visual code to the server device(s) 206. Upon receiving an indication of the verified digital visual code, the server device(s) 206 can automatically perform the step 238 of transferring payment (i.e., without sending a payment confirmation request at step 232, or receiving a payment confirmation from the first client device 202 at steps 234, 236). In this manner, verified digital visual codes can further streamline completing payment transactions between users. Indeed, from the perspective of the first user of the first client device 202 and the second user of the second client device 204, upon scanning the verified digital visual code via the second client device 204, the payment transaction is automatically completed.

In one or more embodiments, the digital identification system 200 includes additional security measures in relation to verified digital visual codes to ensure that they are not abused (e.g., scanned and improperly utilized to transfer payment without the first user's authorization).
For example, in one or more embodiments, the digital identification system 200 generates verified digital visual codes that are only authorized for a single use. In other words, the digital identification system 200 via the server device(s) 206 will only provide privileges a single time upon receiving the verified digital visual code. For instance, in one or more embodiments, after receiving a verified digital visual code, the digital identification system 200 de-activates the verified digital visual code so that it will no longer provide privileges. More specifically, in one or more embodiments, the server device(s) 206 maintain a database of digital visual codes and corresponding user accounts, activities, or other information. Upon receiving a verified digital visual code, the digital identification system 200 can modify the database of digital visual codes such that the verified digital visual code is no longer associated with a user account, activities, or other information. Thus, if a third client device obtains an image of the verified digital visual code (after the second client device utilized the verified digital visual code), the third client device could not obtain any privileges from the verified digital visual code.

In addition to single-use modification of a digital visual code database, in one or more embodiments, the digital identification system 200 utilizes additional security measures in relation to verified digital visual codes. For example, prior to generating (or displaying) a digital visual code, in one or more embodiments, the digital identification system 200 requires the first client device 202 to provide verification credentials (e.g., a password, passcode, or fingerprint). In this manner, the digital identification system 200 can ensure that an unauthorized user of the first client device 202 cannot generate or utilize a verified digital visual code. To further ensure that verified digital visual codes are not erroneously captured and utilized by a third party, the digital identification system 200 can also require a unique user interaction with the first client device 202 to display the verified digital visual code. For example, the digital identification system 200 can require that the first user of the first client device 202 interact with a touchscreen in a particular manner (e.g., two-finger swipe, three-finger tap, double touch in opposite extremes of a touchscreen) to avoid accidentally or prematurely displaying a verified digital visual code.
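The single-use behavior can be modeled as a registry that unlinks a verified code from its account the first time it is redeemed, so a third device replaying the same image obtains nothing; an expiry time is included to mirror the time-based modification discussed below. The registry layout is an assumed stand-in for the database the server device(s) 206 maintain.

import time

# Assumed registry mapping verified-code identifiers to account and policy.
VERIFIED_CODES = {
    "code-abc123": {"user_id": "joe.smoe1466", "action": "payment",
                    "expires_at": time.time() + 3600},  # one-hour lifetime
}

def redeem_verified_code(code_id: str) -> dict:
    """Grant privileges exactly once, then de-activate the code (step 238
    without the confirmation round trips of steps 232-236)."""
    entry = VERIFIED_CODES.pop(code_id, None)  # pop enforces single use
    if entry is None or entry["expires_at"] < time.time():
        raise PermissionError("verified code is de-activated or expired")
    return entry

print(redeem_verified_code("code-abc123")["action"])  # 'payment'
# A second call with the same code identifier now raises PermissionError.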
In addition to enabling users to complete a payment transaction utilizing digital visual codes, in one or more embodiments, the digital identification system 200 also makes it easier and faster to utilize computing devices to connect individual users of a networking system. For example, in embodiments that are implemented in conjunction with a social networking system, the digital identification system 200 can facilitate sending friend requests and providing information between a first and second user of the social networking system. For instance, FIG. 2C illustrates the digital identification system 200 performing additional steps (via the first client device 202, the second client device 204, and the server device(s) 206) in a method of connecting users of a social networking system utilizing a digital visual code.

In particular, the second client device 204 can perform the step 248 of sending a friend request to the server device(s) 206. Moreover, as shown, the server device(s) 206 can perform the step 250 of sending a friend confirmation request to the first client device 202. For example, the server device(s) 206 can send a notification to the first client device 202 indicating that the second user of the second client device 204 seeks to connect with (i.e., add as a “friend”) the first user of the first client device 202. As shown in FIG. 2C, upon receiving the friend request, the first client device 202 can perform the step 252 of approving the friend request. For example, the first client device 202 can provide a user interface with a message requesting approval from the first user of the first client device 202 to accept the friend request of the second user of the second client device 204. As illustrated, upon the first user of the first client device 202 approving the friend request, the first client device 202 performs the step 254 of sending a friend request confirmation to the server device(s) 206. In response, the server device(s) 206 perform the step 255 of connecting users as friends. For example, the server device(s) 206 can maintain a database of users of the networking system that have connected as friends, and the server device(s) 206 can modify the database to indicate that the first user and the second user are connected as friends, with associated privileges (e.g., allowing the second client device 204 to obtain information and interact with the first user, view social media posts by the first user, view social media comments by the first user, view contact information of the first user, or view other information stored in an account associated with the first user). Moreover, as shown in FIG. 2C, the digital identification system 200 can also perform the steps 256, 258 of sending friend confirmation messages to the first client device 202 and the second client device 204.

As discussed above, in one or more embodiments, the digital identification system 200 can embed action identifiers in digital visual codes. In relation to FIGS. 2A, 2C, the digital identification system 200 can embed an action identifier corresponding to a friend request. Accordingly, at the step 218 the second client device 204 can determine the action identifier corresponding to the friend request and send the action identifier to the server device(s) 206. Upon receiving the action identifier, the server device(s) 206 can automatically send the friend confirmation request (at the step 250, without the step 248). In this manner, the digital identification system 200 can further streamline connecting users of a networking system utilizing digital visual codes.

As mentioned above, in one or more embodiments, the digital identification system 200 can further facilitate connecting users of a networking system by providing a verified digital visual code. In particular, in one or more embodiments, the digital identification system 200 provides a verified digital visual code that avoids the need for confirmation and/or approval from the first client device 202. For example, in relation to FIGS. 2A, 2C, the digital identification system 200 can generate a verified digital visual code (at the step 208). Upon receiving an indication of the verified digital visual code (at the step 220), the server device(s) 206 can automatically connect the first user and the second user as friends (at step 255). Thus, the digital identification system 200 can avoid steps 248, 250, 252, and 254 by utilizing a verified digital visual code.
Similar to FIGS. 2B and 2C, the digital identification system 200 can also enable a variety of additional privileges. For example, upon receiving an identifier based on a digital visual code, the digital identification system 200 can initiate a messaging thread between the first user of the first client device 202 and the second user of the second client device 204. As described above, the digital identification system 200 can embed messaging action identifiers in digital visual codes and/or generate verified digital visual codes with messaging action identifiers to automatically initiate a message thread upon scanning a digital visual code. Similarly, upon receiving a digital visual code, the digital identification system 200 can also add or invite users to a digital event. For example, upon receiving an identifier from the second client device 204, the server device(s) 206 can add (or invite) the second user of the second client device 204 to an event associated with the first user of the first client device 202. As described above, the digital identification system 200 can embed digital event action identifiers in digital visual codes and/or generate verified digital visual codes to automatically add (or invite) a user to a digital event upon scanning a digital visual code.

Although FIGS. 2A-2C describe digital visual codes in relation to a first user and a second user, it will be appreciated that the digital identification system 200 can also generate and utilize digital visual codes corresponding to a group or a plurality of users. For example, the digital identification system 200 can generate a digital visual code that embeds an identifier corresponding to a defined group on a social networking system, and thus facilitate payment transactions, connections, events, messages, or other interaction with the group. Similarly, the digital identification system 200 can generate a digital visual code that embeds one or more identifiers corresponding to a plurality of users. For example, a plurality of individuals meeting a new friend for the first time can utilize the digital identification system 200 to generate a digital visual code that embeds identifiers corresponding to the plurality of individuals. The new friend can scan the digital visual code to identify and interact with the plurality of individuals from the single digital visual code. For example, the new friend can scan the digital visual code and automatically begin a message thread with the plurality of individuals, initiate a payment transaction with the plurality of individuals, connect with the plurality of individuals, call the plurality of individuals, or plan digital events with the plurality of individuals. Similarly, a plurality of users at a restaurant can utilize the digital identification system 200 to generate a digital visual code with an identifier corresponding to the plurality of users. A representative of the restaurant can utilize a mobile device to scan the digital visual code and enter into a payment transaction with the plurality of users (e.g., payment for a meal).

In addition, in one or more embodiments, the digital identification system 200 can also refresh, update, and/or modify digital visual codes. Indeed, as discussed previously in relation to verified digital visual codes, the digital identification system 200 can maintain a database of digital visual codes (e.g., identifiers) and corresponding users, groups, actions, and/or information.
The digital identification system 200 can refresh, update, and/or modify the database and generate refreshed, updated, and/or modified digital visual codes. For example, in one or more embodiments, the digital identification system 200 modifies digital visual codes after the digital visual code has been utilized a certain number of times (e.g., one time or five times). Similarly, the digital identification system 200 can modify digital visual codes after a certain period of time (e.g., one day, one week, or one month). The digital identification system 200 can send modified digital visual codes to a client device and then utilize the modified digital visual codes to provide privileges to one or more computing devices.

Turning now to FIGS. 3A-3E, additional detail will be provided regarding a user interface for generating and utilizing digital visual codes in accordance with one or more embodiments. Indeed, as mentioned above, in one or more embodiments, the digital identification system 200 is implemented in conjunction with a networking application (such as a digital communication application and/or social networking application) that provides a user interface for displaying and/or scanning digital visual codes in addition to identifying and interacting with other users.

In particular, FIG. 3A illustrates a first computing device 300 displaying a user interface 302 corresponding to a digital communication application 304 (e.g., the FACEBOOK® MESSENGER® software application) in accordance with one or more embodiments of the digital identification system 200. Specifically, as shown in FIG. 3A, the user interface 302 comprises a plurality of communication threads 306a-306n. Upon user interaction with one of the plurality of communication threads 306a-306n (e.g., a touch gesture), the digital identification system 200 can modify the user interface 302 to display one or more communications corresponding to a particular communication thread. Moreover, the digital identification system 200 can modify the user interface 302 to enable a user to draft, send, review, and/or receive additional electronic communications.

The digital identification system 200 can modify the user interface 302 to display a digital visual code. In particular, FIG. 3A illustrates a user information element 308. Upon user interaction with the user information element 308, the user interface 302 provides a digital visual code corresponding to the user. For example, FIG. 3B illustrates a digital visual code 310 upon user interaction with the user information element 308. With regard to the embodiment of FIG. 3B, the first computing device 300 receives the digital visual code 310 from a remote server. In particular, as described above, the remote server generates the digital visual code based on an identifier of the user (“Joe Smoe”) in relation to a social networking system. Although a remote server generates the digital visual code in relation to FIG. 3B, in one or more embodiments, the first computing device 300 generates the digital visual code 310. For example, the first computing device 300 can install a software application capable of generating digital visual codes of a particular format based on an identifier corresponding to the user. The first computing device 300 can access the identifier corresponding to the user and generate the digital visual code based on the identifier.
As shown in FIG. 3B, in addition to the digital visual code 310, the user interface 302 also includes an action-specific digital visual code element 312. Upon user interaction with the action-specific digital visual code element 312, the digital identification system 200 can generate a digital visual code that embeds an action identifier. For example, in one or more embodiments, upon user interaction with the action-specific digital visual code element 312, the digital identification system 200 provides a plurality of selectable elements that correspond to particular actions. For example, the user interface 302 can provide a selectable element for initiating a payment transaction, a selectable element for sending contact information, a selectable element for adding a friend, a selectable element for inviting a contact to an event, or other actions described herein. The digital identification system 200 can also obtain (e.g., based on user input) additional information corresponding to the particular action, such as a payment amount or event details. Upon selection of a particular action and user input of information corresponding to the action, the digital identification system 200 (either at the first computing device 300 or via a remote server) can generate a new digital visual code corresponding to the particular action and/or information. In this manner, the digital identification system 200 can generate digital visual codes specific to particular actions.

Furthermore, as shown in FIG. 3B, the user interface 302 also includes a verified digital visual code element 314 (e.g., an element to generate a one-time use digital visual code). In particular, in one or more embodiments, upon user interaction with the verified digital visual code element 314, the digital identification system 200 provides a user interface for providing verification credentials (e.g., a username, code, or password). Moreover, in one or more embodiments, the digital identification system 200 then provides a user interface for selection of a particular action (e.g., initiate a payment or another action) and/or additional information (e.g., payment amount or other information). In response, the digital identification system 200 can generate (either at the first computing device 300 or a remote server) a verified digital visual code. As described above, in one or more embodiments, the verified digital visual code is only valid for a single use. Furthermore, in one or more embodiments of the digital identification system 200, the verified digital visual code allows a user to avoid subsequent confirmation, approval, or authentication after scanning the verified digital visual code.

As mentioned previously, in one or more embodiments, the digital identification system 200 grants privileges to a second computing device based on the second computing device scanning a digital visual code. FIGS. 3C-3E illustrate scanning a digital visual code by a second client device and obtaining privileges in accordance with one or more embodiments. In particular, FIG. 3C illustrates a second computing device 320 of a second user displaying the user interface 302 corresponding to the digital communication application 304. As shown in FIG. 3C, the user interface 302 (upon selection of the “people” element) displays a plurality of options for identifying other users together with a plurality of contacts corresponding to the second user (e.g., friends of the second user in relation to a social networking system).
Specifically, the user interface 302 displays a selectable scan code element 326. Upon user interaction with the scan code element 326, the user interface 302 provides tools for scanning a digital visual code. For example, FIG. 3D illustrates a user interface for scanning digital visual codes according to one or more embodiments. In particular, FIG. 3D illustrates the second computing device 320 upon selection of the scan code element 326. Specifically, the user interface 302 of FIG. 3D includes an image feed 322 from an imaging device (i.e., a scanning device) operatively coupled to the second computing device 320. In particular, the image feed 322 displays sequential images captured by the imaging device in real time. As shown, the second user of the second computing device 320 can utilize the image feed 322 to arrange the imaging device to scan the digital visual code 310. Indeed, in FIG. 3D, the image feed 322 shows that the imaging device of the computing device 320 is aimed at the digital visual code 310 displayed on the first computing device 300. Accordingly, the imaging device affixed to the computing device 320 scans the digital visual code 310. In particular, the computing device 320 scans the digital visual code 310 and determines an identifier embedded in the digital visual code 310.

As described above, in one or more embodiments, the digital identification system 200 utilizes the second computing device 320 to scan and decode the digital visual code 310 based on the anchor points 327a-327d and the orientation anchor 329. Indeed, it will be appreciated that in capturing the digital visual code 310, the second computing device 320 can be rotated at a variety of angles relative to the first computing device 300. Accordingly, the digital visual code 310 may be rotated and/or distorted, interfering with the ability of the digital identification system 200 to decode the digital visual code. In relation to the embodiment of FIG. 3D, the digital identification system 200 utilizes the orientation anchor 329 to properly orient the digital visual code. In particular, the digital identification system 200 compares the rotation/orientation of the orientation anchor 329 with an image of the orientation anchor at a standard orientation stored on the second computing device 320. The digital identification system 200 can then properly orient the remainder of the digital visual code based on the orientation of the orientation anchor 329.

Similarly, with regard to FIG. 3D, the digital identification system 200 also utilizes the anchor points 327a-327d to decode the digital visual code 310. In particular, the digital identification system 200 identifies the location of digital visual code points (e.g., affirmatively marked digital visual code points and unmarked digital visual code points) in relation to the anchor points. Accordingly, the anchor points 327a-327d provide a reference for identifying the relative location of the digital visual code points, and determining what digital visual code points are marked and/or unmarked. Based on which digital visual code points are marked or unmarked, the second computing device 320 can decode an identifier. For example, the digital identification system 200 can determine a binary code from the digital visual code points (e.g., generating a binary code of “0” and “1” bits based on whether corresponding digital visual code points are marked or unmarked). Moreover, the digital identification system 200 can convert the binary code to a hash, user ID, and/or username. In this manner, the digital identification system 200 can determine an identifier from a digital visual code.
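Orientation correction and bit read-out can be sketched as two small steps: rotate the expected sampling locations back by the orientation anchor's measured angle, then read each location as a “1” (marked) or “0” (open space). The anchor angle is assumed to have been estimated already by the imaging pipeline, and a full decoder would also correct perspective distortion.

import math

def reorient(points: list, anchor_angle: float) -> list:
    """Rotate sampling locations by the negative of the orientation
    anchor's measured angle, restoring the standard orientation."""
    c, s = math.cos(-anchor_angle), math.sin(-anchor_angle)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def read_binary_code(marked_indices: set, num_points: int) -> str:
    """Step 218 read-out: marked points decode to '1', open space to '0'."""
    return "".join("1" if i in marked_indices else "0" for i in range(num_points))

upright = reorient([(13.5, 0.0), (0.0, 13.5)], math.radians(90))
print(read_binary_code({0, 1, 2, 5}, 10))  # '1110010000'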
In addition to the image feed 322, the user interface 302 of FIG. 3D also comprises a selectable user code element 324 and the selectable scan code element 326. As shown, the selectable scan code element 326 is currently activated; thus, the user interface 302 displays the camera feed and the second computing device 320 searches for a digital visual code to scan. Upon user interaction with the user code element 324, the user interface 302 can also display a digital visual code corresponding to the second user.

Furthermore, as illustrated in FIG. 3D, the user interface 302 also includes the camera roll element 328. Upon user interaction with the camera roll element 328, the digital communication application 304 can access a repository of digital media items stored on the second computing device 320 and identify digital visual codes in the repository of digital media items. For example, if the second user utilizes the second computing device 320 to capture a digital image or digital video comprising a digital visual code, the digital identification system 200 can access the digital image or digital video from the repository of digital media items and determine an identifier.

As discussed above, upon scanning a digital visual code (and sending an identifier embedded in the digital visual code to a remote server), in one or more embodiments, the digital identification system 200 provides one or more privileges to a client device (and/or a user of a client device). For instance, the digital identification system 200 can provide information corresponding to an account of another user and/or begin a message thread between users. For example, FIG. 3E shows the second computing device 320 upon scanning the digital visual code 310 from the first computing device 300. In particular, FIG. 3E illustrates the user interface 302 upon the second computing device 320 obtaining privileges to initiate electronic communications with the user of the first computing device 300 and access information from an account of the first user of the first computing device 300. Specifically, the user interface 302 comprises a message thread area 330 and a message composition area 332. Upon user interaction with the message composition area 332, the second user of the second computing device 320 can compose and send a message to the user of the first computing device 300, which will appear in the message thread area 330.

Moreover, the user interface 302 contains a user identification element 334. The user identification element 334 provides information from an account of the user of the first computing device 300 to the second user of the second computing device 320. Indeed, as shown, the user identification element 334 indicates a number of mutual connections (e.g., common “friends” on a social networking system), an occupation, and a location corresponding to the first user of the first computing device 300. Upon user interaction with the user identification element 334, the second computing device 320 can obtain and provide additional information corresponding to the user of the first computing device 300. Moreover, the second computing device 320 can provide options for further interactions with the user of the first computing device 300.
Although FIG. 3E illustrates the user interface 302 with elements for composing and displaying messages, it will be appreciated that the digital identification system 200 can provide a variety of other privileges upon scanning a digital visual code. For example, rather than providing a user interface for initiating a message thread, upon scanning the digital visual code 310, the second computing device 320 can generate a user interface for initiating a payment transaction with the user of the first computing device 300, connecting with the user of the first computing device 300, or scheduling events with the user of the first computing device 300.

Although FIGS. 3A-3E illustrate displaying a digital visual code on a computing device, in one or more embodiments, the digital visual code can also be displayed without a computing device. For example, in one or more embodiments, the digital identification system 200 generates a digital visual code that is affixed to a real-world item (e.g., a piece of art, a car, an advertisement, or a product). For example, a first user can print a digital visual code and place the digital visual code on a product for sale. A second user can scan the digital visual code placed on the product and utilize the digital identification system 200 to identify and interact with the first user (e.g., via a social networking system). Similarly, an artist can create a page on a social networking system that contains articles regarding the artist's work. The artist can affix a digital visual code to a piece of art work. A user can then scan the digital visual code and obtain information regarding the artist's work.

Furthermore, although FIGS. 1A-1E and 3A-3E illustrate a particular branding associated with the digital visual codes, it will be appreciated that the digital identification system 200 can incorporate other colors, themes, shapes, icons, and brands in generating a digital visual code. For example, in one or more embodiments, the digital identification system 200 provides a user interface that allows users to modify the appearance of their digital visual code. Indeed, as discussed above, users can select a digital media item to utilize in conjunction with a digital visual code. In addition, users can select a color of a digital visual code. Similarly, users can select an icon or brand to utilize as an orientation anchor, or a trademark or other image to surround the digital visual code. In this manner, the digital identification system 200 enables individuals and businesses to generate unique digital visual codes that reflect a particular style, theme, or brand.

Turning now to FIG. 4, additional detail will be provided regarding various components and capabilities of the digital identification system 200 in accordance with one or more embodiments. In particular, FIG. 4 illustrates an example embodiment of a digital identification system 400 (e.g., the digital identification system 200 described above) in accordance with one or more embodiments. As shown, the digital identification system 400 includes a first client device 402 (e.g., the first client device 202 or the first computing device 300), server device(s) 404 (e.g., the server device(s) 206), and a second client device 406 (e.g., the second client device 204 or the second computing device 320). As shown, the digital identification system 400 includes various components on the first client device 402, the server device(s) 404, and the second client device 406.
For example, FIG. 4 illustrates that the client devices 402, 406 each include a user interface manager 408, a user input detector 410, a digital visual code manager 412, a client application 414, a scanning device 416, and a device storage manager 418 (comprising digital visual code 418a and user identifier 418b). Furthermore, as shown, the server device(s) 404 include a networking application 420 that includes a communication manager 422, a digital visual code engine 424, an identification facility 426, an authentication facility 428, a social graph 430 (comprising node information 430a and edge information 430b), and a server storage manager 432 (comprising user accounts 432a and a digital visual code database 432b).
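The FIG. 4 component layout enumerated above can be summarized in a short, purely illustrative sketch; the attribute names simply mirror the reference numerals in the text and are not drawn from any actual implementation.

```python
from types import SimpleNamespace

def make_client_device():
    # Components 408-418 hosted on each of the client devices 402, 406.
    return SimpleNamespace(
        user_interface_manager=SimpleNamespace(),       # 408
        user_input_detector=SimpleNamespace(),          # 410
        digital_visual_code_manager=SimpleNamespace(),  # 412
        client_application=SimpleNamespace(),           # 414
        scanning_device=SimpleNamespace(),              # 416
        device_storage_manager=SimpleNamespace(         # 418
            digital_visual_code=None,                   # 418a
            user_identifier=None),                      # 418b
    )

def make_networking_application():
    # Components 420-432 hosted on the server device(s) 404.
    return SimpleNamespace(
        communication_manager=SimpleNamespace(),        # 422
        digital_visual_code_engine=SimpleNamespace(),   # 424
        identification_facility=SimpleNamespace(),      # 426
        authentication_facility=SimpleNamespace(),      # 428
        social_graph=SimpleNamespace(                   # 430
            node_information={},                        # 430a
            edge_information={}),                       # 430b
        server_storage_manager=SimpleNamespace(         # 432
            user_accounts={},                           # 432a
            digital_visual_code_database={}),           # 432b
    )

client = make_client_device()
server = make_networking_application()
```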
As mentioned, the client devices 402, 406 can include the user interface manager 408. The user interface manager 408 can provide, manage, and/or control a graphical user interface (or simply “user interface”) for use with the digital identification system 400. In particular, the user interface manager 408 may facilitate presentation of information by way of an external component of the client devices 402, 406. For example, the user interface manager 408 may display a user interface by way of a display screen associated with the client devices 402, 406. The user interface may be composed of a plurality of graphical components, objects, and/or elements that allow a user to perform a function. The user interface manager 408 can present, via the client devices 402, 406, a variety of types of information, including text, images, video, audio, characters, or other information. Moreover, the user interface manager 408 can provide a variety of user interfaces specific to any variety of functions, programs, applications, plug-ins, devices, operating systems, and/or components of the client devices 402, 406 (e.g., the user interface 302).

The user interface manager 408 can provide a user interface with regard to a variety of operations or applications (e.g., the digital communication application 304 and/or the client application 414). For example, the user interface manager 408 can provide a user interface that facilitates composing, sending, or receiving an electronic communication. Similarly, the user interface manager 408 can generate a user interface that facilitates creating, modifying, searching for, and/or inviting to a digital event. Moreover, the user interface manager 408 can generate a user interface for interacting via a social networking system, such as viewing social media posts, viewing social media comments, connecting with other social networking system users, etc. Additional details with respect to various example user interface elements are described throughout with regard to various embodiments containing user interfaces.

In addition to the user interface manager 408, as shown in FIG. 4, the client devices 402, 406 also include the user input detector 410. The user input detector 410 can detect, identify, monitor, receive, process, capture, and/or record various types of user input. For example, the user input detector 410 may be configured to detect one or more user interactions with respect to a user interface. As referred to herein, a “user interaction” refers to conduct performed by a user (or a lack of conduct performed by a user) to control the function of a computing device. “User input,” as used herein, refers to input data generated in response to a user interaction.

The user input detector 410 can operate in conjunction with any number of user input devices or computing devices (in isolation or in combination), including personal computers, laptops, smartphones, smart watches, tablets, touchscreen devices, televisions, personal digital assistants, mouse devices, keyboards, track pads, or stylus devices. The user input detector 410 can detect and identify various types of user interactions with user input devices, such as select events, drag events, scroll events, and so forth. For example, in the event the client devices 402, 406 include a touch screen, the user input detector 410 can detect one or more touch gestures (e.g., swipe gestures, tap gestures, pinch gestures, or reverse pinch gestures) from a user that form a user interaction.

Furthermore, the user input detector 410 can detect or identify user input in any form. For example, the user input detector 410 can detect a user interaction with respect to a variety of user interface elements, such as selection of a graphical button, a drag event within a graphical object, or a particular touch gesture directed to one or more graphical objects or graphical elements of a user interface. Similarly, the user input detector 410 can detect user input directly from one or more user input devices. The user input detector 410 can communicate with, and thus detect user input with respect to, a variety of programs, applications, plug-ins, operating systems, user interfaces, or other implementations in software or hardware. For example, the user input detector 410 can recognize user input of an electronic communication and/or event details provided in conjunction with the client application 414.

As further illustrated in FIG. 4, the client devices 402, 406 include the digital visual code manager 412. The digital visual code manager 412 can create, generate, provide for display, scan, identify, decode, interpret, and/or process digital visual codes. For example, as discussed previously, the digital visual code manager 412 can process a digital visual code captured by a scanning device (e.g., the scanning device 416). In particular, the digital visual code manager 412 can process a digital visual code and identify data embedded in the digital visual code, such as an identifier of an account corresponding to a user, an action identifier, product information, product cost information, coupons, user information, user preferences, and/or other information.

As mentioned previously, the digital visual code manager 412 can also create a digital visual code. For example, the digital visual code manager 412 can receive one or more identifiers (e.g., from the digital visual code engine 424) and generate a digital visual code reflecting the one or more identifiers. The digital visual code manager 412 can also generate digital visual codes that reflect other information, such as selected products, user information, and/or coupons, as well as digital visual codes that reflect information regarding groups or multiple users.

The client devices 402, 406 also include the client application 414. In one or more embodiments, the client application 414 is a native application installed on the client devices 402, 406. For example, the client application 414 on one or both client devices 402, 406 may be a mobile application that installs and runs on a mobile device, such as a smartphone or a tablet.
Alternatively, the client application 414 may be a desktop application, widget, or other form of a native computer program that runs on a desktop device or laptop device. Alternatively, the client application 414 may be a remote application, such as a web application executed within a web browser, that the client devices 402, 406 access. For example, in one or more embodiments the client application 414 comprises a digital communication application (e.g., an instant messaging application, e-mail application, or texting application). Similarly, in one or more embodiments, the client application 414 comprises a social networking application.

Although the client application 414 is illustrated as an individual component of the first client device 402 and the second client device 406, it will be appreciated that in one or more embodiments, other components of the first client device 402 are implemented in conjunction with the client application 414. For example, in one or more embodiments, the user interface manager 408, the user input detector 410, and the digital visual code manager 412 are implemented as part of the client application 414.

As illustrated in FIG. 4, the client devices 402, 406 also include the scanning device 416. The scanning device 416 can identify, capture, scan, and analyze one or more codes. For example, the scanning device 416 can capture a digital visual code. In particular, the scanning device 416 can scan a digital visual code provided for display via another client device. Similarly, the scanning device 416 can provide an image feed for display (e.g., via the user interface manager 408) such that users of the client devices 402, 406 can identify a digital visual code for scanning.

As shown in FIG. 4, the client devices 402, 406 also include the device storage manager 418. The device storage manager 418 maintains data for the digital identification system 400 and can maintain data of any type, size, or kind, as necessary to perform the functions of the digital identification system. As illustrated in FIG. 4, the device storage manager 418 includes digital visual code 418a (i.e., one or more digital visual codes utilized by the client devices 402, 406) and user identifier 418b (i.e., one or more user identifiers utilized by the client devices 402, 406).

As briefly mentioned above, in addition to the client devices 402, 406, the digital identification system 400 can further include a networking application 420 that is implemented in whole or in part on the server(s) 404. In one or more embodiments of the present disclosure, the networking application 420 is part of a social networking system (such as, but not limited to, FACEBOOK®), but in other embodiments the networking application 420 may comprise another type of application, including but not limited to a digital communication application (e.g., an instant messaging application or an e-mail application), search engine application, digital event application, payment application, banking application, or any number of other application types that utilize user accounts.

As illustrated, in one or more embodiments where the networking application 420 comprises a social networking system, the networking application 420 may include the social graph 430 for representing and analyzing a plurality of users and concepts. Node storage of the social graph 430 can store node information 430a comprising nodes for users, nodes for concepts, nodes for transactions, and nodes for items.
Edge storage of the social graph 430 can store edge information 430b comprising relationships between nodes and/or actions occurring within the social networking system. Further detail regarding social networking systems, social graphs, edges, and nodes is presented below with respect to FIGS. 9-10.

As illustrated in FIG. 4, the server(s) 404 include the communication manager 422. The communication manager 422 processes messages received from the client devices 402, 406. The communication manager 422 can act as a directory for messages or data sent to and received from users interacting via the networking application 420. For example, the communication manager 422 can act as a directory for messages or data in relation to parties to a payment transaction, in relation to participants in a message thread, in relation to attendees of a digital event, or in relation to users of a social networking system.

As shown in FIG. 4, the networking application 420 also includes the digital visual code engine 424. The digital visual code engine 424 can generate, create, provide, modify, alter, change, send, and/or receive one or more digital visual codes. In particular, the digital visual code engine 424 can generate digital visual codes embedding identifiers corresponding to an account of one or more users or groups. For example, the digital visual code engine 424 can generate a digital visual code that embeds a username corresponding to a user account (e.g., from the user accounts 432a). Moreover, the digital visual code engine 424 can send the digital visual code to the client devices 402, 406. As mentioned above, the digital visual code engine 424 can generate digital visual codes that embed a variety of other types of information, such as action identifiers, product information, coupon information, or user information.

As described above, in one or more embodiments, the digital visual code engine 424 generates a digital array comprising a plurality of digital visual code points. The digital visual code engine 424 can embed one or more identifiers in the digital array by marking digital visual code points corresponding to an identifier (e.g., marking digital visual code points to mirror bits in a binary code). Furthermore, in one or more embodiments, the digital visual code engine 424 generates a digital array of digital visual code points arranged in concentric circles (e.g., four concentric circles) surrounding a digital media item area. Upon marking digital visual code points corresponding to one or more identifiers, the digital visual code engine 424 can connect adjacent digital visual code points. For example, the digital visual code engine 424 can connect adjacent digital visual code points in each concentric circle of the digital array with curves.
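As a concrete illustration of the digital array construction just described, the following minimal sketch lays out code points on four concentric circles, marks them to mirror the bits of a binary code, and finds the adjacent marked points that a renderer would join with curves. All constants (sixteen points per ring, the radii) and function names are assumptions; the patent does not fix these parameters.

```python
import math

def layout_points(num_rings=4, points_per_ring=16, inner_radius=1.0, ring_gap=0.3):
    # (x, y, ring) coordinates for code points on concentric circles that
    # would surround a central digital media item area.
    points = []
    for ring in range(num_rings):
        radius = inner_radius + ring * ring_gap
        for i in range(points_per_ring):
            theta = 2 * math.pi * i / points_per_ring
            points.append((radius * math.cos(theta), radius * math.sin(theta), ring))
    return points

def mark_points(points, bits):
    # Pair each point with one bit of the binary code; "1" marks the point.
    return [(point, bit == "1") for point, bit in zip(points, bits)]

def adjacent_marked_pairs(marked, points_per_ring=16):
    # Neighboring marked points within each ring; a renderer would join
    # each such pair with a curve along the circle.
    pairs = []
    for start in range(0, len(marked), points_per_ring):
        ring = marked[start:start + points_per_ring]
        for i in range(points_per_ring):
            j = (i + 1) % points_per_ring        # wrap around the circle
            if ring[i][1] and ring[j][1]:
                pairs.append((start + i, start + j))
    return pairs

bits = format(0x0123456789ABCDEF, "064b")        # a stand-in 64-bit code
marked = mark_points(layout_points(), bits)
print(len(adjacent_marked_pairs(marked)), "adjacent marked pairs to connect")
```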
As mentioned, the digital visual code engine 424 can also modify one or more digital visual codes. For example, the digital visual code engine 424 can modify the digital visual code database 432b to change the digital visual code corresponding to a particular user (e.g., user account), action, or information. Similarly, in one or more embodiments, the digital visual code engine 424 modifies digital visual codes by modifying identifiers utilized to create the digital visual codes (e.g., by modifying the identifiers stored in the user accounts 432a).

In one or more embodiments, the digital visual code engine 424 can generate a verified digital visual code. A verified digital visual code (e.g., a single-use digital visual code) can automatically perform a function upon scanning by a client device. The digital identification system 400 can utilize verified digital visual codes to streamline identification and authentication of a user and interaction between users of the digital identification system 400.

As shown in FIG. 4, the networking application 420 can also include the identification facility 426. The identification facility 426 can identify one or more users or actions. In particular, the identification facility 426 can identify an account of a user based on an identifier received from one of the client devices 402, 406. For example, the identification facility 426 can receive an identifier from the client device and identify an account corresponding to the identifier from the user accounts 432a. Similarly, the identification facility 426 can identify actions corresponding to action identifiers provided by the client devices 402, 406.

As illustrated in FIG. 4, the networking application 420 can also include the authentication facility 428. The authentication facility 428 can verify and/or authenticate one or more users and/or client devices. Moreover, the authentication facility 428 can provide one or more privileges and/or access. For example, in one or more embodiments the client devices 402, 406 provide verification credentials corresponding to a user of the client devices. The authentication facility 428 can verify the accuracy of the verification credentials. Similarly, upon receiving an identifier and determining an account corresponding to the identifier (e.g., via the identification facility 426), the authentication facility 428 can provide one or more privileges. For example, as described above, the authentication facility 428 can provide privileges to access information from a user account (e.g., the user accounts 432a), initiate payment transactions, add or invite to digital events, connect with another user (e.g., add as a friend), and/or initiate electronic communications.
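A hedged sketch of this identification and authentication flow may help: the identification facility resolves a received identifier to an account, and the authentication facility grants only supported privileges. The dictionary schema, identifier value, and privilege names below are assumptions for illustration.

```python
# Stand-in for the user accounts 432a; the key is a hypothetical identifier.
USER_ACCOUNTS = {
    "a1b2c3": {"username": "first_user", "granted": set()},
}

def identify_account(identifier):
    # Identification facility 426: map a received identifier to an account.
    return USER_ACCOUNTS.get(identifier)

def grant_privileges(account, requested):
    # Authentication facility 428: grant only privileges the system supports.
    supported = {"view_profile", "message", "payment", "event_invite", "connect"}
    granted = requested & supported
    account["granted"] |= granted
    return granted

account = identify_account("a1b2c3")
if account is not None:
    print(grant_privileges(account, {"message", "view_profile"}))
```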
The server(s) 404 can also include the server storage manager 432. The server storage manager 432 maintains data for the digital identification system 400 and can maintain data of any type, size, or kind, as necessary to perform the functions of the digital identification system. As illustrated in FIG. 4, the server storage manager 432 includes the user accounts 432a and the digital visual code database 432b.

The server storage manager 432 can manage the user accounts 432a and the digital visual code database 432b corresponding to a plurality of users. Specifically, when a user registers with the networking application 420 (e.g., via the client application 414), the networking application 420 (e.g., via the server storage manager 432) creates a user account for the user. The server storage manager 432 can store information about the user for maintaining and displaying in a visible user profile for the user. For example, the user accounts 432a can maintain personal information, identification information, location information, images uploaded by the user, contacts, and other information that the user provides to the networking application 420 to populate the user account. In one or more embodiments, the server storage manager 432 also associates identifiers or other information with digital visual codes and/or user accounts via the digital visual code database 432b.

For example, the digital visual code database 432b can comprise one or more arrays, spreadsheets, or tables that identify a user corresponding to an identifier and/or a digital visual code. As described above, the digital identification system 400 can modify and update the digital visual code database 432b to refresh, update, and/or modify digital visual codes. Furthermore, as described, the digital identification system 400 can utilize the digital visual code database 432b to provide digital visual codes to the client devices 402, 406.

Each of the components of the first client device 402, the server(s) 404, and the second client device 406 can communicate with each other using any suitable communication technologies. It will be recognized that although the components of the first client device 402, the server(s) 404, and the second client device 406 are shown to be separate in FIG. 4, any of the components may be combined into fewer components, such as into a single facility or module, or divided into more components as may serve a particular embodiment. Moreover, while FIG. 4 describes certain components as part of the client application 414 and other components as part of the networking application 420, the present disclosure is not so limited. In alternative embodiments, one or more of the components shown as part of the client application 414 can be part of the networking application 420 or vice versa.

The components can include software, hardware, or both. For example, the components can include computer instructions stored on a non-transitory computer-readable storage medium and executable by at least one processor of the client devices 402, 406 or the server(s) 404. When executed by the at least one processor, the computer-executable instructions can cause the client devices 402, 406 or the server(s) 404 to perform the methods and processes described herein. Alternatively, the components can include hardware, such as a special purpose processing device to perform a certain function or group of functions. Moreover, the components can include a combination of computer-executable instructions and hardware.

Furthermore, the components 408-432 of the digital identification system 400 may, for example, be implemented as one or more stand-alone applications, as one or more modules of an application, as one or more plug-ins, as one or more library functions or functions that may be called by other applications, and/or as a cloud-computing model. Thus, the components 408-432 of the digital identification system 400 may be implemented as a stand-alone application, such as a desktop or mobile application. Furthermore, the components 408-432 of the digital identification system 400 may be implemented as one or more web-based applications hosted on a remote server. Moreover, the components of the digital identification system 400 may be implemented in a suite of mobile device applications or “apps.”

Turning now to FIG. 5, further information will be provided regarding implementation of the digital identification system 400. Specifically, FIG. 5 illustrates a schematic diagram of one embodiment of an exemplary system environment (“environment”) 500 in which the digital identification system 400 can operate. As illustrated in FIG. 5, the environment 500 can include client devices 502a-502n, a network 504, and server(s) 506. The client devices 502a-502n, the network 504, and the server(s) 506 may be communicatively coupled with each other either directly or indirectly (e.g., through the network 504).
The client devices 502a-502n, the network 504, and the server(s) 506 may communicate using any communication platforms and technologies suitable for transporting data and/or communication signals, including any known communication technologies, devices, media, and protocols supportive of remote data communications, examples of which will be described in more detail below.

As just mentioned, and as illustrated in FIG. 5, the environment 500 can include the client devices 502a-502n. The client devices 502a-502n (e.g., the client devices 402, 406) may comprise any type of computing device. For example, the client devices 502a-502n may comprise one or more personal computers, laptop computers, mobile devices, mobile phones, tablets, special purpose computers, TVs, or other computing devices. In one or more embodiments, the client devices 502a-502n may comprise computing devices capable of communicating with each other or the server(s) 506. The client devices 502a-502n may comprise one or more computing devices as discussed in greater detail below in relation to FIGS. 8-9.

As illustrated in FIG. 5, the client devices 502a-502n and/or the server(s) 506 may communicate via the network 504. The network 504 may represent a network or collection of networks (such as the Internet, a corporate intranet, a virtual private network (VPN), a local area network (LAN), a wireless local area network (WLAN), a cellular network, a wide area network (WAN), a metropolitan area network (MAN), or a combination of two or more such networks). Thus, the network 504 may be any suitable network over which the client devices 502a-502n (or other components) may access the server(s) 506 or vice versa. The network 504 will be discussed in more detail below in relation to FIGS. 8-9.

Moreover, as illustrated in FIG. 5, the environment 500 also includes the server(s) 506. The server(s) 506 (e.g., the server(s) 404) may generate, store, receive, and/or transmit any type of data. For example, the server(s) 506 may receive data from the client device 502a and send the data to the client device 502b. In one example, the server(s) 506 can host a social network. In one or more embodiments, the server(s) 506 may comprise a data server. The server(s) 506 can also comprise a communication server or a web-hosting server. Regardless, the server(s) 506 can be configured to receive a wide range of electronic documents or communications, including, but not limited to, text messages, instant messages, social networking messages, social networking posts, emails, tags, comments, and any other form of electronic communications or electronic documents. Additional details regarding the server(s) 506 will be discussed below in relation to FIGS. 8-9.

Although FIG. 5 illustrates three client devices 502a-502n, it will be appreciated that the client devices 502a-502n can represent any number of computing devices (fewer or greater than shown). Similarly, although FIG. 5 illustrates a particular arrangement of the client devices 502a-502n, the network 504, and the server(s) 506, various additional arrangements are possible.

In addition to the elements of the environment 500, one or more users can be associated with each of the client devices 502a-502n. For example, users may be individuals (i.e., human users). The environment 500 can include a single user or a large number of users, with each of the users interacting with the digital identification system 400 through a corresponding number of computing devices.
For example, a user can interact with the client device 502a for the purpose of composing and sending an electronic communication (e.g., an instant message). The user may interact with the client device 502a by way of a user interface on the client device 502a. For example, the user can utilize the user interface to cause the client device 502a to create and send an electronic communication to one or more of the plurality of users of the digital identification system 400.

By way of an additional example, in one or more embodiments the client device 502a sends (via the client application 414) a request to the server(s) 506 for a digital visual code corresponding to a particular action (e.g., inviting another user to a digital event). The server(s) 506 can determine an identifier corresponding to a user of the client device 502a and generate (e.g., via the digital visual code engine 424) a digital visual code corresponding to the identifier and the particular action. Specifically, the server(s) 506 encode an identifier and an action identifier in a digital visual code with a plurality of digital visual code points arranged in concentric circles. Moreover, the server(s) 506 provide the digital visual code to the client device 502a.

The client device 502a can display the digital visual code, and the client device 502b can scan (e.g., via the scanning device 416) the digital visual code from the client device 502a. The client device 502b can decode (e.g., via the client application 414) the digital visual code and identify the identifier and the action identifier embedded in the digital visual code. The client device 502b can send the identifier and the action identifier to the server(s) 506. The server(s) 506 can identify (e.g., via the identification facility 426) an account and an action based on the identifier and the action identifier. In response, the server(s) 506 can provide the client device 502b with one or more privileges (e.g., via the authentication facility 428). In particular, the server(s) 506 can provide privileges that enable the second client device to perform the particular action (e.g., invite the user to the digital event). Notably, the user of the client device 502a and the user of the client device 502b need not exchange contact information or search through lists of contacts to perform the particular action or obtain privileges.

As illustrated by the previous example embodiment, the digital identification system 400 may be implemented in whole, or in part, by the individual elements 502a-506 of the environment 500. Although the previous example described certain components of the digital identification system 400 implemented with regard to certain components of the environment 500, it will be appreciated that components of the digital identification system 400 can be implemented in any of the components of the environment 500.
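The exchange just described can be summarized in a minimal, purely illustrative sketch of the request, issue, scan, and grant sequence; the string format and function names are assumptions for illustration, not the system's actual protocol (a real code would be the concentric-circle array rather than a plain string).

```python
def server_issue_code(user_identifier, action_identifier):
    # The server(s) 506 embed both identifiers in a code; collapsed here
    # to a plain string for brevity.
    return f"{user_identifier}:{action_identifier}"

def scan_and_decode(code):
    # Client device 502b scans the code displayed by 502a and recovers
    # both identifiers.
    user_identifier, action_identifier = code.split(":")
    return user_identifier, action_identifier

def server_grant(user_identifier, action_identifier):
    # The server identifies the account and permits the requested action.
    return {"account": user_identifier, "permitted_action": action_identifier}

code = server_issue_code("user-502a", "invite-to-event")  # displayed on 502a
print(server_grant(*scan_and_decode(code)))
```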
FIGS. 1A-5, the corresponding text, and the examples provide a number of different systems and devices for generating and utilizing digital visual codes. In addition to the foregoing, embodiments can also be described in terms of flowcharts comprising acts and steps in a method for accomplishing a particular result. For example, FIGS. 6-7 illustrate flowcharts of exemplary methods in accordance with one or more embodiments of the present invention. The methods described in relation to FIGS. 6-7 may be performed with fewer or more steps/acts, or the steps/acts may be performed in differing orders. Additionally, the steps/acts described herein may be repeated or performed in parallel with one another or in parallel with different instances of the same or similar steps/acts.

FIG. 6 illustrates a flowchart of a series of acts in a method 600 of utilizing digital visual codes in accordance with one or more embodiments of the present invention. In one or more embodiments, the method 600 is performed in a digital medium environment that includes the digital identification system 400. The method 600 is intended to be illustrative of one or more methods in accordance with the present disclosure, and is not intended to limit potential embodiments. Alternative embodiments can include additional, fewer, or different steps than those articulated in FIG. 6.

As shown in FIG. 6, the method 600 includes an act 610 of generating a digital visual code with an embedded identifier by affirmatively marking digital visual code points in accordance with the identifier and connecting adjacent affirmative digital visual code points. In particular, the act 610 can include generating, by at least one processor, a digital visual code by embedding an identifier of an account of a first user with a networking system into a digital array comprising a plurality of digital visual code points and one or more anchor points by affirmatively marking digital visual code points from the plurality of digital visual code points in accordance with the identifier of the first user and connecting adjacent affirmative digital visual code points. For example, in one or more embodiments, the act 610 comprises generating the digital visual code points in a plurality of concentric circles. Moreover, in one or more embodiments, the act 610 comprises generating the digital array such that the plurality of concentric circles surrounds a digital media item corresponding to the first user. For instance, in one or more embodiments, the digital media item comprises a profile picture (or digital video). Furthermore, in one or more embodiments, the act 610 includes generating at least three anchor points and an orientation anchor. In addition, the act 610 can further include connecting adjacent affirmative digital visual code points within each of the concentric circles in the digital array with a curve.

Moreover, the act 610 can further comprise: identifying a user ID corresponding to the account of the first user; generating a hash based on the user ID corresponding to the account of the first user; transforming the hash to a binary code comprising a plurality of bits; and affirmatively marking the digital visual code points based on the bits of the binary code.
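The encoding chain recited in act 610 (user ID to hash to binary code to marked points) can be sketched briefly; SHA-256 and the 64-bit truncation below are assumptions, as the disclosure does not specify a particular hash function or code length.

```python
import hashlib

def user_id_to_bits(user_id, n_bits=64):
    # Hash the user ID, truncate the digest, and express it as a binary code;
    # each bit then drives whether one code point is affirmatively marked.
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    value = int.from_bytes(digest[:n_bits // 8], "big")
    return format(value, f"0{n_bits}b")

bits = user_id_to_bits("123456789")
marks = [bit == "1" for bit in bits]   # True = affirmatively marked point
print(bits, "->", sum(marks), "marked of", len(marks))
```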
In addition, as illustrated in FIG. 6, the method 600 also includes an act 620 of providing the digital visual code to a first remote client device. In particular, the act 620 can include providing the digital visual code to a first remote client device of the first user.

As illustrated in FIG. 6, the method 600 also includes an act 630 of receiving, from a second remote client device, the embedded identifier obtained by scanning the digital visual code. In particular, the act 630 can include receiving, from a second remote client device of a second user, the identifier of the first user obtained by scanning and decoding the digital visual code. In one or more embodiments, the act 630 includes receiving, from the second remote client device, a hash corresponding to the account of the first user.

As shown in FIG. 6, the method 600 also includes an act 640 of, in response to receiving the embedded identifier, identifying an account and granting one or more privileges in relation to the account. In particular, the act 640 can include, in response to receiving the identifier from the second remote client device of the second user, identifying the account of the first user with the networking system; and granting one or more privileges to the second remote client device of the second user in relation to the account of the first user with the networking system. For example, in one or more embodiments, granting the one or more privileges comprises: providing information from the account of the first user; initiating a payment transaction between the first user and the second user; initiating an electronic communication between the first user and the second user; or sending an invitation for an event corresponding to the first user to the second user.

In addition, in one or more embodiments, the method 600 also includes embedding an action identifier corresponding to one or more actions into the digital visual code by marking additional digital visual code points in accordance with the action identifier; and receiving the action identifier from the second remote client device. Further, the method 600 can also include marking additional digital visual code points corresponding to the action identifier, wherein the one or more actions comprise at least one of: initiating a payment transaction between the first user and the second user, initiating an electronic communication between the first user and the second user, or sending an invitation for an event. Moreover, granting the one or more privileges to the second remote client device of the second user (from act 640) can further include permitting the second remote client device to perform the one or more actions.

In addition, FIG. 7 illustrates a flowchart of another series of acts in a method 700 of utilizing digital visual codes. As shown in FIG. 7, the method 700 includes an act 710 of scanning a digital visual code displayed by a second computing device, wherein the digital visual code comprises affirmatively marked digital visual code points, wherein adjacent affirmatively marked digital visual code points are connected. In particular, the act 710 can include scanning, by a first computing device of a first user, a digital visual code displayed by a second computing device of a second user, wherein the digital visual code comprises a plurality of affirmatively marked digital visual code points and one or more anchor points, wherein adjacent affirmatively marked digital visual code points from the plurality of affirmatively marked digital visual code points are connected. For example, in one or more embodiments, the one or more anchor points comprise: at least three anchor points and an orientation anchor, wherein the orientation anchor comprises a brand image.

As illustrated in FIG. 7, the method 700 also includes an act 720 of decoding the digital visual code to identify an identifier of an account. In particular, the act 720 can include decoding the digital visual code to identify an identifier of an account of the second user of the second computing device in relation to a networking system based on the one or more anchor points and the affirmatively marked digital visual code points.
For example, in one or more embodiments, the act 720 includes capturing an image of the digital visual code; and orienting the digital visual code within the image based on the at least three anchor points and the orientation anchor. Moreover, the act 720 can also include generating a binary code corresponding to the affirmatively marked digital visual code points; and generating the identifier from the binary code.

In addition, as illustrated in FIG. 7, the method 700 also includes an act 730 of, in response to sending the identifier to a remote server, obtaining a privilege in relation to the account. In particular, the act 730 can include, in response to sending the identifier to a remote server, obtaining a privilege in relation to the account of the second user and the networking system. For example, in one or more embodiments, the act 730 includes obtaining information from the account of the second user; initiating a payment transaction between the first user and the second user; initiating an electronic communication between the first user and the second user; or sending an invitation for an event corresponding to the second user.

Further, in one or more embodiments, the method 700 includes identifying an action identifier corresponding to an action embedded in the digital visual code; and sending the action identifier embedded in the digital visual code to the remote server. In addition, in one or more embodiments, obtaining the privilege in relation to the account of the second user and the networking system comprises performing the action corresponding to the action identifier.

FIG. 8 illustrates, in block diagram form, an exemplary computing device 800 that may be configured to perform one or more of the processes described above. One will appreciate that the first client device 202, the second client device 204, the server device(s) 206, the first computing device 300, the second computing device 320, the first client device 402, the server(s) 404, the second client device 406, the client devices 502a-502n, and the server(s) 506 each comprise one or more computing devices in accordance with implementations of computing device 800. As shown by FIG. 8, the computing device 800 can comprise a processor 802, a memory 804, a storage device 806, an I/O interface 808, and a communication interface 810, which may be communicatively coupled by way of communication infrastructure 812. While an exemplary computing device 800 is shown in FIG. 8, the components illustrated in FIG. 8 are not intended to be limiting. Additional or alternative components may be used in other embodiments. Furthermore, in certain embodiments, a computing device 800 can include fewer components than those shown in FIG. 8. Components of computing device 800 shown in FIG. 8 will now be described in additional detail.

In particular embodiments, processor 802 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 802 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 804, or storage device 806 and decode and execute them. In particular embodiments, processor 802 may include one or more internal caches for data, instructions, or addresses. As an example and not by way of limitation, processor 802 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs).
Instructions in the instruction caches may be copies of instructions in memory 804 or storage device 806.

Memory 804 may be used for storing data, metadata, and programs for execution by the processor(s). Memory 804 may include one or more of volatile and non-volatile memories, such as Random Access Memory (“RAM”), Read Only Memory (“ROM”), a solid state disk (“SSD”), Flash, Phase Change Memory (“PCM”), or other types of data storage. Memory 804 may be internal or distributed memory.

Storage device 806 includes storage for storing data or instructions. As an example and not by way of limitation, storage device 806 can comprise a non-transitory storage medium described above. Storage device 806 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Storage device 806 may include removable or non-removable (or fixed) media, where appropriate. Storage device 806 may be internal or external to the computing device 800. In particular embodiments, storage device 806 is non-volatile, solid-state memory. In other embodiments, storage device 806 includes read-only memory (ROM). Where appropriate, this ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.

I/O interface 808 allows a user to provide input to, receive output from, and otherwise transfer data to and receive data from computing device 800. I/O interface 808 may include a mouse, a keypad or a keyboard, a touch screen, a camera, an optical scanner, a network interface, a modem, other known I/O devices, or a combination of such I/O interfaces. I/O interface 808 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O interface 808 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.

Communication interface 810 can include hardware, software, or both. In any event, communication interface 810 can provide one or more interfaces for communication (such as, for example, packet-based communication) between computing device 800 and one or more other computing devices or networks. As an example and not by way of limitation, communication interface 810 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network, or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. Additionally or alternatively, communication interface 810 may facilitate communications with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet, or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless.
As an example, communication interface 810 may facilitate communications with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or another suitable wireless network, or a combination thereof.

Communication infrastructure 812 may include hardware, software, or both that couples components of computing device 800 to each other. As an example and not by way of limitation, communication infrastructure 812 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus, or a combination thereof.

As mentioned above, the digital identification system 400 may be linked to and/or implemented within a social networking system. A social networking system may enable its users (such as persons or organizations) to interact with the system and with each other. The social networking system may, with input from a user, create and store in the social networking system a user profile associated with the user. The user profile may include demographic information, communication-channel information, and information on personal interests of the user. The social networking system may also, with input from a user, create and store a record of relationships of the user with other users of the social networking system, as well as provide services (e.g., wall posts, photo-sharing, event organization, messaging, games, or advertisements) to facilitate social interaction between or among users.

The social networking system may store records of users and relationships between users in a social graph comprising a plurality of nodes and a plurality of edges connecting the nodes. The nodes may comprise a plurality of user nodes and a plurality of concept nodes. A user node of the social graph may correspond to a user of the social networking system. A user may be an individual (human user), an entity (e.g., an enterprise, business, or third-party application), or a group (e.g., of individuals or entities). A user node corresponding to a user may comprise information provided by the user and information gathered by various systems, including the social networking system. For example, the user may provide his or her name, profile picture, city of residence, contact information, birth date, gender, marital status, family status, employment, educational background, preferences, interests, and other demographic information to be included in the user node.

Each user node of the social graph may have a corresponding web page (typically known as a profile page). In response to a request including a user name, the social networking system can access a user node corresponding to the user name, and construct a profile page including the name, a profile picture, and other information associated with the user.
A profile page of a first user may display to a second user all or a portion of the first user's information based on one or more privacy settings by the first user and the relationship between the first user and the second user.

A concept node may correspond to a concept of the social networking system. For example, a concept can represent a real-world entity, such as a movie, a song, a sports team, a celebrity, a group, a restaurant, or a place or a location. An administrative user of a concept node corresponding to a concept may create or update the concept node by providing information of the concept (e.g., by filling out an online form), causing the social networking system to associate the information with the concept node. For example and without limitation, information associated with a concept can include a name or a title, one or more images (e.g., an image of the cover page of a book), a website (e.g., a URL address), or contact information (e.g., a phone number, an email address). Each concept node of the social graph may correspond to a web page. For example, in response to a request including a name, the social networking system can access a concept node corresponding to the name, and construct a web page including the name and other information associated with the concept.

An edge between a pair of nodes may represent a relationship between the pair of nodes. For example, an edge between two user nodes can represent a friendship between two users. As another example, the social networking system may construct a web page (or a structured document) of a concept node (e.g., a restaurant, a celebrity), incorporating one or more selectable buttons (e.g., “like”, “check in”) in the web page. A user can access the page using a web browser hosted by the user's client device and select a selectable button, causing the client device to transmit to the social networking system a request to create an edge between a user node of the user and a concept node of the concept, indicating a relationship between the user and the concept (e.g., the user checks in to a restaurant, or the user “likes” a celebrity). As an example, a user may provide (or change) his or her city of residence, causing the social networking system to create an edge between a user node corresponding to the user and a concept node corresponding to the city declared by the user as his or her city of residence.

In addition, the degree of separation between any two nodes is defined as the minimum number of hops required to traverse the social graph from one node to the other. A degree of separation between two nodes can be considered a measure of relatedness between the users or the concepts represented by the two nodes in the social graph. For example, two users having user nodes that are directly connected by an edge (i.e., are first-degree nodes) may be described as “connected users” or “friends.” Similarly, two users having user nodes that are connected only through another user node (i.e., are second-degree nodes) may be described as “friends of friends.”
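Because the degree of separation is defined as the minimum number of hops between two nodes, it can be computed with a breadth-first search; the following sketch, with a small illustrative graph, makes that concrete (the function name and graph are assumptions for illustration).

```python
from collections import deque

def degree_of_separation(edge_list, start, goal):
    # Minimum number of hops between two nodes, or None if unconnected.
    neighbors = {}
    for a, b in edge_list:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, hops = queue.popleft()
        if node == goal:
            return hops
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None

edge_list = [("A", "B"), ("B", "C")]               # friendships: A-B, B-C
print(degree_of_separation(edge_list, "A", "B"))   # 1 -> "friends"
print(degree_of_separation(edge_list, "A", "C"))   # 2 -> "friends of friends"
```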
A social networking system may support a variety of applications, such as photo sharing, on-line calendars and events, gaming, instant messaging, and advertising. For example, the social networking system may also include media sharing capabilities. Also, the social networking system may allow users to post photographs and other multimedia files to a user's profile page (typically known as “wall posts” or “timeline posts”) or in a photo album, both of which may be accessible to other users of the social networking system depending upon the user's configured privacy settings. The social networking system may also allow users to configure events. For example, a first user may configure an event with attributes including time and date of the event, location of the event, and other users invited to the event. The invited users may receive invitations to the event and respond (such as by accepting the invitation or declining it). Furthermore, the social networking system may allow users to maintain a personal calendar. Similarly to events, the calendar entries may include times, dates, locations, and identities of other users.

FIG. 9 illustrates an example network environment of a social networking system. In particular embodiments, a social networking system 902 may comprise one or more data stores. In particular embodiments, the social networking system 902 may store a social graph comprising user nodes, concept nodes, and edges between nodes as described earlier. Each user node may comprise one or more data objects corresponding to information associated with or describing a user. Each concept node may comprise one or more data objects corresponding to information associated with a concept. Each edge between a pair of nodes may comprise one or more data objects corresponding to information associated with a relationship between users (or between a user and a concept, or between concepts) corresponding to the pair of nodes. In particular embodiments, the social networking system 902 may comprise one or more computing devices (e.g., servers) hosting functionality directed to operation of the social networking system 902.

A user of the social networking system 902 may access the social networking system 902 using a client device such as client device 906. In particular embodiments, the client device 906 can interact with the social networking system 902 through a network 904. The client device 906 may be a desktop computer, a laptop computer, a tablet computer, a personal digital assistant (PDA), an in- or out-of-car navigation system, a smart phone or other cellular or mobile phone, a mobile gaming device, another mobile device, or other suitable computing devices. Client device 906 may execute one or more client applications, such as a web browser (e.g., Microsoft Windows Internet Explorer, Mozilla Firefox, Apple Safari, Google Chrome, Opera, etc.) or a native or special-purpose client application (e.g., Facebook for iPhone or iPad, Facebook for Android, etc.), to access and view content over network 904.

Network 904 may represent a network or collection of networks (such as the Internet, a corporate intranet, a virtual private network (VPN), a local area network (LAN), a wireless local area network (WLAN), a cellular network, a wide area network (WAN), a metropolitan area network (MAN), or a combination of two or more such networks) over which client devices 906 may access the social networking system 902.

While these methods, systems, and user interfaces utilize both publicly available information as well as information provided by users of the social networking system, all use of such information is to be explicitly subject to all privacy settings of the involved users and the privacy policy of the social networking system as a whole.
FIG. 10 illustrates example social graph 1000. In particular embodiments, social networking system 902 may store one or more social graphs 1000 in one or more data stores. In particular embodiments, social graph 1000 may include multiple nodes—which may include multiple user nodes 1002 or multiple concept nodes 1004—and multiple edges 1006 connecting the nodes. Example social graph 1000 illustrated in FIG. 10 is shown, for didactic purposes, in a two-dimensional visual map representation. In particular embodiments, a social networking system 902, client device 906, or third-party system 908 may access social graph 1000 and related social-graph information for suitable applications. The nodes and edges of social graph 1000 may be stored as data objects, for example, in a data store (such as a social-graph database). Such a data store may include one or more searchable or queryable indexes of nodes or edges of social graph 1000.

In particular embodiments, a user node 1002 may correspond to a user of social networking system 902. As an example and not by way of limitation, a user may be an individual (human user), an entity (e.g., an enterprise, business, or third-party application), or a group (e.g., of individuals or entities) that interacts or communicates with or over social networking system 902. In particular embodiments, when a user registers for an account with social networking system 902, social networking system 902 may create a user node 1002 corresponding to the user, and store the user node 1002 in one or more data stores. Users and user nodes 1002 described herein may, where appropriate, refer to registered users and user nodes 1002 associated with registered users. In addition or as an alternative, users and user nodes 1002 described herein may, where appropriate, refer to users that have not registered with social networking system 902.

In particular embodiments, a user node 1002 may be associated with information provided by a user or information gathered by various systems, including social networking system 902. As an example and not by way of limitation, a user may provide his or her name, profile picture, contact information, birth date, sex, marital status, family status, employment, education background, preferences, interests, or other demographic information. As described above, each user node may have a corresponding profile page that displays all or a portion of the user's information to other users based on the user's privacy settings.
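As a minimal illustration of storing the nodes and edges of social graph 1000 as data objects with a queryable index, consider the following sketch; the schema and key names are assumptions, not the actual data-store layout.

```python
user_nodes = {                        # user nodes 1002
    "u:A": {"name": "A"},
    "u:B": {"name": "B"},
}
concept_nodes = {                     # concept nodes 1004
    "c:restaurant": {"title": "Restaurant"},
}
edges = [                             # edges 1006, stored as data objects
    {"src": "u:A", "dst": "u:B", "kind": "friend"},
    {"src": "u:B", "dst": "c:restaurant", "kind": "like"},
]

def edges_of(node_id):
    # A simple queryable index: all edges touching a given node.
    return [e for e in edges if node_id in (e["src"], e["dst"])]

print(edges_of("u:B"))   # B's friend edge and B's "like" edge
```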
In particular embodiments, a concept node 1004 may correspond to a concept. As an example and not by way of limitation, a concept may correspond to a place (such as, for example, a movie theater, restaurant, landmark, or city); a website (such as, for example, a website associated with social networking system 902 or a third-party website associated with a web-application server); an entity (such as, for example, a person, business, group, sports team, or celebrity); a resource (such as, for example, an audio file, video file, digital photo, text file, structured document, or application) which may be located within social networking system 902 or on an external server, such as a web-application server; real or intellectual property (such as, for example, a sculpture, painting, movie, game, song, idea, photograph, or written work); a game; an activity; an idea or theory; another suitable concept; or two or more such concepts. A concept node 1004 may be associated with information of a concept provided by a user or information gathered by various systems, including social networking system 902. As an example and not by way of limitation, information of a concept may include a name or a title; one or more images (e.g., an image of the cover page of a book); a location (e.g., an address or a geographical location); a website (which may be associated with a URL); contact information (e.g., a phone number or an email address); other suitable concept information; or any suitable combination of such information. In particular embodiments, a concept node 1004 may be associated with one or more data objects corresponding to information associated with concept node 1004. In particular embodiments, a concept node 1004 may correspond to one or more webpages. In particular embodiments, a node in social graph 1000 may represent or be represented by a webpage (which may be referred to as a “profile page”). Profile pages may be hosted by or accessible to social networking system 902. Profile pages may also be hosted on third-party websites associated with a third-party server 908. As an example and not by way of limitation, a profile page corresponding to a particular external webpage may be the particular external webpage, and the profile page may correspond to a particular concept node 1004. Profile pages may be viewable by all or a selected subset of other users. As an example and not by way of limitation, a user node 1002 may have a corresponding user-profile page in which the corresponding user may add content, make declarations, or otherwise express himself or herself. As another example and not by way of limitation, a concept node 1004 may have a corresponding concept-profile page in which one or more users may add content, make declarations, or express themselves, particularly in relation to the concept corresponding to concept node 1004. In particular embodiments, a concept node 1004 may represent a third-party webpage or resource hosted by a third-party system 908. The third-party webpage or resource may include, among other elements, content, a selectable or other icon, or another interactable object (which may be implemented, for example, in JavaScript, AJAX, or PHP code) representing an action or activity. As an example and not by way of limitation, a third-party webpage may include a selectable icon such as “like,” “check in,” “eat,” “recommend,” or another suitable action or activity. 
A user viewing the third-party webpage may perform an action by selecting one of the icons (e.g., “eat”), causing a client system 906 to send to social networking system 902 a message indicating the user's action. In response to the message, social networking system 902 may create an edge (e.g., an “eat” edge) between a user node 1002 corresponding to the user and a concept node 1004 corresponding to the third-party webpage or resource and store edge 1006 in one or more data stores. In particular embodiments, a pair of nodes in social graph 1000 may be connected to each other by one or more edges 1006. An edge 1006 connecting a pair of nodes may represent a relationship between the pair of nodes. In particular embodiments, an edge 1006 may include or represent one or more data objects or attributes corresponding to the relationship between a pair of nodes. As an example and not by way of limitation, a first user may indicate that a second user is a “friend” of the first user. In response to this indication, social networking system 902 may send a “friend request” to the second user. If the second user confirms the “friend request,” social networking system 902 may create an edge 1006 connecting the first user's user node 1002 to the second user's user node 1002 in social graph 1000 and store edge 1006 as social-graph information in one or more data stores. In the example of FIG. 10, social graph 1000 includes an edge 1006 indicating a friend relation between user nodes 1002 of user “A” and user “B” and an edge indicating a friend relation between user nodes 1002 of user “C” and user “B.” Although this disclosure describes or illustrates particular edges 1006 with particular attributes connecting particular user nodes 1002, this disclosure contemplates any suitable edges 1006 with any suitable attributes connecting user nodes 1002. As an example and not by way of limitation, an edge 1006 may represent a friendship, family relationship, business or employment relationship, fan relationship, follower relationship, visitor relationship, subscriber relationship, superior/subordinate relationship, reciprocal relationship, non-reciprocal relationship, another suitable type of relationship, or two or more such relationships. Moreover, although this disclosure generally describes nodes as being connected, this disclosure also describes users or concepts as being connected. Herein, references to users or concepts being connected may, where appropriate, refer to the nodes corresponding to those users or concepts being connected in social graph 1000 by one or more edges 1006. In particular embodiments, an edge 1006 between a user node 1002 and a concept node 1004 may represent a particular action or activity performed by a user associated with user node 1002 toward a concept associated with a concept node 1004. As an example and not by way of limitation, as illustrated in FIG. 10, a user may “like,” “attended,” “played,” “listened,” “cooked,” “worked at,” or “watched” a concept, each of which may correspond to an edge type or subtype. A concept-profile page corresponding to a concept node 1004 may include, for example, a selectable “check in” icon (such as, for example, a clickable “check in” icon) or a selectable “add to favorites” icon. When a user clicks one of these icons, social networking system 902 may create a “favorite” edge or a “check in” edge in response to the corresponding action. 
As another example and not by way of limitation, a user (user “C”) may listen to a particular song (“Ramble On”) using a particular application (SPOTIFY, which is an online music application). In this case, social networking system 902 may create a “listened” edge 1006 and a “used” edge (as illustrated in FIG. 10) between user nodes 1002 corresponding to the user and concept nodes 1004 corresponding to the song and application to indicate that the user listened to the song and used the application. Moreover, social networking system 902 may create a “played” edge 1006 (as illustrated in FIG. 10) between concept nodes 1004 corresponding to the song and the application to indicate that the particular song was played by the particular application. In this case, “played” edge 1006 corresponds to an action performed by an external application (SPOTIFY) on an external audio file (the song “Ramble On”). Although this disclosure describes particular edges 1006 with particular attributes connecting user nodes 1002 and concept nodes 1004, this disclosure contemplates any suitable edges 1006 with any suitable attributes connecting user nodes 1002 and concept nodes 1004. Moreover, although this disclosure describes edges between a user node 1002 and a concept node 1004 representing a single relationship, this disclosure contemplates edges between a user node 1002 and a concept node 1004 representing one or more relationships. As an example and not by way of limitation, an edge 1006 may represent both that a user likes and has used a particular concept. Alternatively, another edge 1006 may represent each type of relationship (or multiples of a single relationship) between a user node 1002 and a concept node 1004 (as illustrated in FIG. 10 between user node 1002 for user “E” and concept node 1004 for “SPOTIFY”). In particular embodiments, social networking system 902 may create an edge 1006 between a user node 1002 and a concept node 1004 in social graph 1000. As an example and not by way of limitation, a user viewing a concept-profile page (such as, for example, by using a web browser or a special-purpose application hosted by the user's client system 906) may indicate that he or she likes the concept represented by the concept node 1004 by clicking or selecting a “Like” icon, which may cause the user's client system 906 to send to social networking system 902 a message indicating the user's liking of the concept associated with the concept-profile page. In response to the message, social networking system 902 may create an edge 1006 between user node 1002 associated with the user and concept node 1004, as illustrated by “like” edge 1006 between the user and concept node 1004. In particular embodiments, social networking system 902 may store an edge 1006 in one or more data stores. In particular embodiments, an edge 1006 may be automatically formed by social networking system 902 in response to a particular user action. As an example and not by way of limitation, if a first user uploads a picture, watches a movie, or listens to a song, an edge 1006 may be formed between user node 1002 corresponding to the first user and concept nodes 1004 corresponding to those concepts. Although this disclosure describes forming particular edges 1006 in particular manners, this disclosure contemplates forming any suitable edges 1006 in any suitable manner. 
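Continuing the hypothetical SocialGraph sketch above, the listened/used/played example and the friend-confirmation flow might translate into edge creation along these lines (all identifiers invented for illustration):

    # Hypothetical usage of the SocialGraph sketch above. A single user
    # action (listening to a song with an application) yields several edges.
    g = SocialGraph()
    g.add_user("user_c", name="C")
    g.add_concept("ramble_on", title="Ramble On")
    g.add_concept("spotify", title="SPOTIFY")

    g.add_edge("user_c", "ramble_on", "listened")   # user-to-song edge
    g.add_edge("user_c", "spotify", "used")         # user-to-application edge
    g.add_edge("spotify", "ramble_on", "played")    # application-to-song edge

    # A confirmed "friend request" likewise becomes a friend-type edge
    # between two user nodes.
    g.add_user("user_a", name="A")
    g.add_user("user_b", name="B")
    g.add_edge("user_a", "user_b", "friend")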
In particular embodiments, an advertisement may be text (which may be HTML-linked), one or more images (which may be HTML-linked), one or more videos, audio, one or more ADOBE FLASH files, a suitable combination of these, or any other suitable advertisement in any suitable digital format presented on one or more webpages, in one or more e-mails, or in connection with search results requested by a user. In addition or as an alternative, an advertisement may be one or more sponsored stories (e.g., a news-feed or ticker item on social networking system 902). A sponsored story may be a social action by a user (such as “liking” a page, “liking” or commenting on a post on a page, RSVPing to an event associated with a page, voting on a question posted on a page, checking in to a place, using an application or playing a game, or “liking” or sharing a website) that an advertiser promotes, for example, by having the social action presented within a pre-determined area of a profile page of a user or other page, presented with additional information associated with the advertiser, bumped up or otherwise highlighted within news feeds or tickers of other users, or otherwise promoted. The advertiser may pay to have the social action promoted. As an example and not by way of limitation, advertisements may be included among the search results of a search-results page, where sponsored content is promoted over non-sponsored content. In particular embodiments, an advertisement may be requested for display within social-networking-system webpages, third-party webpages, or other pages. An advertisement may be displayed in a dedicated portion of a page, such as in a banner area at the top of the page, in a column at the side of the page, in a GUI of the page, in a pop-up window, in a drop-down menu, in an input field of the page, over the top of content of the page, or elsewhere with respect to the page. In addition or as an alternative, an advertisement may be displayed within an application. An advertisement may be displayed within dedicated pages, requiring the user to interact with or watch the advertisement before the user may access a page or utilize an application. The user may, for example, view the advertisement through a web browser. A user may interact with an advertisement in any suitable manner. The user may click or otherwise select the advertisement. By selecting the advertisement, the user (or a browser or other application being used by the user) may be directed to a page associated with the advertisement. At the page associated with the advertisement, the user may take additional actions, such as purchasing a product or service associated with the advertisement, receiving information associated with the advertisement, or subscribing to a newsletter associated with the advertisement. An advertisement with audio or video may be played by selecting a component of the advertisement (like a “play button”). Alternatively, when the user selects the advertisement, social networking system 902 may execute or modify a particular action of the user. An advertisement may also include social-networking-system functionality that a user may interact with. As an example and not by way of limitation, an advertisement may enable a user to “like” or otherwise endorse the advertisement by selecting an icon or link associated with endorsement. As another example and not by way of limitation, an advertisement may enable a user to search (e.g., by executing a query) for content related to the advertiser. 
Similarly, a user may share the advertisement with another user (e.g., through social networking system 902) or RSVP (e.g., through social networking system 902) to an event associated with the advertisement. In addition or as an alternative, an advertisement may include social-networking-system context directed to the user. As an example and not by way of limitation, an advertisement may display information about a friend of the user within social networking system 902 who has taken an action associated with the subject matter of the advertisement. In particular embodiments, social networking system 902 may determine the social-graph affinity (which may be referred to herein as “affinity”) of various social-graph entities for each other. Affinity may represent the strength of a relationship or level of interest between particular objects associated with the online social network, such as users, concepts, content, actions, advertisements, other objects associated with the online social network, or any suitable combination thereof. Affinity may also be determined with respect to objects associated with third-party systems 908 or other suitable systems. An overall affinity for a social-graph entity for each user, subject matter, or type of content may be established. The overall affinity may change based on continued monitoring of the actions or relationships associated with the social-graph entity. Although this disclosure describes determining particular affinities in a particular manner, this disclosure contemplates determining any suitable affinities in any suitable manner. In particular embodiments, social networking system 902 may measure or quantify social-graph affinity using an affinity coefficient (which may be referred to herein as “coefficient”). The coefficient may represent or quantify the strength of a relationship between particular objects associated with the online social network. The coefficient may also represent a probability or function that measures a predicted probability that a user will perform a particular action based on the user's interest in the action. In this way, a user's future actions may be predicted based on the user's prior actions, where the coefficient may be calculated based at least in part on the history of the user's actions. Coefficients may be used to predict any number of actions, which may be within or outside of the online social network. As an example and not by way of limitation, these actions may include various types of communications, such as sending messages, posting content, or commenting on content; various types of observation actions, such as accessing or viewing profile pages, media, or other suitable content; various types of coincidence information about two or more social-graph entities, such as being in the same group, tagged in the same photograph, checked in at the same location, or attending the same event; or other suitable actions. Although this disclosure describes measuring affinity in a particular manner, this disclosure contemplates measuring affinity in any suitable manner. In particular embodiments, social networking system 902 may use a variety of factors to calculate a coefficient. These factors may include, for example, user actions, types of relationships between objects, location information, other suitable factors, or any combination thereof. In particular embodiments, different factors may be weighted differently when calculating the coefficient. 
The weights for each factor may be static or the weights may change according to, for example, the user, the type of relationship, the type of action, the user's location, and so forth. Ratings for the factors may be combined according to their weights to determine an overall coefficient for the user. As an example and not by way of limitation, particular user actions may be assigned both a rating and a weight while a relationship associated with the particular user action is assigned a rating and a correlating weight (e.g., so the weights total 100%). To calculate the coefficient of a user towards a particular object, the rating assigned to the user's actions may comprise, for example, 60% of the overall coefficient, while the relationship between the user and the object may comprise 40% of the overall coefficient. In particular embodiments, the social networking system 902 may consider a variety of variables when determining weights for various factors used to calculate a coefficient, such as, for example, the time since information was accessed, decay factors, frequency of access, relationship to information or relationship to the object about which information was accessed, relationship to social-graph entities connected to the object, short- or long-term averages of user actions, user feedback, other suitable variables, or any combination thereof. As an example and not by way of limitation, a coefficient may include a decay factor that causes the strength of the signal provided by particular actions to decay with time, such that more recent actions are more relevant when calculating the coefficient. The ratings and weights may be continuously updated based on continued tracking of the actions upon which the coefficient is based. Any type of process or algorithm may be employed for assigning, combining, or averaging the ratings for each factor and the weights assigned to the factors. In particular embodiments, social networking system 902 may determine coefficients using machine-learning algorithms trained on historical actions and past user responses, or data farmed from users by exposing them to various options and measuring responses. Although this disclosure describes calculating coefficients in a particular manner, this disclosure contemplates calculating coefficients in any suitable manner. In particular embodiments, social networking system 902 may calculate a coefficient based on a user's actions. Social networking system 902 may monitor such actions on the online social network, on a third-party system 908, on other suitable systems, or any combination thereof. Any suitable type of user actions may be tracked or monitored. Typical user actions include viewing profile pages, creating or posting content, interacting with content, joining groups, listing and confirming attendance at events, checking in at locations, liking particular pages, creating pages, and performing other tasks that facilitate social action. In particular embodiments, social networking system 902 may calculate a coefficient based on the user's actions with particular types of content. The content may be associated with the online social network, a third-party system 908, or another suitable system. The content may include users, profile pages, posts, news stories, headlines, instant messages, chat room conversations, emails, advertisements, pictures, video, music, other suitable objects, or any combination thereof. 
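As a concrete reading of the weighting and decay described above, the following sketch combines time-decayed action ratings with a relationship rating. The 60/40 split mirrors the example in the text (weights totaling 100%); the half-life, rating scale, and function names are assumptions introduced here for illustration, not the system's actual algorithm.

    import math
    import time

    # Hypothetical affinity-coefficient sketch: weighted factors with an
    # exponential decay so that more recent actions count for more.

    def decayed_rating(rating: float, action_time: float, now: float,
                       half_life: float = 30 * 86400) -> float:
        # Decay an action's rating with age; the 30-day half-life
        # (in seconds) is an invented parameter.
        age = now - action_time
        return rating * math.exp(-math.log(2) * age / half_life)

    def affinity_coefficient(actions: list[tuple[float, float]],
                             relationship_rating: float,
                             action_weight: float = 0.6,
                             relationship_weight: float = 0.4) -> float:
        # Combine ratings by their weights (here totaling 100%): 60% from
        # the user's actions and 40% from the user-object relationship,
        # as in the example above. Each action is a (rating, timestamp) pair.
        now = time.time()
        if actions:
            action_score = sum(decayed_rating(r, t, now)
                               for r, t in actions) / len(actions)
        else:
            action_score = 0.0
        return action_weight * action_score + relationship_weight * relationship_rating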
Social networking system 902 may analyze a user's actions to determine whether one or more of the actions indicate an affinity for subject matter, content, other users, and so forth. As an example and not by way of limitation, if a user frequently posts content related to “coffee” or variants thereof, social networking system 902 may determine the user has a high coefficient with respect to the concept “coffee.” Particular actions or types of actions may be assigned a higher weight and/or rating than other actions, which may affect the overall calculated coefficient. As an example and not by way of limitation, if a first user emails a second user, the weight or the rating for the action may be higher than if the first user simply views the user-profile page for the second user. In particular embodiments, social networking system 902 may calculate a coefficient based on the type of relationship between particular objects. Referencing the social graph 1000, social networking system 902 may analyze the number and/or type of edges 1006 connecting particular user nodes 1002 and concept nodes 1004 when calculating a coefficient. As an example and not by way of limitation, user nodes 1002 that are connected by a spouse-type edge (representing that the two users are married) may be assigned a higher coefficient than user nodes 1002 that are connected by a friend-type edge. In other words, depending upon the weights assigned to the actions and relationships for the particular user, the overall affinity may be determined to be higher for content about the user's spouse than for content about the user's friend. In particular embodiments, the relationships a user has with another object may affect the weights and/or the ratings of the user's actions with respect to calculating the coefficient for that object. As an example and not by way of limitation, if a user is tagged in a first photo, but merely likes a second photo, social networking system 902 may determine that the user has a higher coefficient with respect to the first photo than the second photo because having a tagged-in-type relationship with content may be assigned a higher weight and/or rating than having a like-type relationship with content. In particular embodiments, social networking system 902 may calculate a coefficient for a first user based on the relationship one or more second users have with a particular object. In other words, the connections and coefficients other users have with an object may affect the first user's coefficient for the object. As an example and not by way of limitation, if a first user is connected to or has a high coefficient for one or more second users, and those second users are connected to or have a high coefficient for a particular object, social networking system 902 may determine that the first user should also have a relatively high coefficient for the particular object. In particular embodiments, the coefficient may be based on the degree of separation between particular objects. Degree of separation between any two nodes is defined as the minimum number of hops required to traverse the social graph from one node to the other. A degree of separation between two nodes can be considered a measure of relatedness between the users or the concepts represented by the two nodes in the social graph. For example, two users having user nodes that are directly connected by an edge (i.e., are first-degree nodes) may be described as “connected users” or “friends.” Similarly, two users having user nodes that are connected only through another user node (i.e., are second-degree nodes) may be described as “friends of friends.” 
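Degree of separation as defined above is a shortest-path length, so a breadth-first search over the hypothetical graph sketch from earlier suffices to compute it. This is illustrative only; the function name and the undirected treatment of edges are assumptions.

    from collections import deque

    def degree_of_separation(g: SocialGraph, start: str, goal: str):
        # Minimum number of hops between two nodes in the social graph,
        # treating edges as undirected. Returns None if unconnected.
        neighbors: dict[str, set[str]] = {}
        for e in g.edges:
            neighbors.setdefault(e.source, set()).add(e.target)
            neighbors.setdefault(e.target, set()).add(e.source)
        queue = deque([(start, 0)])
        seen = {start}
        while queue:
            node, hops = queue.popleft()
            if node == goal:
                return hops      # 1 => friends; 2 => friends of friends
            for nxt in neighbors.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
        return None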
A lower coefficient may represent the decreasing likelihood that the first user will share an interest in content objects of a user that is indirectly connected to the first user in the social graph 1000. As an example and not by way of limitation, social-graph entities that are closer in the social graph 1000 (i.e., fewer degrees of separation) may have a higher coefficient than entities that are further apart in the social graph 1000. In particular embodiments, social networking system 902 may calculate a coefficient based on location information. Objects that are geographically closer to each other may be considered to be more related, or of more interest, to each other than more distant objects. In particular embodiments, the coefficient of a user towards a particular object may be based on the proximity of the object's location to a current location associated with the user (or the location of a client system 906 of the user). A first user may be more interested in other users or concepts that are closer to the first user. As an example and not by way of limitation, if a user is one mile from an airport and two miles from a gas station, social networking system 902 may determine that the user has a higher coefficient for the airport than the gas station based on the proximity of the airport to the user. In particular embodiments, social networking system 902 may perform particular actions with respect to a user based on coefficient information. Coefficients may be used to predict whether a user will perform a particular action based on the user's interest in the action. A coefficient may be used when generating or presenting any type of objects to a user, such as advertisements, search results, news stories, media, messages, notifications, or other suitable objects. The coefficient may also be utilized to rank and order such objects, as appropriate. In this way, social networking system 902 may provide information that is relevant to the user's interests and current circumstances, increasing the likelihood that the user will find such information of interest. In particular embodiments, social networking system 902 may generate content based on coefficient information. Content objects may be provided or selected based on coefficients specific to a user. As an example and not by way of limitation, the coefficient may be used to generate media for the user, where the user may be presented with media for which the user has a high overall coefficient with respect to the media object. As another example and not by way of limitation, the coefficient may be used to generate advertisements for the user, where the user may be presented with advertisements for which the user has a high overall coefficient with respect to the advertised object. In particular embodiments, social networking system 902 may generate search results based on coefficient information. Search results for a particular user may be scored or ranked based on the coefficient associated with the search results with respect to the querying user. 
As an example and not by way of limitation, search results corresponding to objects with higher coefficients may be ranked higher on a search-results page than results corresponding to objects having lower coefficients. In particular embodiments, social networking system 902 may calculate a coefficient in response to a request for a coefficient from a particular system or process. To predict the likely actions a user may take (or may be the subject of) in a given situation, any process may request a calculated coefficient for a user. The request may also include a set of weights to use for various factors used to calculate the coefficient. This request may come from a process running on the online social network, from a third-party system 908 (e.g., via an API or other communication channel), or from another suitable system. In response to the request, social networking system 902 may calculate the coefficient (or access the coefficient information if it has previously been calculated and stored). In particular embodiments, social networking system 902 may measure an affinity with respect to a particular process. Different processes (both internal and external to the online social network) may request a coefficient for a particular object or set of objects. Social networking system 902 may provide a measure of affinity that is relevant to the particular process that requested the measure of affinity. In this way, each process receives a measure of affinity that is tailored for the different context in which the process will use the measure of affinity. In connection with social-graph affinity and affinity coefficients, particular embodiments may utilize one or more systems, components, elements, functions, methods, operations, or steps disclosed in U.S. patent application Ser. No. 11/503,093, filed Aug. 8, 2006, U.S. patent application Ser. No. 12/977,027, filed Dec. 22, 2010, U.S. patent application Ser. No. 12/978,265, filed Dec. 23, 2010, and U.S. patent application Ser. No. 13/632,869, filed Oct. 1, 2012, each of which is incorporated by reference in its entirety. In particular embodiments, one or more of the content objects of the online social network may be associated with a privacy setting. The privacy settings (or “access settings”) for an object may be stored in any suitable manner, such as, for example, in association with the object, in an index on an authorization server, in another suitable manner, or any combination thereof. A privacy setting of an object may specify how the object (or particular information associated with an object) can be accessed (e.g., viewed or shared) using the online social network. Where the privacy settings for an object allow a particular user to access that object, the object may be described as being “visible” with respect to that user. As an example and not by way of limitation, a user of the online social network may specify privacy settings for a user-profile page that identify a set of users that may access the work experience information on the user-profile page, thus excluding other users from accessing the information. In particular embodiments, the privacy settings may specify a “blocked list” of users that should not be allowed to access certain information associated with the object. In other words, the blocked list may specify one or more users or entities for which an object is not visible. 
As an example and not by way of limitation, a user may specify a set of users that may not access photo albums associated with the user, thus excluding those users from accessing the photo albums (while also possibly allowing certain users not within the set of users to access the photo albums). In particular embodiments, privacy settings may be associated with particular social-graph elements. Privacy settings of a social-graph element, such as a node or an edge, may specify how the social-graph element, information associated with the social-graph element, or content objects associated with the social-graph element can be accessed using the online social network. As an example and not by way of limitation, a particular concept node 1004 corresponding to a particular photo may have a privacy setting specifying that the photo may only be accessed by users tagged in the photo and their friends. In particular embodiments, privacy settings may allow users to opt in or opt out of having their actions logged by social networking system 902 or shared with other systems (e.g., third-party system 908). In particular embodiments, the privacy settings associated with an object may specify any suitable granularity of permitted access or denial of access. As an example and not by way of limitation, access or denial of access may be specified for particular users (e.g., only me, my roommates, and my boss), users within a particular degree of separation (e.g., friends, or friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of a particular university), all users (“public”), no users (“private”), users of third-party systems 908, particular applications (e.g., third-party applications, external websites), other suitable users or entities, or any combination thereof. Although this disclosure describes using particular privacy settings in a particular manner, this disclosure contemplates using any suitable privacy settings in any suitable manner. In particular embodiments, one or more servers may be authorization/privacy servers for enforcing privacy settings. In response to a request from a user (or other entity) for a particular object stored in a data store, social networking system 902 may send a request to the data store for the object. The request may identify the user associated with the request, and the object may be sent to the user (or a client system 906 of the user) only if the authorization server determines that the user is authorized to access the object based on the privacy settings associated with the object. If the requesting user is not authorized to access the object, the authorization server may prevent the requested object from being retrieved from the data store, or may prevent the requested object from being sent to the user. In the search query context, an object may only be generated as a search result if the querying user is authorized to access the object. In other words, the object must have a visibility that is visible to the querying user. If the object has a visibility that is not visible to the user, the object may be excluded from the search results. Although this disclosure describes enforcing privacy settings in a particular manner, this disclosure contemplates enforcing privacy settings in any suitable manner. 
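The authorization flow described above amounts to a visibility predicate applied before an object is returned. A minimal sketch follows, with invented field names (“blocked”, “audience”) and function names; it is not the system's actual enforcement mechanism.

    # Hypothetical privacy-setting check: honor the blocked list first,
    # then the permitted audience, per the description above.

    def is_visible(privacy: dict, viewer: str) -> bool:
        if viewer in privacy.get("blocked", set()):
            return False                       # blocked list overrides everything
        audience = privacy.get("audience", "private")
        if audience == "public":
            return True                        # visible to all users
        if audience == "private":
            return False                       # visible to no other users
        return viewer in audience              # explicit set of permitted user ids

    def filter_search_results(results: list[dict], viewer: str) -> list[dict]:
        # In the search-query context, exclude any object whose visibility
        # does not include the querying user.
        return [obj for obj in results if is_visible(obj["privacy"], viewer)]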
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. Various embodiments and aspects of the invention(s) are described with reference to details discussed herein, and the accompanying drawings illustrate the various embodiments. The description above and drawings are illustrative of the invention and are not to be construed as limiting the invention. Numerous specific details are described to provide a thorough understanding of various embodiments of the present invention. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. For example, the methods described herein may be performed with fewer or more steps/acts, or the steps/acts may be performed in differing orders. Additionally, the steps/acts described herein may be repeated or performed in parallel with one another or in parallel with different instances of the same or similar steps/acts. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope. 16841445 meta platforms, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 20th, 2022 02:25PM Apr 20th, 2022 02:25PM Facebook Technology Software & Computer Services
