IBM

- NYSE:IBM
Last Updated 2024-04-15

Patent Grants Data

Patents granted to organizations.
Ticker Symbol Entity Name Publication Date Filing Date Patent ID Invention Title Abstract Patent Number Claims Number of Claims Description Application Number Assignee Country Kind Code Kind Code Description URL Classification Code Length of Grant Date Added Date Updated Company Name Sector Industry
nyse:ibm IBM Apr 26th, 2022 12:00AM Jul 20th, 2016 12:00AM https://www.uspto.gov?id=US11316896-20220426 Privacy-preserving user-experience monitoring A method of operating a mobile device includes displaying a user interface as an image, the user interface being composed of a plurality of widgets, storing a privacy policy identifying at least one of the widgets, capturing a screenshot image corresponding to the displayed image, excluding the at least one of the widgets from the screenshot image to create a modified screenshot image, and transmitting the modified screenshot image over a network to a monitoring server. 11316896 1. A method of operating a computing device comprising: displaying a user interface on a display of said computing device, said user interface being composed of a plurality of widgets; storing a privacy policy identifying at least one of said widgets; detecting that a screen capture has been triggered; changing said user interface displayed by excluding said at least one of said widgets from said user interface in response to detecting that said screen capture has been triggered and according to said privacy policy; capturing a screenshot image of a changed user interface in which said at least one of said widgets is excluded from being displayed on said display of said computing device; and transmitting said screenshot image over a network to a monitoring server. 2. The method of claim 1, further comprising running an application in said computing device, said application including program instructions causing said computing device to display said user interface, detect that said screen capture has been triggered, display said changed user interface excluding said at least one of said widgets, capture said screenshot image of said changed user interface and transmit said screenshot image. 3. The method of claim 1, wherein excluding said at least one of said widgets comprises at least one of omitting a portion of said user interface corresponding to said at least one of said widgets at a time of said capture of said screenshot image, occluding a portion of said user interface corresponding to said at least one of said widgets at said time of said capture of said screenshot image, and removing a portion of said user interface corresponding to said at least one of said widgets at said time of said capture of said screenshot image. 4. The method of claim 1, further comprising receiving data from a server causing said computing device to perform one of updating said privacy policy, modifying said privacy policy, and replacing said privacy policy. 5. The method of claim 1, further comprising recording metadata about said at least one of said widgets contemporaneously with said capture of said screenshot image. 6. The method of claim 1, further comprising: receiving a selection of a widget not identified in said privacy policy; and updating said privacy policy to include said widget not identified in said privacy policy. 7. 
A non-transitory computer program product comprising a computer readable storage medium having program instructions embodied therewith, said program instructions executable by a processor to cause the processor to: display a user interface on a display of a computing device, said user interface being composed of a plurality of widgets; store a privacy policy identifying at least one of said widgets; detect that a screen capture has been triggered; change said user interface displayed by excluding said at least one of said widgets from said user interface in response to detecting that said screen capture has been triggered and according to said privacy policy; capture a screenshot image of a changed user interface in which said at least one of said widgets is excluded from being displayed on said display of said computing device; and transmit said screenshot image over a network to a monitoring server. 8. The non-transitory computer program product of claim 7, wherein said program instructions executable by said processor to cause said processor to exclude said at least one of said widgets from said screenshot image further comprises program instructions executable by said processor to cause said processor to omit from said displayed user interface a portion of said user interface corresponding to said at least one of said widgets at a time of said capture of said screenshot image. 9. The non-transitory computer program product of claim 7, wherein said program instructions executable by said processor to cause said processor to exclude said at least one of said widgets from said screenshot image further comprises program instructions executable by said processor to cause said processor to occlude in said displayed user interface a portion of said user interface corresponding to said at least one of said widgets at a time of said capture of said screenshot image. 10. The non-transitory computer program product of claim 7, wherein said program instructions executable by said processor to cause said processor to exclude said at least one of said widgets from said screenshot image further comprises program instructions executable by said processor to cause said processor to remove a portion from said displayed user interface of said user interface corresponding to said at least one of said widgets at a time of said capture of said screenshot image. 11. The non-transitory computer program product of claim 7, further comprising program instructions executable by said processor to cause said processor to receive data from a server updating said privacy policy. 12. The non-transitory computer program product of claim 7, further comprising program instructions executable by said processor to cause said processor to receive data from a server modifying said privacy policy. 13. The non-transitory computer program product of claim 7, further comprising program instructions executable by said processor to cause said processor to receive data from a server replacing said privacy policy. 14. The non-transitory computer program product of claim 7, further comprising program instructions executable by said processor to cause said processor to record metadata about said at least one of said widgets contemporaneously with said capture. 15. 
The non-transitory computer program product of claim 7, further comprising program instructions executable by said processor to cause said processor to: receive a selection of a widget not identified in said privacy policy; and update said privacy policy to include said widget not identified in said privacy policy. 15 BACKGROUND The present disclosure relates to the preservation of privacy in mobile applications, and more particularly, to the preservation of privacy in monitored applications. Customers demand mobile applications that are responsive, easy to use and intuitive. While significant effort goes into designing and testing the user interfaces of mobile applications, in many cases problems and usability issues do not become apparent until the application has been deployed. These issues can occur despite best efforts used during the implementation and testing phases of application development. Collecting data on the actual behavior of the application following its deployment is important to the application's overall success with customers. Currently, customer analytics platforms exist for providing digital customer experience management and customer behavior analysis solutions. One example is TEALEAF, which is a customer experience management solution to help companies meet online conversion and customer retention objectives. One goal of these systems is to determine how users or customers are interacting with a given product. BRIEF SUMMARY According to an exemplary embodiment of the present invention, a method of operating a mobile device includes displaying a user interface as an image, the user interface being composed of a plurality of widgets, storing a privacy policy identifying at least one of the widgets, capturing a screenshot image corresponding to the displayed image, excluding at least one of the widgets from the screenshot image to create a modified screenshot image, and transmitting the modified screenshot image over a network to a monitoring server. According to an exemplary embodiment of the present invention, a computer network includes a mobile device running an application capturing and modifying a screenshot image using a privacy policy to generate a modified screenshot image, a computer network, and a monitoring server in signal communication with the mobile device over the computer network, wherein the monitoring server provides the privacy policy to the mobile device and receives the modified screenshot image from the mobile device, and wherein the privacy policy identifies a plurality of widgets of a user interface to be excluded from the screenshot image. As used herein, “facilitating” an action includes performing the action, making the action easier, helping to carry the action out, or causing the action to be performed. Thus, by way of example and not limitation, instructions executing on one processor might facilitate an action carried out by instructions executing on a remote processor, by sending appropriate data or commands to cause or aid the action to be performed. For the avoidance of doubt, where an actor facilitates an action by other than performing the action, the action is nevertheless performed by some entity or combination of entities. One or more embodiments of the invention or elements thereof can be implemented in the form of a computer program product including a computer readable storage medium with computer usable program code for performing the method steps indicated. 
Furthermore, one or more embodiments of the invention or elements thereof can be implemented in the form of a system (or apparatus) including a memory, and at least one processor that is coupled to the memory and operative to perform exemplary method steps. Yet further, in another aspect, one or more embodiments of the invention or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s) stored in a computer readable storage medium (or multiple such media) and implemented on a hardware processor, or (iii) a combination of (i) and (ii); any of (i)-(iii) implement the specific techniques set forth herein. Techniques of the present invention can provide substantial beneficial technical effects. For example, one or more embodiments may provide one or more of the following advantages: address the issue of user privacy and sensitive information in monitored applications, and enable customizable privacy policies that can be created/updated without requiring the application to be altered. These and other features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS Preferred embodiments of the present invention will be described below in more detail, with reference to the accompanying drawings: FIG. 1 depicts a cloud computing node according to an embodiment of the present invention; FIG. 2 depicts a cloud computing environment according to an embodiment of the present invention; FIG. 3 depicts abstraction model layers according to an embodiment of the present invention; FIG. 4 is a flow diagram of a method for monitoring an application while preserving user privacy according to an embodiment of the present invention; FIG. 5A is a view of a mobile device having a display screen according to an embodiment of the present invention; FIG. 5B is a view of a mobile device having a modified screen capture image illustrating privacy protection features according to embodiments of the present invention; FIG. 6 is a view of a network connecting a mobile device to a monitoring server according to an exemplary embodiment of the present invention; and FIG. 7 is a block diagram depicting an exemplary computer system embodying a method of privacy preserving application monitoring according to an exemplary embodiment of the present invention. DETAILED DESCRIPTION According to one or more embodiments of the present invention, an application running on a mobile device is configured to capture one or more screenshot images during ongoing use of the application. According to an embodiment of the present invention, the application includes code causing the mobile device to capture the screenshot images and gather state information about user interface (UI) elements within the running application (e.g., visible in the context of the application). A privacy policy stored on the mobile device is used to identify and exclude certain widgets from the screenshot images, thereby causing the mobile device to generate modified screenshot images. These modified screenshot images can be communicated to a monitoring server to assess usability of the mobile application or for other purposes. 
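The capture-exclude-transmit flow just described lends itself to a short sketch. The Java below is purely illustrative: the Widget, PrivacyPolicy, Screen, and MonitoringClient types are hypothetical stand-ins invented for this example, not the interfaces of any actual SDK discussed in this disclosure.

```java
import java.util.List;

// A minimal sketch, assuming hypothetical Widget/PrivacyPolicy/Screen/MonitoringClient
// interfaces, of capturing a screenshot with policy-listed widgets excluded.
public class PrivacyPreservingCapture {

    interface Widget {
        String id();
        void setVisibleInCapture(boolean visible); // assumed platform hook
    }

    interface PrivacyPolicy {
        boolean isExcluded(Widget w); // true if the widget must not appear
    }

    interface Screen {
        List<Widget> widgets();
        byte[] renderToImage(); // assumed off-screen render call
    }

    interface MonitoringClient {
        void send(byte[] modifiedScreenshot); // transmit to the monitoring server
    }

    // Temporarily hide policy-listed widgets, capture, transmit, then restore.
    static void captureAndSend(Screen screen, PrivacyPolicy policy, MonitoringClient client) {
        List<Widget> hidden = new java.util.ArrayList<>();
        try {
            for (Widget w : screen.widgets()) {
                if (policy.isExcluded(w)) {
                    w.setVisibleInCapture(false); // temporary change, per the policy
                    hidden.add(w);
                }
            }
            client.send(screen.renderToImage()); // the modified screenshot image
        } finally {
            for (Widget w : hidden) {
                w.setVisibleInCapture(true); // revert widgets to their prior state
            }
        }
    }
}
```

The try/finally structure mirrors a requirement discussed later in this description: any changes made to the display for the capture are reversed once the screenshot image is taken.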
The UI elements, including buttons, sheets, controls, text boxes, containers, etc., are referred to as widgets for the remainder of the disclosure. These UI elements are not intended to be limiting, and other UI elements, now known and yet to be developed, are to be considered widgets in the context of this disclosure. According to at least one embodiment, a mobile application's code is augmented with additional code that is called upon to capture a screenshot image while selectively excluding information from the screenshot image, by omission or removal, according to a privacy policy associated with the application and/or the mobile device. The additional code can be implemented as, for example, a toolkit, a library/software development kit (SDK), or the like. In the case of the library/SDK implementation, the privacy preserving functionality is added to the mobile application as logic in the form of a library/SDK. The library/SDK creates screenshot images contemporaneously with metadata about one or more widgets captured in the screenshot image, including their positions, sizes, and identifications. The metadata facilitates widget identification. Further, the metadata can be used in the creation of a privacy policy, for example, by identifying widgets according to their parameters (e.g., location, size, function, etc.). According to an embodiment of the present invention, the additional code further causes the mobile device to communicate with a monitoring server, sending the modified screenshot images to the server. In at least one embodiment, user privacy expectations are met or exceeded by limiting information sent to the monitoring server by the mobile device, e.g., wherein the mobile device captures only a portion of a user interface, excluding one or more fields or areas of the user interface from a screenshot image. According to an embodiment of the present invention, the monitoring server or another server can update, modify, or replace the privacy policy. For example, the mobile device, when communicating with the monitoring server, can receive a new or updated privacy policy. According to an embodiment of the present invention, a remote application (e.g., running on a server remote from the mobile device) is configured to create the privacy policy. While one or more embodiments of the present invention concern a mobile device running an application to perform certain technological acts, these and other embodiments can be implemented using a cloud architecture, such as the implementation of the monitoring server, which receives and sends data to the library. As such, a description of cloud computing follows. It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. 
This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. 
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now to FIG. 1, a schematic of an example of a cloud computing node is shown. Cloud computing node 10 is only one example of a suitable cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, cloud computing node 10 is capable of being implemented and/or performing any of the functionality set forth hereinabove. In cloud computing node 10 there is a computer system/server 12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server 12 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server 12 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown in FIG. 1, computer system/server 12 in cloud computing node 10 is shown in the form of a general-purpose computing device. The components of computer system/server 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16. Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. 
By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12, and it includes both volatile and non-volatile media, removable and non-removable media. System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 18 by one or more data media interfaces. As will be further depicted and described below, memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention. Program/utility 40, having a set (at least one) of program modules 42, may be stored in memory 28 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments of the invention as described herein. Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a display 24, etc.; one or more devices that enable a user to interact with computer system/server 12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 22. Still yet, computer system/server 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As depicted, network adapter 20 communicates with the other components of computer system/server 12 via bus 18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. Referring now to FIG. 2, illustrative cloud computing environment 50 is depicted. 
As shown, cloud computing environment 50 comprises one or more cloud computing nodes 10 with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone 54A, desktop computer 54B, laptop computer 54C, and/or automobile computer system 54N may communicate. Nodes 10 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment 50 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices 54A-N shown in FIG. 2 are intended to be illustrative only and that computing nodes 10 and cloud computing environment 50 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now to FIG. 3, a set of functional abstraction layers provided by cloud computing environment 50 (FIG. 2) is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 3 are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer 60 includes hardware and software components. Examples of hardware components include mainframes, in one example IBM® zSeries® systems; RISC (Reduced Instruction Set Computer) architecture based servers, in one example IBM pSeries® systems; IBM xSeries® systems; IBM BladeCenter® systems; storage devices; networks and networking components. Examples of software components include network application server software, in one example IBM WebSphere® application server software; and database software, in one example IBM DB2® database software. (IBM, zSeries, pSeries, xSeries, BladeCenter, WebSphere, and DB2 are trademarks of International Business Machines Corporation registered in many jurisdictions worldwide). Virtualization layer 62 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers; virtual storage; virtual networks, including virtual private networks; virtual applications and operating systems; and virtual clients. In one example, management layer 64 may provide the functions described below. Resource provisioning provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal provides access to the cloud computing environment for consumers and system administrators. Service level management provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. 
Workloads layer 66 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation; software development and lifecycle management; virtual classroom education delivery; data analytics processing; transaction processing; and mobile desktop. A mobile application, or app, is a software program executable by a mobile device such as a smartphone, tablet or music player. Mobile applications can exist in one or a variety of states. For example, a currently executing and visible application (i.e., in the foreground) is said to be the active, visible or foreground application. Other applications can be executing, but not visible, and are considered background applications. Still other applications can be in the background, but not executing, and are considered suspended or empty applications. While these definitions are provided for a general overview of application states, it should be understood that additional states can be used, for example, in the case where an operating system supports multiple visible applications, which can be prioritized. Referring to FIG. 4, according to an embodiment of the present invention, a method (400) for gathering screenshot images and state information about a mobile device includes identifying widgets within an application (401). Unique identifiers enable the widgets to be identified at runtime in the application. Examples of identifiers include, but are not limited to, an identification (id) field assigned to a widget object, a tag field, an XPath-like expression locating the widget, e.g., BookingActivity→Root Container→3rd Container→2nd Container→4th Button, and combinations of the aforementioned identifiers. In the case of the identification field example, this can be an identification field of a widget system. The identification field can be an integer type field (e.g., numeric). In such a case, a platform (e.g., operating system) provides a method such as “Widget object = getWidgetById(int id)” that can be used to directly obtain a widget object, e.g., an edit box instance, by id. In the case of the tag field example, a platform supports user defined data included in each widget. The user defined data, used as a tag field, can include, for example, a complex object by reference or a simple string tag. That is, the user defined data can be used as a means of identification. The tags in this case are created to be unique for the widgets so they can be referenced or located via their unique tag values. In the case of an expression-based identifier example, a widget, such as a button, can be referred to using directions or a path. For example, an expression can identify the button as part of the screen for the BookingActivity example given above, which navigates to a particular button in a user interface. Typically, widgets are placed directly in the UI or in another container in the UI, which is a special type of widget that can have, and lay out, other widgets inside it, for example, to arrange a row of buttons. A screen typically has a container at a root, such that widgets can be placed in it or other containers. According to one or more embodiments of the present invention, these user interface elements are limited to user interface elements that have at least one capability to display user data, e.g., in textual form or in image form. In the present disclosure, sensitive and/or private data is called user data. 
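Returning to the three identification strategies above (numeric id, unique tag, and path expression), a minimal sketch follows. The Widget interface and the slash-separated path syntax are hypothetical stand-ins for this example; real platforms expose their own equivalents (for instance, Android's View.findViewById and View.getTag).

```java
import java.util.List;
import java.util.Optional;

// Sketch of three widget lookup strategies, assuming a hypothetical Widget tree.
public class WidgetLocator {

    interface Widget {
        int id();
        Object tag();
        List<Widget> children(); // empty unless the widget is a container
    }

    // Strategy 1: direct lookup by numeric identifier (depth-first search).
    static Optional<Widget> byId(Widget root, int id) {
        if (root.id() == id) return Optional.of(root);
        for (Widget c : root.children()) {
            Optional<Widget> hit = byId(c, id);
            if (hit.isPresent()) return hit;
        }
        return Optional.empty();
    }

    // Strategy 2: lookup by unique tag value attached to the widget.
    static Optional<Widget> byTag(Widget root, Object tag) {
        if (tag.equals(root.tag())) return Optional.of(root);
        for (Widget c : root.children()) {
            Optional<Widget> hit = byTag(c, tag);
            if (hit.isPresent()) return hit;
        }
        return Optional.empty();
    }

    // Strategy 3: an XPath-like path of zero-based child indices, e.g. "2/1/3"
    // for Root Container -> 3rd Container -> 2nd Container -> 4th Button.
    static Optional<Widget> byPath(Widget root, String path) {
        Widget current = root;
        for (String step : path.split("/")) {
            int index = Integer.parseInt(step);
            if (index >= current.children().size()) return Optional.empty();
            current = current.children().get(index);
        }
        return Optional.of(current);
    }
}
```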
The user data can be data that is sensitive, e.g., credit card numbers, identifying information, demographic information, etc. In one or more implementations, the user data can be data that an entity controlling the monitoring server does not want to view or that would expose the entity to additional obligations, e.g., enterprise sensitive data, health care data, etc. For example, a healthcare-related application can be deployed in a company infrastructure, wherein the company wants to monitor the application to potentially improve its functionality, but the company does not want to be exposed to healthcare data that the user of the application might view. According to an embodiment of the present invention, a privacy policy is created for the application that enumerates and uniquely identifies each of the widgets. Widgets that are to be excluded can be identified in the privacy policy. According to an embodiment of the present invention, the application is augmented with additional code to read the privacy policy and capture screenshot images in accordance with this policy (402). The augmentation can be performed at development time or after the application has been developed. At development time, a developer uses a library/SDK tool that augments the application with privacy preserving functionality. Alternatively, after the application is developed, application wrapping techniques can instead be used to inject the library/SDK into the application. The creation of a privacy policy is described separately herein. In FIG. 4, blocks 403-405 represent steps performed by a mobile device running an application augmented with privacy preserving functionality and a privacy policy. One or more widgets that can potentially display sensitive data are listed in the privacy policy. The contents of these widgets are excluded from the screenshot image communicated to the monitoring server, i.e., the modified screenshot image. The additional code of the application controls the capture of screenshot images using the privacy policy to identify and exclude any widgets listed in the privacy policy (403). While the following techniques are examples of how this can be accomplished, the present invention is not limited to these techniques. According to an embodiment of the present invention, after the user interface has been laid out and before any screenshot image is captured, the currently displayed widgets that are configured to display user data are identified using the privacy policy (403). According to an embodiment of the present invention, the identified widgets are excluded (e.g., omitted, occluded or removed from a screenshot) during any screen capture to generate a modified screenshot image (404). That is, the modified screenshot image is a screenshot image in which the widgets identified in the privacy policy have been excluded. In one example, the identified user interface elements are made fully transparent. In another example, the identified user interface elements are obscured to remove them from view without affecting the layout. In yet another example, the additional code determines whether a widget in the privacy policy contains data, wherein the additional code is configured to either ignore the privacy policy in the case of a lack of user data or exclude the widget regardless of whether user data is displayed. 
The logic of the additional code that manages the exclusion recognizes the presence of user data and only excludes the widget in a case of a visible manifestation of the user data in the sensitive widget. It should be understood that any changes to the displayed image made during a screen capture are reversed after the screenshot image is captured. According to at least one embodiment of the present invention, a widget set that is being used to create the user's display is modified, whereupon the platform (e.g., operating system) is used to render the widgets to an offscreen image using the privacy policy. Once the screenshot image is captured, the modified widgets are restored to their original state. In this example, the physically displayed image (i.e., the image displayed to the user) is not directly modified. According to an embodiment of the present invention, in a case where the displayed image is modified using the privacy policy prior to capturing the screenshot image, the mobile device, reading the privacy policy, makes temporary changes to the widgets used to render the displayed image (e.g., the displayed screen), captures the screenshot image, and reverts the widgets to their prior state, and hence restores the previously displayed image. In at least one embodiment, this process results in a brief change in the displayed image, sufficient for the application and/or operating system of the mobile device to capture the screenshot image, before the previously displayed image is restored. According to an embodiment of the present invention, after the user interface has been laid out and before any image is captured, the bounds of identified widgets are determined. The identified widgets are left alone at this stage and the image of the screen is captured. After the capture, the image may still contain user data. According to an embodiment of the present invention, utilizing the recorded bounds information about the identified widgets, the image can be altered to obscure the identified widgets, e.g., by altering the color of the original pixels (404). According to an embodiment of the present invention, screen captures can be triggered manually or automatically upon the occurrence of certain events. For example, a screen capture can be triggered when an application is made active, just before a switch is made to a new screen of the application, or in response to a change within a currently displayed screen. According to an embodiment of the present invention, automatic triggers can be configured to be active based on certain parameters, such as current connectivity of the user device (e.g., available bandwidth, availability of a WiFi signal, etc.), battery level, time of day, etc. In at least one example, the screen capture function is performed only in cases where it does not detract from the user experience of the application. According to an embodiment of the present invention, captured images and state information are communicated from the user device to a monitoring server (405). For example, the modified screenshot image can be sent via the Internet to the monitoring server along with other user experience monitoring data (e.g., location, network state, date/time, etc.). According to an embodiment of the present invention, the monitoring server receives and stores the captured screenshot images, making them available for analysis. For example, the captured screenshot images can be used to solve an issue that the user has experienced while using an application. 
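The post-capture variant just described, in which recorded widget bounds are used to obscure pixels after the screenshot is taken (404), might look like the following sketch built on the standard java.awt imaging classes. The list of sensitive bounds is assumed to have been gathered at layout time; the mask color is a placeholder that a real policy might specify.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.util.List;

// Sketch of post-capture redaction: capture the screen intact, then paint
// over the recorded bounds of the policy-listed widgets.
public class PostCaptureRedactor {

    static BufferedImage redact(BufferedImage screenshot, List<Rectangle> sensitiveBounds) {
        Graphics2D g = screenshot.createGraphics();
        try {
            g.setColor(Color.BLACK); // mask color could come from the privacy policy
            for (Rectangle r : sensitiveBounds) {
                g.fillRect(r.x, r.y, r.width, r.height); // obscure the original pixels
            }
        } finally {
            g.dispose(); // release the graphics context
        }
        return screenshot;
    }
}
```

Because the displayed user interface is never touched, this approach avoids any visible flicker, at the cost of the unredacted pixels briefly existing in memory between capture and redaction.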
According to an embodiment of the present invention, the monitoring server can change the privacy policy by replacing or updating the privacy policy implemented by the additional code of the application running on the mobile device. Thus, the monitoring server can ensure that sensitive data does not appear in the modified screenshot images communicated by the mobile device as part of the user experience monitoring system. According to an embodiment of the present invention, the privacy policy can be securely updated by sending deployed application instances a new version of the policy, allowing widgets to be added to, or removed from, the privacy policy. The suppression of user data in the modified screenshot images can be controlled through the privacy policy without altering any code of the application or the additional code. According to an embodiment of the present invention, once the application on the mobile device has a privacy policy and has been augmented to capture screenshot images at runtime, it can be deployed to customers. According to an embodiment of the present invention, modified screenshot images captured by the mobile devices are used for user experience debugging and problem assistance/resolution performed at a monitoring server. FIGS. 5A-B show a mobile device 500 displaying an application UI 501. FIG. 5A shows the application UI 501 including a text box 502 including user text data, a field showing user image data 503 and three control buttons 504-506. In FIG. 5B, a modified screenshot image (e.g., captured by a mobile device and communicated to a monitoring server) is illustrated, wherein the text box 502 and the field showing the user image data 503 are occluded. Furthermore, the button 504 has been made transparent, such as in the case where the mere presence of the button could reveal user data (e.g., an available action that reveals some information about the user). According to an embodiment of the present invention, the mechanism for removing user data (e.g., obscuring data with a mask, changing transparency, etc.) from captured images is controlled by the privacy policy. For example, a mask can be selected to overlay an identified location of a widget in any captured image. A privacy policy can allow different user interface elements to be hidden by different mechanisms within the same user interface as applicable/desirable. According to an embodiment of the present invention, a method specified by a privacy policy overrides another method (e.g., a default method) of removing user data. Whether the images are screenshots that are taken at defined and/or periodic intervals, and sent as images, or the images are combined as individual frames into a video that captures the user experience, the same techniques can be applied. According to an embodiment of the present invention, at least one widget 507 enables a user of the mobile device to nominate a widget whose data is to be suppressed. For example, upon the selection of the nomination button 507, the application communicates a next selected widget to the monitoring server as a nominated widget. Other methods of nominating widgets can be used. For example, according to at least one embodiment of the present invention, a captured screenshot image and its metadata are displayed by the application to the user, enabling the user to pick a widget and affect the privacy policy. 
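A server-driven policy update of the kind described at the start of this passage could be sketched as follows. The endpoint URL, the JSON wire format shown in the comment, and the PolicyUpdater type are all hypothetical; the point is only that the active policy is swapped atomically, with no change to application code or the additional code.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.atomic.AtomicReference;

// Sketch, assuming a hypothetical policy endpoint and wire format such as:
//   { "version": 7, "excludedWidgets": ["txtCardNumber", "imgAvatar"] }
public class PolicyUpdater {

    private final AtomicReference<String> activePolicyJson = new AtomicReference<>("{}");
    private final HttpClient http = HttpClient.newHttpClient();

    // Fetch the latest policy; on success, replace the old one in a single step.
    void refreshFrom(String policyUrl) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(policyUrl)).GET().build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() == 200) {
            activePolicyJson.set(response.body()); // atomic replacement of the policy
        }
    }

    String currentPolicy() {
        return activePolicyJson.get(); // read by the capture code before each screenshot
    }
}
```

Because the capture code reads the policy at capture time, widgets can be added to or removed from the policy without redeploying the application.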
In at least one embodiment, the nominate button 507 puts the application into a privacy policy modifier mode, where clicking on a widget in the user interface affects the privacy policy. This functionality enables the end-user to control what widgets are monitored where it might be applicable/reasonable to do so. FIG. 6 depicts a mobile device 601 communicating with a monitoring server 602 across a computer network 603, such as the Internet. Furthermore, a developer server 604 is disposed in signal communication with the mobile device 601 to, for example, deploy applications, updates to privacy policies and the like. Additional computer nodes can be disposed in the computer network 603, such as app store servers and the like. Privacy Policy Creation: According to an embodiment of the present invention, tooling code can be used to create a privacy policy. An initial privacy policy is created during an application development process. The widget identification and creation of the privacy policy file (e.g., identifying a plurality of widgets) can be performed entirely manually or assisted by a user-interface (UI) layout builder. In the case of an assisted implementation, a custom plugin for an integrated development environment (IDE) enables a developer, privacy expert, etc., to create a privacy policy identifying certain widgets used by the application/graphical user interface (GUI) to be excluded from any screenshot images captured as part of an operation of the monitoring server. In at least one embodiment, a UI can be shown in a design panel used by the developer, and one or more widgets are selected/chosen for inclusion in the privacy policy. According to at least one embodiment, the tooling code can be in the form of an SDK/library added to the application either by a developer or by injection (i.e., application wrapping). The library/SDK for creating the privacy policy assists in the identification of widgets and the creation of the privacy policy. According to an embodiment of the present invention, the SDK/library for creating a privacy policy includes logic similar to that of the privacy capture SDK/library in the deployed application to be monitored. According to an embodiment of the present invention, this library/SDK for creating the privacy policy is configured to capture one or more views of the display of the user device (screenshot images) as the application runs. Each screenshot image includes a map of the location of each widget in the image. The state data can include hierarchical relationships between the widgets. According to an embodiment of the present invention, the SDK/library for creating the privacy policy includes functionality for sending data (e.g., unmodified screenshot image, metadata, etc.) enabling another application to display the unmodified screenshot image with hotspots over any widget by use of the metadata, and allowing the privacy policy to be created by selecting the hotspots corresponding to certain widgets. For purposes of creating a privacy policy, a special build of the application can be created including the SDK/library for creating the privacy policy to facilitate the privacy policy creation. According to an embodiment of the present invention, in a case of an application developed by a third party, users' experiences can be monitored by injecting the library/SDK for creating the privacy policy into the application. The library/SDK for creating the privacy policy enables all screens to be captured so that widgets can be located. 
The library/SDK for creating the privacy policy captures the screenshot images, the widget hierarchy, and any identifications or tags that the widgets might have. The widgets can then be identified by an identification or tag if it exists, or by a path down to the widget's position (e.g., by an XPath-like expression). According to an embodiment of the present invention, a privacy policy tool is provided (e.g., at the monitoring server or another server) to create the privacy policy. According to an embodiment of the present invention, the privacy policy tool identifies widgets using scene graphs (e.g., hierarchical relationships in a container/widget hierarchy) captured from the application. The hierarchical relationships can be used when forming XPath-like expressions. The application can be run on the monitoring server or another computer system to collect screenshot images and create the privacy policy. The privacy policy can then be deployed to the mobile device(s) via the Internet or another computer network. Recapitulation: According to an exemplary embodiment of the present invention, a method of operating a mobile device includes displaying a user interface as an image, the user interface being composed of a plurality of widgets, storing a privacy policy identifying at least one of the widgets, capturing a screenshot image corresponding to the displayed image, excluding at least one of the widgets from the screenshot image to create a modified screenshot image, and transmitting the modified screenshot image over a network to a monitoring server. The methodologies of embodiments of the disclosure may be particularly well-suited for use in an electronic device or alternative system. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “processor,” “circuit,” “module” or “system.” Furthermore, it should be noted that any of the methods described herein can include an additional step of providing a computer system for preserving user privacy in mobile applications. Further, a computer program product can include a tangible computer-readable recordable storage medium with code adapted to be executed to carry out one or more method steps described herein, including the provision of the system with the distinct software modules. FIG. 7 is a block diagram depicting an exemplary computer system 700 embodying the computer system for preserving user privacy in mobile applications (see FIG. 4) according to an embodiment of the present invention. The computer system shown in FIG. 7 includes a processor 701, memory 702, display 703, input device 704 (e.g., keyboard), a network interface (I/F) 705, a media I/F 706, and media 707, such as a signal source, e.g., camera, Hard Drive (HD), external memory device, etc. In different applications, some of the components shown in FIG. 7 can be omitted. The whole system shown in FIG. 7 is controlled by computer readable instructions, which are generally stored in the media 707. The software can be downloaded from a network (not shown in the figures) and stored in the media 707. Alternatively, software downloaded from a network can be loaded into the memory 702 and executed by the processor 701 so as to complete the function determined by the software. 
The processor 701 may be configured to perform one or more methodologies described in the present disclosure, illustrative embodiments of which are shown in the above figures and described herein. Embodiments of the present invention can be implemented as a routine that is stored in memory 702 and executed by the processor 701 to process the signal from the media 707. As such, the computer system is a general-purpose computer system that becomes a specific purpose computer system when executing routines of the present disclosure. Although the computer system described in FIG. 7 can support methods according to the present disclosure, this system is only one example of a computer system. Those skilled in the art should understand that other computer system designs can be used to implement embodiments of the present invention. The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. 15214849 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Mar 28th, 2019 12:00AM https://www.uspto.gov?id=US11316530-20220426 Adaptive compression for data services A method, system, and computer program product for data compression in storage clients. In some embodiments, a storage client for accessing a storage service from a computer program is provided. A compression method is provided in the storage client to reduce a size of data objects. A frequency of compressing data from the computer program is varied, or a compression algorithm is modified, based on assessing costs and benefits of compressing the data. 11316530 1. A computer-implemented method comprising: providing a storage client for accessing a storage service from a computer program, wherein the storage client is implemented with software and the storage service is running on at least one computer; the storage client storing data objects using the storage service; providing at least one compression algorithm, implemented with software, in the storage client to reduce a size of data objects; compressing, with the storage client, data objects at a first frequency; and varying a frequency of compression of data objects from the first frequency to a second frequency, different from the first, or varying a type of compression algorithm based on assessing, with the storage client software, that a change has occurred to at least one of: available bandwidth to transfer data between the storage client and the storage server, computational resources for performing compression, available space for storing data, cost of available bandwidth to transfer data between the storage client and the storage server, cost of computational resources for performing compression, or cost of available space for storing data. 2. The computer-implemented method of claim 1, further comprising: providing a cache within the storage client for reducing a number of accesses to the storage service. 3. The computer-implemented method of claim 2, wherein varying the frequency of compression of data objects further comprises: increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when the amount of free cache space available to the computer program falls below a threshold. 4. The computer-implemented method of claim 1, wherein varying the frequency of compression of data objects further comprises: increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when the amount of free space in the storage service available to the computer program falls below a threshold. 5. The computer-implemented method of claim 1, wherein varying the frequency of compression of data objects further comprises: increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when an available bandwidth between the storage client and the storage service falls below a threshold. 6. The computer-implemented method of claim 1, wherein varying the frequency of compression of data objects further comprises: increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when the storage client determines that a cost for storing data on the storage service increases above a predetermined threshold. 7. 
The computer-implemented method of claim 1, wherein varying the frequency of compression of data objects further comprises: determining data compression ratios for different datatypes; and increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio for datatypes with higher data compression ratios. 8. The computer-implemented method of claim 1, wherein varying the frequency of compression of data objects further comprises: monitoring a CPU usage on at least one computing node performing compression; and increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when the monitored CPU usage decreases. 9. The computer-implemented method of claim 1, further comprising: providing an encryption method in the storage client to preserve data privacy. 10. A computer program product comprising a non-transitory storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising: providing a storage client for accessing a storage service from a computer program, wherein the storage client is implemented with software and the storage service is running on at least one computer; the storage client storing data objects using the storage service; providing at least one compression algorithm, implemented with software, in the storage client to reduce a size of data objects; compressing, with the storage client, data objects at a first frequency; and varying a frequency of compression of data objects from the first frequency to a second frequency, different from the first, or varying a type of compression algorithm based on assessing, with the storage client software, that a change has occurred to at least one of: available bandwidth to transfer data between the storage client and the storage server, computational resources for performing compression, available space for storing data, cost of available bandwidth to transfer data between the storage client and the storage server, cost of computational resources for performing compression, or cost of available space for storing data. 11. The computer program product of claim 10, wherein varying the frequency of compression of data objects further comprises: increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when the amount of free space in the storage service available to the computer program falls below a threshold. 12. The computer program product of claim 10, wherein varying the frequency of compression of data objects further comprises: increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when an available bandwidth between the computer program and the storage service falls below a threshold. 13. The computer program product of claim 10, wherein varying the frequency of compression of data objects further comprises: increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when the storage client determines that a cost for storing data on the storage service increases above a predetermined threshold. 14. 
The computer program product of claim 10, wherein varying the frequency of compression of data objects further comprises: determining data compression ratios for different datatypes; and increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio for datatypes with higher data compression ratios. 15. The computer program product of claim 10, wherein varying the frequency of compression of data objects further comprises: monitoring a CPU usage on at least one computing node performing compression; and increasing the frequency of compressing data objects or using a type of compression algorithm with a higher data compression ratio when the monitored CPU usage decreases. 16. A system, comprising: a processor in communication with one or more types of memory, the processor configured to: provide a storage client for accessing a storage service from a computer program, wherein the storage client is implemented with software and the storage service is running on at least one computer; the storage client storing data objects using the storage service; provide at least one compression algorithm, implemented with software, in the storage client to reduce a size of data objects; compress, with the storage client, data objects at a first frequency; and vary a frequency of compression of data objects from the first frequency to a second frequency, different from the first, or vary a type of compression algorithm based on assessing, with the storage client software, that a change has occurred to at least one of: available bandwidth to transfer data between the storage client and the storage server, computational resources for performing compression, available space for storing data, cost of available bandwidth to transfer data between the storage client and the storage server, cost of computational resources for performing compression, or cost of available space for storing data. 17. The system of claim 16, wherein the processor is further configured to: integrate a cache within the storage client for reducing a number of accesses to the storage service. 18. The system of claim 17, wherein, to vary the frequency of compression of data objects, the processor is further configured to: increase the frequency of compressing data objects or use a type of compression algorithm with a higher data compression ratio when the amount of free cache space available to the computer program falls below a threshold. 19. The system of claim 16, wherein, to vary the frequency of compression of data objects, the processor is further configured to: increase the frequency of compressing data objects or use a type of compression algorithm with a higher data compression ratio when the storage client determines that a cost for storing data objects on the storage service increases above a predetermined threshold. 20. The computer-implemented method of claim 1, wherein at least one of the plurality of the storage services is a cloud service. 20 BACKGROUND The present disclosure relates to data storage, and more particularly, to methods, systems and computer program products for data compression in storage clients. There are a wide variety of ways of storing data persistently, particularly with cloud-based systems. These include file systems, relational databases (e.g. DB2, MySQL, SQL Server), and NoSQL systems (e.g. Redis, CouchDB/Cloudant, HBase, Hazelcast, MongoDB). It is typical to have an application program store data persistently using a client. 
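By way of illustration only, the following is a minimal Python sketch of this usage pattern, in which an application program stores data persistently through a client object. The class and method names are hypothetical and do not correspond to the API of any system named above; an in-memory dict stands in for the remote storage service.

    class KeyValueClient:
        """Hypothetical storage client; a dict stands in for the back-end storage service."""

        def __init__(self):
            self._store = {}

        def put(self, key, value):
            # A real client would issue a request to the storage server here.
            self._store[key] = value

        def get(self, key):
            return self._store.get(key)

    client = KeyValueClient()
    client.put("order:1001", {"status": "shipped"})
    assert client.get("order:1001")["status"] == "shipped"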
There are a number of problems with storage clients, such as that a client will typically work with only a single back-end storage system, or that the overhead for accessing the back-end storage systems can be significant. The problem is often much worse in cloud environments, where the distance to cloud servers can add tens (or even hundreds) of milliseconds of latency. In some instances, the persistent storage system might become unavailable due to failures or network problems. This can be a problem if the client is communicating remotely with a cloud server and does not have good connectivity. SUMMARY In accordance with an embodiment, a method for data compression in storage clients is provided. The method may include providing a storage client for accessing a storage service from a computer program; providing a compression method in the storage client to reduce a size of data objects; and varying a frequency of compressing data from the computer program or modifying a compression algorithm based on assessing costs and benefits of compressing the data. In another embodiment, a computer program product may comprise a non-transitory storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method that may include providing a storage client for accessing a storage service from a computer program; providing a compression method in the storage client to reduce a size of data objects; and varying a frequency of compressing data from the computer program or modifying a compression algorithm based on assessing costs and benefits of compressing the data. In another embodiment, a system may include a processor in communication with one or more types of memory. The processor may be configured to provide a storage client for accessing a storage service from a computer program; provide a compression method in the storage client to reduce a size of data objects; and vary a frequency of compressing data from the computer program or modify a compression algorithm based on assessing costs and benefits of compressing the data. BRIEF DESCRIPTION OF THE DRAWINGS The foregoing and other features and advantages of the disclosure are apparent from the following detailed description taken in conjunction with the accompanying drawings in which: FIG. 1 is a block diagram illustrating a computing environment that includes multiple storage clients and storage servers in accordance with an exemplary embodiment; FIG. 2 is a block diagram illustrating an architecture of an enhanced client in accordance with an exemplary embodiment; FIG. 3 is a block diagram illustrating a cache interface with multiple implementations in accordance with an exemplary embodiment; FIG. 4 is a block diagram illustrating another depiction of a cache interface with multiple implementations in accordance with an exemplary embodiment; FIG. 5 is a block diagram illustrating a remote process cache implementation in accordance with an exemplary embodiment; FIG. 6 is a flow diagram illustrating a method for handling cached objects which have expired in accordance with an exemplary embodiment; FIG. 7 is a diagram illustrating a method for handling poor and/or limited connectivity in accordance with an exemplary embodiment; FIG. 8 is a block diagram illustrating a compression interface with multiple implementations in accordance with an exemplary embodiment; FIG. 
9 is a block diagram illustrating an encryption interface with multiple implementations in accordance with an exemplary embodiment; and FIG. 10 is a block diagram illustrating one example of a processing system for practice of the teachings herein. DETAILED DESCRIPTION In accordance with exemplary embodiments of the disclosure, methods, systems and computer program products are provided for data compression in storage clients, which offer access to multiple back-end storage systems, improved performance, and higher availability than previous systems. It is particularly applicable to the cloud where there are multiple storage services available and latency for accessing a cloud storage service can be high. The systems and methods described herein may provide enhanced storage capabilities and a broad selection of storage options, may reduce latency for accessing cloud storage systems (e.g., by moving significant data handling capabilities into the client), may avoid the overhead of remote storage, and may avoid sending confidential data. In some embodiments, an application may use multiple cloud storage systems or change from using one cloud storage system to another. A cloud storage manager may be provided as a layer above the cloud storage system, which allows an application to easily use multiple cloud storage systems and provides additional services not provided by cloud storage systems. The cloud storage manager may provide a storage interface for applications to use. The storage interface may be built for each cloud storage system of interest. In some embodiments, applications may access cloud storage through the storage interface. Substituting different cloud storage systems may not require changes to an application. Options for key-value stores, relational databases, and file systems may be provided by the cloud storage manager. The methods and systems described herein are directed to the design and implementation of storage clients, which offer access to multiple back-end storage systems, improved performance, and higher availability of storage capabilities. It is particularly applicable to the cloud where there are multiple storage services available and latency for accessing a cloud storage service can be high. In some embodiments, the storage client may handle multiple back-end systems. The storage client may define a key-value interface. Any back-end storage system, which implements the key-value interface, may use the storage client. If the server supports delta encoding, then the server may make the choice as to whether to decode a delta and store the full object or to just store the delta. In many cases, the server may not have the ability to decode a delta. In this case, the client may instruct the server to simply store a delta from the previous version. After a certain number of deltas, the client may send a full object (not just the delta) to the server. That way, the server does not have to keep accumulating deltas. Note that the client may perform all delta encoding and decoding (if necessary). The server does not have to understand how to perform delta encoding or decoding. The systems and methods described herein may provide encryption. Users might desire all data stored persistently to be encrypted. Therefore, the storage client may provide data encryption and decryption capabilities. Some embodiments of the disclosure may be directed to support users who have poor connectivity. The caches described herein may provide a method for users to continue to run an application when connectivity is poor. 
When connectivity is restored, a remote storage service can be updated in batches. FIG. 1 is a block diagram illustrating a computing environment 100 that includes multiple storage clients and storage servers in accordance with an exemplary embodiment. In some embodiments, storage systems (which may be offered over the cloud) such as Cloudant, Object Storage (which implements the OpenStack Swift API), and Cassandra typically have clients (e.g., Cloudant client 115, Object Storage client 125, Cassandra client 135), which application programs use to communicate with the actual storage servers (e.g., Cloudant server(s) 110, Object Storage server(s) 120, Cassandra server(s) 130). Although this disclosure is discussed in the context of cloud storage systems, the systems and methods described herein may be applicable to other types of storage systems. In some cases, the clients (e.g., Cloudant client 115, Object Storage client 125, Cassandra client 135) can be language-specific (e.g. written for a specific programming language, such as Java, Python, JavaScript). For example, a Java client might be designed with an API allowing Java programs to use the API using Java method calls. Other storage clients have other types of APIs. For example, a REST API would allow applications to access a storage system using HTTP. The systems and methods described herein may be compatible with a wide variety of types of client (and server) APIs for accessing storage systems, including but not limited to method and/or function calls from conventional programming languages, protocols (e.g. HTTP, XML, JSON, SOAP, many others), and several other established methods for specifying interfaces. Although the disclosure discusses Cloudant, Object Storage, and Cassandra, these services are merely exemplary, and the systems and methods described herein may be applied to different cloud or remote systems or services. FIG. 2 is a block diagram illustrating an architecture 200 of an enhanced storage client in accordance with an exemplary embodiment. The enhanced storage client may handle multiple back-end systems. The enhanced storage client may include the enhanced client module 210 and the cloud service subclients (e.g., Cloudant subclient 215, Object Storage subclient 220, Cassandra subclient 225). This enhanced storage client allows application programs to communicate with multiple different back-end storage systems (e.g., Cloudant server(s) 110, Object Storage server(s) 120, Cassandra server(s) 130). In some embodiments, a key-value interface may be implemented for the enhanced storage client, which may be standardized across all back-end storage systems. Any back-end storage system may use the key-value interface by implementing a subclient (e.g., Cloudant subclient 215) that implements the key-value interface over a back-end storage system (e.g., Cloudant server(s) 110). In this case, an application program can use the back-end storage system by communicating with the enhanced client. It should be noted that the subclient (e.g., Cloudant subclient 215) may implement other methods for communicating with the back-end storage system beyond just the key-value interface. The application has the option of using the back-end-specific methods in the subclient for communicating with the back-end storage system, in addition to the enhanced client key-value interface, which is standard across all back-end storage systems. 
That way, the application program still has the full generality of the features for the back-end storage system. The key-value interface does not limit the usage of the back-end storage system by an application program, since the application program can bypass the key-value interface and use the back-end storage system-specific API calls from the subclient. Other implementations (besides key-value interfaces) are also possible for the enhanced clients. FIGS. 3-4 are discussed collectively. Caching may be used to improve performance and may be useful in cloud-based storage systems in which the client (e.g., Cloudant client 115) is remote from the storage server (e.g., Cloudant server(s) 110). In such embodiments, the physical distance between the client and server may add to the latency for storage operations. In some embodiments, the caches may be integrated directly with the client (e.g., Cloudant client 115, Object Storage client 125, Cassandra client 135), which may enhance functionality and performance of the clients. Additionally, the integration of the caches with the clients may be a feature for application programmers. If application programmers have to implement their own caching solutions outside of the client, it may require considerably more work, and the performance of such caching solutions may not be as good. FIG. 3 is a block diagram illustrating an environment 300 with a cache interface 310 with multiple implementations in accordance with an exemplary embodiment. Multiple caches may be used within the enhanced storage clients. In some embodiments, to utilize a particular cache, the cache interface 310 may be implemented on top of the particular cache. The modular cache design may include a cache interface 310, a same-process implementation 315 as the client (e.g., an in-process cache, which may store data in the same process as the application program), a remote process(es) 320, which may be an open source cache such as Redis 415 and memcached 425, and other implementations 325 (e.g., an open source cache such as Ehcache or Guava caches). FIG. 4 is a block diagram illustrating another environment 400 of a cache interface 410 with multiple implementations in accordance with an exemplary embodiment. In some embodiments, the cache design may be modular. Multiple caches may be used within our enhanced clients. In order to use a particular cache, the cache interface 410 should be implemented on top of a cache (e.g., as illustrated in FIGS. 3-4). The in-process cache 420 may store data in the same process as the application program. FIG. 5 is a block diagram illustrating a remote process cache implementation 500 in accordance with an exemplary embodiment. In some embodiments, two types of caches may be utilized: in-process and remote process. In-process caches may operate in the same process as the application process. They have the advantage of being fast. Data (e.g., cached objects) does not need to be serialized in order to be cached. The cache is not shared with other clients or applications. Remote process caches 520 (e.g. Redis, memcached) execute in different processes from the application program. They have the advantage that they can be shared by multiple clients (e.g., Client1 510, Client2 515) and applications. Furthermore, they can scale to many processes (which can execute on the same or distinct computing nodes). 
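Before turning to the drawbacks of remote-process caches, the modular cache design of FIGS. 3-4 can be sketched in Python. This is an illustrative sketch under the assumption of a simple get/put interface, not the actual cache interface 310/410; the in-process implementation also anticipates the least-recently-used replacement discussed below.

    import abc
    from collections import OrderedDict

    class CacheInterface(abc.ABC):
        """Interface implemented on top of each particular cache (cf. cache interface 310/410)."""

        @abc.abstractmethod
        def get(self, key): ...

        @abc.abstractmethod
        def put(self, key, value): ...

    class InProcessCache(CacheInterface):
        """Same-process cache: no serialization or interprocess communication is needed."""

        def __init__(self, capacity=1024):
            self.capacity = capacity
            self._data = OrderedDict()

        def get(self, key):
            if key not in self._data:
                return None
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]

        def put(self, key, value):
            self._data[key] = value
            self._data.move_to_end(key)
            if len(self._data) > self.capacity:
                self._data.popitem(last=False)  # evict the least recently used object

A remote-process implementation would satisfy the same interface while delegating get and put to, for example, a Redis or memcached client library.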
On the negative side, there is some overhead for the interprocess communication that is required for applications/clients to communicate with the cache(s) 520. In addition, cached data may need to be serialized, which introduces additional overhead. When the cache 520 becomes full, a method may be needed to determine which object to remove from the cache 520 to make room for other objects. This process is known as cache replacement. One of the most widely used cache replacement algorithms is to replace the object which was accessed most distantly in the past (least recently used, or LRU). Other cache replacement algorithms (e.g. greedy-dual size) are also possible. Different cache replacement algorithms are also compatible with the methods and systems described herein. FIG. 6 is a flow diagram illustrating a method 600 for handling cached objects which have expired, in accordance with an exemplary embodiment. In some embodiments, cached objects may have expiration times associated with them. Once the expiration time for a cached object has passed, the object is no longer valid. Cached objects may be deleted from the cache after they have expired. Alternatively, they can be kept in the cache after their expiration times, permitting cached objects that have expired but are still current to remain in the cache. For example, at block 605, an object (o1) with an expiration time of 7:00 AM may be cached at 6:00 AM. At block 610, at 7:00 AM, o1 may remain cached. At block 615, at 7:04 AM, o1 may be requested. When o1 is requested, the server is contacted to see if the version of o1 in the cache is still current (e.g., a get-if-modified-since request may be transmitted to the server). If the server indicates that o1 is still current, the method may proceed to block 620, where the expiration time associated with o1 is updated using a new expiration time provided by the server. This may save network bandwidth (depending on the size of o1) since o1 does not need to be unnecessarily fetched from the server. If the cached version of o1 is determined to be obsolete at block 615, then the method may proceed to block 625. At block 625, the server may send an updated version of o1 to the client, and the cache may be updated using the updated version of o1 received from the server. FIG. 7 is a diagram illustrating a method 700 for handling poor and/or limited connectivity in accordance with an exemplary embodiment. In some embodiments, the cache 710 integrated with the enhanced storage client may be used to mitigate connectivity problems between the client 705 and server 715. During periods when the server 715 is unresponsive (e.g., not responding within a predetermined length of time) and/or the cost to communicate with the server is high (e.g., resources exceed a predetermined threshold), an application (which may be implemented by one or more computer programs) can operate by using the cache 710 for storage instead of the server 715. At data exchanges 720 and 725, the client 705 and server 715 may communicate to transmit batch updates or initiate synchronization when the connectivity is deemed to be responsive (e.g., server 715 responds to a client request within a predetermined period of time). FIG. 7 depicts a situation in which an application using the client 705 relies upon the cache 710 when connectivity between the client 705 and the server 715 is poor. If the server(s) 715 are not responding or are responding too slowly, the application may use the cache 710 for storage instead of the server 715. 
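The FIG. 7 fallback behavior can be sketched in a few lines of Python. The server and cache objects and their methods (a put that accepts a timeout, mark_dirty, dirty_keys, clear_dirty) are assumptions made for illustration only.

    def put_with_fallback(server, cache, key, value, timeout_s=1.0):
        """Store at the server when it is responsive; otherwise store in the cache and
        remember the key so a batch update can run once connectivity returns."""
        try:
            server.put(key, value, timeout=timeout_s)  # hypothetical client call
        except TimeoutError:
            cache.put(key, value)
            cache.mark_dirty(key)  # assumed bookkeeping hook

    def synchronize(server, cache):
        """Batch update exchanged once the server is responsive again (cf. exchanges 720/725)."""
        for key in cache.dirty_keys():  # assumed bookkeeping hook
            server.put(key, cache.get(key))
            cache.clear_dirty(key)

A full implementation would also reconcile versions in both directions, as the example with o1 and o2 below illustrates.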
When server 715 response times improve, the application may start using the storage server 715 again. At this point, several messages might have to be exchanged between the client 705 and the server 715 to make the contents of the cache 710 and the cloud storage server(s) 715 consistent. For example, if the application has updated o1 to version v2 at 7:35 in the cache 710 and the cloud storage server 715 has a previous version of o1 from 7:30, then version v2 of o1 is stored at the cloud storage server(s) 715. If the cache 710 is storing o2 version v3 and the storage server 715 has a newer version v4 of o2, then version v4 of o2 is stored in the cache 710. The client 705 may make adaptive decisions of how frequently to use the cache 710 based on the responsiveness of the server 715. The client 705 may monitor the time it takes for server(s) 715 to respond to client requests. When the server 715 is slow to respond, the client 705 can increase the frequency it uses for caching data. When the server 715 is responding relatively quickly without significant delays, the client 705 can decrease the frequency it uses for caching. For example, suppose that the average time for getting a response from the server 715 increases by 70%. This might result in the client 705 increasing the percentage of requests that it directs to the cache 710. The client 705 might choose to store data in the cache 710 more frequently. It might also choose to retrieve data from the cache 710 more frequently without checking with the server 715 to determine if the cached data is the most current version. Suppose the average time for getting a response from the server 715 decreases by 50%. The client 705 might choose to use the cache 710 less frequently. For example, it might store data more frequently at the server 715 instead of caching it. It might also choose to more frequently check with the server 715 to determine if a cached object is current. FIG. 8 is a block diagram illustrating a compression interface 805 with multiple implementations in accordance with an exemplary embodiment. In some embodiments, the enhanced storage client may be used to reduce the overhead of large data objects. This may be handled by both data compression and delta encoding. The enhanced storage client may have the ability to compress objects prior to storage and to decompress them upon retrieval. In some embodiments, the compression design may be modular. In some embodiments, the compression interface 805 may be defined. Multiple compression algorithms may be used within our enhanced clients. In order to use a particular compression algorithm, the compression interface 805 may be implemented on top of the compression algorithm. In some embodiments, compression techniques described herein may include adaptive compression in which the amount and degree of compression can be varied based on run-time conditions. In some embodiments, it may be desirable to perform compression when cache space is low (e.g., below a predetermined threshold, where the threshold may be modified by a user), since a compressed object takes up less cache space. Similarly, it may be desirable to perform compression when space in the storage service is low, since a compressed object takes up less storage service space. Sometimes, there is a cost to storing data with the storage service. If this cost goes up, it becomes more desirable to perform compression before storing a data object with the storage service. 
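The modular compression design of FIG. 8 can be sketched along the same lines. In the sketch below, Python's standard zlib module stands in for a gzip-style algorithm 820; it is assumed, not shown, that Snappy 810 and lz4 815 would be wrapped the same way through their respective bindings.

    import abc
    import zlib

    class CompressionInterface(abc.ABC):
        """Interface implemented on top of each particular compression algorithm (cf. 805)."""

        @abc.abstractmethod
        def compress(self, data: bytes) -> bytes: ...

        @abc.abstractmethod
        def decompress(self, data: bytes) -> bytes: ...

    class ZlibCompression(CompressionInterface):
        """Standard-library stand-in for a gzip-style implementation."""

        def __init__(self, level=6):
            self.level = level  # higher level: better compression ratio, more CPU

        def compress(self, data: bytes) -> bytes:
            return zlib.compress(data, self.level)

        def decompress(self, data: bytes) -> bytes:
            return zlib.decompress(data)

    codec = ZlibCompression()
    blob = b"abc" * 1000
    assert codec.decompress(codec.compress(blob)) == blob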
In some embodiments, the bandwidth between the client and the server can affect performance. When that bandwidth is low (e.g., below a predetermined threshold), it becomes more desirable to compress data objects before sending them from the client to the server. Compression may take up CPU cycles. Therefore, when the client CPU is heavily utilized, it may be less desirable to perform compression. When the client CPU is lightly utilized, it becomes more desirable to perform compression. Not all data objects compress equally well. The client can predict from the type of a data object whether it is a good candidate for compression based on empirical evidence it has on how well similar data objects have compressed in the past. If the client determines that little space is likely to be saved by compressing a data object (o1), then it may not be desirable to compress o1, as doing so would incur some CPU overhead. If the client determines that considerable space can be saved by compressing o1, then it may be desirable to compress o1. Examples of compression algorithms that may be used by the compression interface 805 may include, but are not limited to, Snappy 810, lz4 815, and/or gzip 820. In some embodiments, the client may control the amount of compression by varying the frequency with which it will compress a data object. If the client determines that compression is desirable, it may compress data objects frequently (e.g., a set number of times during a given time period). If the client determines that compression is not desirable, it can compress data objects less frequently (e.g., fewer times in a given time period). In some embodiments, the enhanced storage clients may allow different types of compression algorithms to be used. Some compression algorithms are efficient at compressing data, while others are not as efficient but have the advantage of using fewer CPU cycles. The data compression ratio is the uncompressed data size of an object divided by the compressed data size. The data compression ratio is dependent on both the data object and the compression algorithm. In general, an algorithm with a higher compression ratio will result in more compression at the cost of higher CPU overhead. If a data compression algorithm consumes more CPU cycles without improvement in compression ratio, it is probably not a good algorithm to use. The enhanced storage client may have the capability to increase the amount of compression, via some combination of increasing the frequency of data compression and/or using data compression algorithm(s) with a higher compression ratio(s) in response to one or more of the following: 1. The amount of free cache space available to the computer program falls below a threshold. 2. The amount of free space in the storage service available to the computer program falls below a threshold. 3. Available bandwidth between the computer program and the storage service falls below a threshold. 4. A cost for storing data on the storage service increases. 5. The type of the data object currently being stored has a higher compression ratio. 6. The CPU utilization of the client decreases. The enhanced storage clients may have the capability to decrease the amount of compression, via some combination of decreasing the frequency of data compression and/or using data compression algorithm(s) with lower CPU overhead (which generally means a lower compression ratio) in response to one or more of the following: 1. 
The amount of free cache space available to the computer program rises above a threshold. 2. The amount of free space in the storage service available to the computer program rises above a threshold. 3. Available bandwidth between the computer program and the storage service rises above a threshold. 4. A cost for storing data on the storage service decreases. 5. The type of the data object currently being stored has a lower compression ratio. 6. The CPU utilization of the client increases. In some embodiments, the overhead may be reduced by delta encoding. Delta encoding is useful when a client is sending updated objects to the server. Instead of sending the full object each time, the client can send only a delta (e.g., the difference between the current version and the last stored version on the server). In many cases, deltas are only a small fraction of the size of the complete object. If the server supports delta encoding, then the server may make the choice as to whether to decode a delta and store the full object or to just store the delta. In many cases, the server will not have the ability to decode a delta. The client can instruct the server to simply store a delta from a previous version. After a certain number of deltas, the client can send a full object (not just the delta) to the server. That way, the server does not have to keep accumulating deltas. Note that the client can perform all delta encoding and decoding (if necessary). The server does not have to understand how to perform delta encoding or decoding. Delta encoding enables a client storing multiple updates to an object (o1) to send the deltas (e.g., changes) resulting in the new objects d1, d2, . . . , dn instead of the entire copies of the updated objects. Accordingly, less information needs to be sent from the client to the server. At some point, the client might send a full updated object instead of a delta. There are multiple ways in which a client might make a decision to send a full version of an object instead of a delta: 1. The client might wait until the number of previous deltas it has sent since sending the last full version of the object has exceeded a threshold. 2. The client might wait until the total number of bytes contained in deltas exceeds a threshold. 3. The client might make a determination of the cost to construct an updated object by applying deltas to the previous version of the object stored at the server. Once this cost exceeds a threshold, the client then sends a full version of the object instead of a delta. 4. The decision can also be made in a large number of other ways within the spirit and scope of the invention. When a client sends a delta di for o1, the server may retain the previous version of o1 and all other deltas needed to construct the updated version of o1 from the previous version. In order to reconstruct an updated version of o1 from an earlier version, deltas are used to construct the updated version of o1. The updated version can be constructed by the server, the client, or by another party. The fact that the server does not have to apply deltas to reconstruct the object means that enhanced storage clients can use delta encoding without the server having special support for delta encoding. The server can be unaware that delta encoding is actually being implemented. In some cases, a client may have to determine the value of o1 when it does not have a copy of o1 stored locally. Instead, o1 is represented by a previous version and multiple deltas on the server. 
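A minimal Python sketch of the delta bookkeeping just described follows. The server is treated as an opaque key-value store that cannot decode deltas; the delta format (a naive per-key difference between dict-valued objects, ignoring deletions) and all names are illustrative assumptions. The apply_deltas helper corresponds to the client-side reconstruction described next.

    class DeltaEncodingClient:
        """Client-side delta encoding over a server that only stores opaque values."""

        MAX_DELTAS = 5  # send a full object after this many deltas (threshold is illustrative)

        def __init__(self, server):
            self.server = server   # assumed to expose put(key, value) and get(key)
            self.deltas_sent = {}  # object key -> deltas sent since the last full copy

        @staticmethod
        def make_delta(old, new):
            # Record only the fields that changed between two dict-valued versions.
            return {k: v for k, v in new.items() if old.get(k) != v}

        @staticmethod
        def apply_deltas(base, deltas):
            # Reconstruct the current version by applying deltas to a base version.
            obj = dict(base)
            for delta in deltas:
                obj.update(delta)
            return obj

        def store_update(self, key, old, new):
            n = self.deltas_sent.get(key, 0)
            if n >= self.MAX_DELTAS:
                self.server.put(key, new)  # full object, so deltas stop accumulating
                self.deltas_sent[key] = 0
            else:
                self.server.put(f"{key}:delta:{n}", self.make_delta(old, new))
                self.deltas_sent[key] = n + 1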
If the server does not have the capability to decode deltas, the previous version of o1 and the subsequent deltas can be retrieved by the client from the server. The client then determines an updated version of o1 by applying the deltas to the previous version of o1. When the client successfully stores a full updated version of o1 instead of a delta on the server, the previous version of o1 stored on the server, as well as previous deltas applicable to this previous version of o1, can be deleted from the server. This saves space. FIG. 9 is a block diagram illustrating an environment 900 with an encryption interface with multiple implementations in accordance with an exemplary embodiment. In some embodiments, users may want all data stored persistently to be encrypted, which may be provided by the enhanced storage client. In some embodiments, the encryption design may be modular. An encryption interface 905 may be defined. Multiple encryption algorithms can be used within the enhanced clients. In order to use a particular encryption algorithm, the encryption interface may be implemented on top of the particular encryption algorithm. In some embodiments, users can encrypt data using encryption algorithms, such as AES 128 bits 910, AES 256 bits 915, or Blowfish 448 bits 920. Using the enhanced storage client, users may encrypt data before it is ever stored in a server, or before the data is cached. Referring to FIG. 10, there is shown an embodiment of a processing system 1000 for implementing the teachings herein. In this embodiment, the system 1000 has one or more central processing units (processors) 1001a, 1001b, 1001c, etc. (collectively or generically referred to as processor(s) 1001). In one embodiment, each processor 1001 may include a reduced instruction set computer (RISC) microprocessor. Processors 1001 are coupled to system memory 1014 and various other components via a system bus 1013. Read only memory (ROM) 1002 is coupled to the system bus 1013 and may include a basic input/output system (BIOS), which controls certain basic functions of system 1000. FIG. 10 further depicts an input/output (I/O) adapter 1007 and a network adapter 1006 coupled to the system bus 1013. I/O adapter 1007 may be a small computer system interface (SCSI) adapter that communicates with a hard disk 1003 and/or tape storage drive 1005 or any other similar component. I/O adapter 1007, hard disk 1003, and tape storage device 1005 are collectively referred to herein as mass storage 1004. Operating system 1020 for execution on the processing system 1000 may be stored in mass storage 1004. A network adapter 1006 interconnects bus 1013 with an outside network 1016 enabling data processing system 1000 to communicate with other such systems. A screen (e.g., a display monitor) 1015 is connected to system bus 1013 by display adapter 1012, which may include a graphics adapter to improve the performance of graphics intensive applications and a video controller. In one embodiment, adapters 1007, 1006, and 1012 may be connected to one or more I/O buses that are connected to system bus 1013 via an intermediate bus bridge (not shown). Suitable I/O buses for connecting peripheral devices such as hard disk controllers, network adapters, and graphics adapters typically include common protocols, such as the Peripheral Component Interconnect (PCI). Additional input/output devices are shown as connected to system bus 1013 via user interface adapter 1008 and display adapter 1012. 
A keyboard 1009, mouse 1010, and speaker 1011 may all be interconnected to bus 1013 via user interface adapter 1008, which may include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit. In exemplary embodiments, the processing system 1000 includes a graphics-processing unit 1030. Graphics processing unit 1030 is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. In general, graphics-processing unit 1030 is very efficient at manipulating computer graphics and image processing, and has a highly parallel structure that makes it more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel. Thus, as configured in FIG. 10, the system 1000 includes processing capability in the form of processors 1001, storage capability including system memory 1014 and mass storage 1004, input means such as keyboard 1009 and mouse 1010, and output capability including speaker 1011 and display 1015. In one embodiment, a portion of system memory 1014 and mass storage 1004 collectively store an operating system such as the AIX® operating system from IBM Corporation to coordinate the functions of the various components shown in FIG. 10. The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. 16367400 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Dec 18th, 2020 12:00AM https://www.uspto.gov?id=US11315938-20220426 Stacked nanosheet rom A semiconductor device including a first nanosheet stack of two memory cells including a lower nanosheet stack on a substrate including alternating layers of a first work function metal and a semiconductor channel material vertically aligned and stacked one on top of another, and an upper nanosheet stack including alternating layers of a second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the upper nanosheet stack vertically aligned and stacked on the lower nanosheet stack, where a first memory cell of the two memory cells including the lower nanosheet stack includes a first threshold voltage and a second memory cell of the two memory cells including the upper nanosheet stack includes a second threshold voltage, where the first threshold voltage is different than the second threshold voltage. Forming a semiconductor device including a first nanosheet stack of two memory cells. 11315938 1. A semiconductor device comprising: a first nanosheet stack comprising two memory cells, the first nanosheet stack comprising: a lower nanosheet stack on a substrate comprising alternating layers of a first work function metal and a semiconductor channel material vertically aligned and stacked one on top of another; and an upper nanosheet stack comprising alternating layers of a second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the upper nanosheet stack vertically aligned and stacked on top of the lower nanosheet stack, wherein at least a portion of the first work function metal and a portion of the second work function metal are disposed on opposite sides of the nanosheet stack, wherein a first memory cell of the two memory cells comprising the lower nanosheet stack comprises a first threshold voltage and a second memory cell of the two memory cells comprising the upper nanosheet stack comprises a second threshold voltage, and wherein the first threshold voltage is different than the second threshold voltage. 2. The semiconductor device according to claim 1, further comprising: a spacer separating the lower nanosheet stack from the upper nanosheet stack. 3. The semiconductor device according to claim 2, further comprising: a lower isolation disposed on a first side of the first nanosheet stack in direct contact with sidewalls of the lower nanosheet stack, wherein the lower isolation extends from the spacer down to the substrate; and an upper isolation disposed on a second side of the first nanosheet stack in direct contact with sidewalls of the upper nanosheet stack, wherein the upper isolation extends above the first nanosheet stack from the spacer, wherein the lower isolation and the upper isolation are disposed on opposite sides of the first nanosheet stack. 4. The semiconductor device according to claim 1, further comprising: a gate isolation structure separating the first nanosheet stack from adjacent nanosheet stacks in an array of nanosheet stacks. 5. The semiconductor device according to claim 1, further comprising: a first contact connected to the portion of the first work function metal; and a second contact connected to the portion of the second work function metal. 6. 
The semiconductor device according to claim 1, further comprising: a second nanosheet stack comprising: a second lower nanosheet stack on the substrate comprising alternating layers of the first work function metal and the semiconductor channel material vertically aligned and stacked one on top of another; and a second upper nanosheet stack on the substrate comprising alternating layers of the first work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the second upper nanosheet stack vertically aligned and stacked on top of the second lower nanosheet stack. 7. The semiconductor device according to claim 1, further comprising: a third nanosheet stack of two memory cells comprising: a third lower nanosheet stack on the substrate comprising alternating layers of the second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another; a third upper nanosheet stack on the substrate comprising alternating layers of the second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the third upper nanosheet stack vertically aligned and stacked on top of the third lower nanosheet stack. 8. A semiconductor device comprising: a lower nanosheet stack on a substrate comprising alternating layers of a first work function metal and a semiconductor channel material vertically aligned and stacked one on top of another; and an upper nanosheet stack comprising alternating layers of a second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the upper nanosheet stack vertically aligned and stacked on top of the lower nanosheet stack; a single contact connected to both the first work function metal of the lower nanosheet stack and the second work function metal of the upper nanosheet stack, wherein at least a portion of the first work function metal and a portion of the second work function metal are disposed on opposite sides of the nanosheet stack, wherein a first memory cell of the two memory cells comprising the lower nanosheet stack comprises a first threshold voltage and a second memory cell of the two memory cells comprising the upper nanosheet stack comprises a second threshold voltage, wherein the first threshold voltage is different than the second threshold voltage. 9. The semiconductor device according to claim 8, further comprising: a spacer between the lower nanosheet stack and the upper nanosheet stack, wherein vertical sides of the spacer are coplanar with vertical sides of the lower nanosheet stack and vertical sides of the upper nanosheet stack. 10. The semiconductor device according to claim 8, further comprising: a gate isolation structure between adjacent nanosheet stacks in an array of nanosheet stacks. 11. The semiconductor device according to claim 8, further comprising: a lower isolation vertically coplanar with a side of the lower nanosheet stack; and an upper isolation vertically coplanar with an opposite side of the upper nanosheet stack. 12. 
The semiconductor device according to claim 8, further comprising: a second nanosheet stack of two memory cells comprising: a second lower nanosheet stack on the substrate comprising alternating layers of the first work function metal and the semiconductor channel material vertically aligned and stacked one on top of another; a second upper nanosheet stack comprising alternating layers of the first work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the second upper nanosheet stack vertically aligned and stacked on top of the second lower nanosheet stack; and a second single contact connected to opposite sides of the second nanosheet stack to both the first work function metal of the second lower nanosheet stack and to the first work function metal of the second upper nanosheet stack. 13. The semiconductor device according to claim 8, further comprising: a third nanosheet stack of two memory cells comprising: a third lower nanosheet stack on the substrate comprising alternating layers of the second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another; a third upper nanosheet stack on the substrate comprising alternating layers of the second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the third upper nanosheet stack vertically aligned and stacked on top of the third lower nanosheet stack; and a third single contact connected to opposite sides of the third nanosheet stack to both the second work function metal of the third lower nanosheet stack and to the second work function metal of the third upper nanosheet stack. 14. A method comprising: forming a nanosheet stack on a substrate, the nanosheet stack comprising an upper nanosheet stack vertically aligned above a lower nanosheet stack, the upper nanosheet stack and the lower nanosheet stack each comprising alternating layers of a sacrificial material and a semiconductor channel material vertically aligned and stacked one on top of another; removing the sacrificial material layers of the lower nanosheet stack; patterning a first work function metal surrounding the semiconductor channel layers of the lower nanosheet stack; removing the sacrificial material layers of the upper nanosheet stack; and patterning a second work function metal surrounding the semiconductor channel layers of the upper nanosheet stack, wherein a first memory cell comprising the lower nanosheet stack comprises a first threshold voltage and a second memory cell of the upper nanosheet stack comprises a second threshold voltage, wherein the first threshold voltage is different than the second threshold voltage. 15. The method according to claim 14, further comprising: forming a first contact connected to the first work function metal on a side of the lower nanosheet stack; and forming a second contact connected to the second work function metal on an opposite side of the upper nanosheet stack. 16. 
The method according to claim 15, further comprising: applying a read voltage to the first contact and to the second contact, wherein the first threshold voltage is greater than the read voltage and the read voltage is greater than the second threshold voltage; determining the upper nanosheet stack is in a first memory state, based on the upper nanosheet stack being off, dependent upon the first threshold voltage being greater than the read voltage; and determining the lower nanosheet stack is in a second memory state, based on the lower nanosheet stack being on, dependent upon the second threshold voltage being less than the read voltage. 17. The method according to claim 14, further comprising: forming a second nanosheet stack on the substrate, the second nanosheet stack comprising a second upper nanosheet stack vertically aligned above a second lower nanosheet stack, the second upper nanosheet stack and the second lower nanosheet stack each comprising alternating layers of the sacrificial material and the semiconductor channel material vertically aligned and stacked one on top of another; removing the sacrificial material layers of the second nanosheet stack; patterning the first work function metal surrounding the semiconductor channel layers of the second nanosheet stack. 18. The method according to claim 14, further comprising: forming a third nanosheet stack on the substrate, the third nanosheet stack comprising a third upper nanosheet stack vertically aligned above a third lower nanosheet stack, the third upper nanosheet stack and the third lower nanosheet stack each comprising alternating layers of a sacrificial material and a semiconductor channel material vertically aligned and stacked one on top of another; removing the sacrificial material layers of the third nanosheet stack; patterning the second work function metal surrounding the semiconductor channel layers of the third nanosheet stack. 19. The method according to claim 14, further comprising: forming upper source drain regions extending laterally from either end of the semiconductor channel material layers of the upper nanosheet stack; forming lower source drain regions extending laterally from either end of the semiconductor channel material layers of the lower nanosheet stack. 20. The method according to claim 14, further comprising: forming a first contact connected to both the first work function metal on a side of the lower nanosheet stack and to the second work function metal on an opposite side of the upper nanosheet stack. 20 BACKGROUND The present invention relates, generally, to the field of semiconductor manufacturing, and more particularly to fabricating read-only memory (hereinafter “ROM”) in stacked nanosheet field effect transistors. Complementary metal-oxide-semiconductor (CMOS) technology is commonly used for field effect transistors (hereinafter “FET”) as part of advanced integrated circuits (hereinafter “IC”), such as central processing units (hereinafter “CPUs”), memory, storage devices, and the like. As demands to reduce the dimensions of transistor devices continue, nanosheet FETs help achieve a reduced FET device footprint while maintaining FET device performance. A nanosheet device contains one or more layers of semiconductor channel material portions having a vertical thickness that is substantially less than its width. A nanosheet FET includes a plurality of stacked nanosheets extending between a pair of source/drain epitaxial regions. 
The device may be a gate all around device or transistor in which a gate surrounds a portion of the nanosheet channel. Memory cells, including non-volatile memory, are needed in stacked nanosheet devices. Co-integration of non-volatile memories such as ROM while fabricating stacked nanosheet FETs would provide more integrated circuitry. SUMMARY According to an embodiment, a semiconductor device is provided. The semiconductor device including a first nanosheet stack including two memory cells, the first nanosheet stack including a lower nanosheet stack on a substrate including alternating layers of a first work function metal and a semiconductor channel material vertically aligned and stacked one on top of another, and an upper nanosheet stack including alternating layers of a second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the upper nanosheet stack vertically aligned and stacked on top of the lower nanosheet stack, where at least a portion of the first work function metal and a portion of the second work function metal are disposed on opposite sides of the nanosheet stack, where a first memory cell of the two memory cells including the lower nanosheet stack includes a first threshold voltage and a second memory cell of the two memory cells including the upper nanosheet stack includes a second threshold voltage, where the first threshold voltage is different than the second threshold voltage. According to an embodiment, a semiconductor device is provided. The semiconductor device including a lower nanosheet stack on a substrate including alternating layers of a first work function metal and a semiconductor channel material vertically aligned and stacked one on top of another, and an upper nanosheet stack including alternating layers of a second work function metal and the semiconductor channel material vertically aligned and stacked one on top of another, the upper nanosheet stack vertically aligned and stacked on top of the lower nanosheet stack, a single contact connected to both the first work function metal of the lower nanosheet stack and the second work function metal of the upper nanosheet stack, wherein at least a portion of the first work function metal and a portion of the second work function metal are disposed on opposite sides of the nanosheet stack, where a first memory cell of the two memory cells including the lower nanosheet stack includes a first threshold voltage and a second memory cell of the two memory cells including the upper nanosheet stack includes a second threshold voltage, where the first threshold voltage is different than the second threshold voltage. According to an embodiment, a method is provided. 
The method including forming a nanosheet stack on a substrate, the nanosheet stack including an upper nanosheet stack vertically aligned above a lower nanosheet stack, the upper nanosheet stack and the lower nanosheet stack each including alternating layers of a sacrificial material and a semiconductor channel material vertically aligned and stacked one on top of another, removing the sacrificial material layers of the lower nanosheet stack, patterning a first work function metal surrounding the semiconductor channel layers of the lower nanosheet stack, removing the sacrificial material layers of the upper nanosheet stack, and patterning a second work function metal surrounding the semiconductor channel layers of the upper nanosheet stack, where a first memory cell including the lower nanosheet stack includes a first threshold voltage and a second memory cell of the upper nanosheet stack includes a second threshold voltage, where the first threshold voltage is different than the second threshold voltage. BRIEF DESCRIPTION OF THE DRAWINGS These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings. The various features of the drawings are not to scale as the illustrations are for clarity in facilitating one skilled in the art in understanding the invention in conjunction with the detailed description. In the drawings: FIG. 1 illustrates a cross-sectional view of a semiconductor structure at an intermediate stage of fabrication, according to an exemplary embodiment; FIG. 2 illustrates a cross-sectional view of the semiconductor structure and illustrates selective removal of semiconductor material layers, according to an exemplary embodiment; FIG. 3 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of side spacers, according to an exemplary embodiment; FIG. 4 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of a dummy gate conductor and a gate hard mask, according to an exemplary embodiment; FIG. 5 illustrates a cross-sectional view of the semiconductor structure and illustrates removal of select silicon germanium layers, according to an exemplary embodiment; FIG. 6 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of spacers, according to an exemplary embodiment; FIG. 7 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of a vertical isolation region, according to an exemplary embodiment; FIG. 8 illustrates a cross-sectional view of the semiconductor structure and illustrates selective removal of sacrificial semiconductor layers, according to an exemplary embodiment; FIG. 9 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of a first work function metal, according to an exemplary embodiment; FIG. 10 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of a first organic polymer layer, according to an exemplary embodiment; FIG. 11 illustrates a cross-sectional view of the semiconductor structure and illustrates removal of a portion of the work function metal, according to an exemplary embodiment; FIG. 12 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of a first patterning layer and an opening, according to an exemplary embodiment; 
FIG. 13 illustrates a cross-sectional view of the semiconductor structure and illustrates removal of portions of the work function metal, according to an exemplary embodiment; FIG. 14 illustrates a cross-sectional view of the semiconductor structure, and illustrates removal of the first patterning layer and removal of the first organic polymer layer, according to an exemplary embodiment; FIG. 15 illustrates a cross-sectional view of the semiconductor structure, and illustrates formation of a second organic polymer layer, formation of a second patterning layer and formation of two openings, according to an exemplary embodiment; FIG. 16 illustrates a cross-sectional view of the semiconductor structure and illustrates removal of portions of the work function metal, according to an exemplary embodiment; FIG. 17 illustrates a cross-sectional view of the semiconductor structure, and illustrates removal of the second patterning layer and removal of the second organic polymer layer, according to an exemplary embodiment; FIG. 18 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of a second work function metal, according to an exemplary embodiment; FIG. 19 illustrates a cross-sectional view of the semiconductor structure and illustrates formation of contacts, according to an exemplary embodiment; and FIGS. 20A, 20B and 20C each illustrate a view of the semiconductor structure. FIG. 20A illustrates an upper view of the semiconductor structure. FIGS. 20B and 20C each illustrate a cross-sectional view of the semiconductor structure along sections X-X and Y-Y, respectively, of the semiconductor structure. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numbers may be repeated among the figures to indicate corresponding or analogous features. DETAILED DESCRIPTION Detailed embodiments of the claimed structures and methods are disclosed herein; however, it can be understood that the disclosed embodiments are merely illustrative of the claimed structures and methods that may be embodied in various forms. This invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments. References in the specification to “one embodiment”, “an embodiment”, “an example embodiment”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. It will be understood that when an element such as a layer, region or substrate is referred to as being “on” or “over” another element, it can be directly on the other element or intervening elements may also be present. 
In contrast, when an element is referred to as being “directly on” or “directly over” another element, there are no intervening elements present. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. In the interest of not obscuring the presentation of embodiments of the present invention, in the following detailed description, some processing steps or operations that are known in the art may have been combined together for presentation and for illustration purposes and in some instances may have not been described in detail. In other instances, some processing steps or operations that are known in the art may not be described at all. It should be understood that the following description is rather focused on the distinctive features or elements of various embodiments of the present invention. A nanosheet field effect transistor (hereinafter “FET”) may be formed from alternating layers of silicon and silicon germanium, which are then formed into stacked nanosheets. A gate all around structure may be formed on all vertical sides and on a horizontal top surface of a section of the nanosheets. Source-drain structures may be formed at the opposite ends of the stacked nanosheet structures. Read-only memory (hereinafter “ROM”) may be used for firmware, which provides programming for computer start-up, among other uses. Fabrication of FETs used as non-volatile memory, such as ROM, is an approach for increasing density of semiconductors. Stacking two nanosheet FETs, each used as ROM, with an isolation layer between them, further increases density. Co-fabrication of nanosheet FETs used as FETs with nanosheet FETs programmed as ROM can be done using simple process modifications and helps to enable combined stacked nanosheet circuits with fewer fabrication steps than separately fabricating nanosheet FETs and other types of ROM. The present invention relates, generally, to the field of semiconductor manufacturing, and more particularly to fabricating ROM in nanosheet FETs. The stacked nanosheet FET ROM may include two negative channel FETs (hereinafter “n-FETs”), each used as a ROM memory cell, stacked on top of each other. Alternatively, two positive channel FETs (hereinafter “p-FETs”), each used as a ROM memory cell, may be stacked on top of each other. Programming of each ROM memory cell may be done during device fabrication by utilizing a different work function metal for different memory states, which results in different threshold voltages. A memory state may be “read” based on a memory cell being turned “on” or “off” at a read voltage which is in between the different threshold voltage values. More specifically, those ROM memory cells which have a threshold voltage above the read voltage will be off at the read voltage and may be considered in a first memory state, and those ROM memory cells which have a threshold voltage below the read voltage will turn on at the read voltage and may be considered in a second memory state. 
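As a rough illustration of this read rule (not part of the patent; the voltage values below are hypothetical), the following Python sketch models a ROM cell whose threshold voltage is fixed at fabrication time by the choice of work function metal, and classifies its memory state at a read voltage chosen between the two threshold voltages:

# Illustrative sketch of the read rule described above; all values are hypothetical.
VT1 = 0.6     # threshold voltage (V) set by the first work function metal
VT2 = 0.2     # threshold voltage (V) set by the second work function metal
V_READ = 0.4  # read voltage chosen between VT2 and VT1

def read_state(vt, v_read=V_READ):
    # A cell whose threshold voltage exceeds the read voltage stays off
    # (first memory state); otherwise it turns on (second memory state).
    return "first memory state (off)" if vt > v_read else "second memory state (on)"

print(read_state(VT1))  # first memory state (off)
print(read_state(VT2))  # second memory state (on)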
The threshold voltage of a FET or a ROM memory cell is determined by properties of a composition of a work function metal used in the FET or ROM memory cell, along with various other device/material properties including, but not limited to, channel doping, growth conditions of a high-k dielectric, charge distribution within the high-k dielectric, the spacing of the high-k/channel interface, and the presence and properties of any interfacial oxide formed between the high-k dielectric and the channel. In an embodiment, when fabricating ROM memory cells, all parts of the ROM memory cells may be fabricated at the same time with the same materials, and control of a threshold voltage for different ROM memory cells may be managed by using different work function metals for different ROM memory cells. Specifically, according to embodiments disclosed herein, a first set of ROM memory cells may have a first work function metal surrounding the channel region, and a second set of ROM memory cells may have a second work function metal surrounding the channel region. The first and the second set of ROM memory cells may be fabricated simultaneously and from the same materials, except for the formation of the work function metal for each. The resulting first set of ROM memory cells and the second set of ROM memory cells may each have a different threshold voltage, dependent, in this set of circumstances, on the respective work function metal surrounding the channel region of the ROM memory cell. Embodiments of the present invention disclose a structure and a method of forming a double stacked FET nanosheet ROM, which are described in detail below by referring to the accompanying drawings in FIGS. 1-20C, in accordance with an illustrative embodiment. Referring now to FIG. 1, a semiconductor structure 100 (hereinafter “structure”) at an intermediate stage of fabrication is shown according to an exemplary embodiment. FIG. 1 is a cross-sectional view of the structure 100 parallel with subsequently formed gate lines. The structure 100 of FIG. 1 may be formed or provided. The structure 100 may include alternating layers of sacrificial semiconductor material and semiconductor channel material stacked one on top of another, covered by a hard mask 20 on a substrate 10. It should be noted that, while a limited number of alternating layers are depicted, any number of alternating layers may be formed. The substrate 10 may be, for example, a bulk substrate, which may be made from any of several known semiconductor materials such as, for example, silicon, germanium, silicon-germanium alloy, and compound (e.g. III-V and II-VI) semiconductor materials. Non-limiting examples of compound semiconductor materials include gallium arsenide, indium arsenide, indium phosphide, and indium gallium arsenide. Typically, the substrate 10 may be approximately, but is not limited to, several hundred microns thick. In other embodiments, the substrate 10 may be a layered semiconductor such as a silicon-on-insulator or SiGe-on-insulator, where a buried insulator layer separates a base substrate from a top semiconductor layer. 
The alternating layers of sacrificial semiconductor material and semiconductor channel material may include a nanosheet stack sacrificial layer 12 (hereinafter “stack sacrificial layer”) on the substrate 10, covered by a sacrificial semiconductor material layer 16 (hereinafter “sacrificial layer”), covered by a semiconductor channel material layer 18 (hereinafter “channel layer”), covered by a sacrificial layer 16, covered by a channel layer 18, covered by a sacrificial layer 16, covered by a nanosheet stack sacrificial layer 14 (hereinafter “stack sacrificial layer”). The stack sacrificial layer 14 is covered by a sacrificial layer 16, covered by a channel layer 18, covered by a sacrificial layer 16, covered by a channel layer 18. The hard mask 20 may cover the uppermost channel layer 18. The stack sacrificial layers 12, 14 may, for example, be silicon germanium with a germanium concentration of about 60 atomic percent, although concentrations greater than or less than 60 atomic percent may be used. The stack sacrificial layers 12, 14 can each be formed using an epitaxial growth technique. The stack sacrificial layers 12, 14 will each subsequently be removed selective to the remaining alternating layers, as described below. In an embodiment, the stack sacrificial layers 12, 14 may be the same material. The terms “epitaxially growing and/or depositing” and “epitaxially grown and/or deposited” mean the growth of a semiconductor material on a deposition surface of a semiconductor material, in which the semiconductor material being grown has the same crystalline characteristics as the semiconductor material of the deposition surface. In an epitaxial deposition technique, the chemical reactants provided by the source gases are controlled and the system parameters are set so that the depositing atoms arrive at the deposition surface of the semiconductor substrate with sufficient energy to move around on the surface and orient themselves to the crystal arrangement of the atoms of the deposition surface. Therefore, an epitaxial semiconductor material has the same crystalline characteristics as the deposition surface on which it is formed. Examples of various epitaxial growth techniques include, for example, rapid thermal chemical vapor deposition (RTCVD), low-energy plasma deposition (LEPD), ultra-high vacuum chemical vapor deposition (UHVCVD), low pressure chemical vapor deposition (LPCVD), atmospheric pressure chemical vapor deposition (APCVD) and molecular beam epitaxy (MBE). The temperature for epitaxial deposition typically ranges from approximately 550° C. to approximately 900° C. Although higher temperature typically results in faster deposition, the faster deposition may result in crystal defects and film cracking. The epitaxial growth of the first and second semiconductor materials that provide the sacrificial semiconductor material layers and the semiconductor channel material layers, respectively, can be performed utilizing any well-known precursor gas or gas mixture. Carrier gases like hydrogen, nitrogen, helium and argon can be used. Each sacrificial layer 16 is composed of a first semiconductor material which differs in composition from at least an upper portion of the substrate 10, the channel layer 18 and the stack sacrificial layers 12, 14. In an embodiment, each sacrificial layer 16 may be a silicon-germanium semiconductor alloy and have a germanium concentration less than 50 atomic percent. 
In another example, each sacrificial layer 16 may have a germanium concentration ranging from about 20 atomic percent to about 40 atomic percent. Each sacrificial layer 16 can be formed using known deposition techniques or an epitaxial growth technique as described above. Each channel layer 18 is composed of a second semiconductor material which differs in composition from at least the upper portion of the substrate 10, the stack sacrificial layer 12 and the sacrificial layer 16. Each channel layer 18 has a different etch rate than the first semiconductor material of the sacrificial layer 16 and has a different etch rate than the stack sacrificial layers 12, 14. The second semiconductor material can be, for example, silicon. The second semiconductor material, for each channel layer 18, can be formed using known deposition techniques or an epitaxial growth technique as described above. The alternating layers of sacrificial layer 16, channel layer 18 and the stack sacrificial layers 12, 14 can be formed by sequential epitaxial growth of alternating layers of the first semiconductor material, the second semiconductor material and the nanosheet stack sacrificial layer material. The stack sacrificial layers 12, 14 may have a thickness ranging from about 5 nm to about 15 nm. The sacrificial layers 16 may have a thickness ranging from about 5 nm to about 12 nm, while the channel layers 18 may have a thickness ranging from about 3 nm to about 12 nm. Each sacrificial layer 16 may have a thickness that is the same as, or different from, a thickness of each channel material layer 18. In an embodiment, each sacrificial layer 16 has an identical thickness. In an embodiment, each channel layer 18 has an identical thickness. The hard mask 20 may be formed over an upper horizontal surface of the alternating layers of sacrificial layers 16, channel layers 18 and the stack sacrificial layers 12, 14 by methods known in the art. Referring now to FIG. 2, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 2, the alternating layers of sacrificial layers 16, channel layers 18, the stack sacrificial layers 12, 14 and the hard mask 20 may be formed into nanosheet stacks, each covered with the hard mask 20, by patterning the hard mask 20 and subsequent removal of portions of each layer. A trench 30 may be formed between each nanosheet stack by an anisotropic etching technique, such as, for example, reactive ion etching (RIE), stopping after etching into a portion of the substrate 10 for subsequent formation of a shallow trench isolation region between each nanosheet stack. Each nanosheet stack may include the stack sacrificial layer 12 covered by a lower nanosheet stack 22, covered by the stack sacrificial layer 14, covered by an upper nanosheet stack 24, covered by the hard mask 20. In FIG. 1, and only by way of an example, the lower nanosheet stack 22 includes three sacrificial layers 16 alternating with two channel layers 18, and the upper nanosheet stack 24 includes two sacrificial layers 16 alternating with two channel layers 18. The lower nanosheet stack 22 may be separated from the upper nanosheet stack 24 by the stack sacrificial layer 14. The material stacks that can be employed in embodiments of the present invention are not limited to the specific embodiment illustrated in FIG. 1. The lower nanosheet stack 22 and the upper nanosheet stack 24 each can include any number of sacrificial layers 16 and channel layers 18. 
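To make the FIG. 1 layer ordering easier to follow, the Python sketch below lists the stack bottom-up by reference number. It is purely illustrative, and the thicknesses are hypothetical values picked from the ranges stated above rather than values from the patent:

# Bottom-up layer ordering of FIG. 1; thicknesses are hypothetical picks
# from the ranges given in the text, not values from the patent.
stack = [
    (10, "substrate", None),
    (12, "stack sacrificial layer (SiGe, ~60% Ge)", 10),
    (16, "sacrificial layer (SiGe, 20-40% Ge)", 8),
    (18, "channel layer (Si)", 6),
    (16, "sacrificial layer", 8),
    (18, "channel layer", 6),
    (16, "sacrificial layer", 8),
    (14, "stack sacrificial layer (SiGe, ~60% Ge)", 10),
    (16, "sacrificial layer", 8),
    (18, "channel layer", 6),
    (16, "sacrificial layer", 8),
    (18, "channel layer", 6),
    (20, "hard mask", None),
]
for ref, role, t_nm in stack:
    print(f"{ref:>3}  {role}" + (f"  ~{t_nm} nm" if t_nm else ""))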
The nanosheet stack is used to produce a gate all around device that includes vertically stacked semiconductor channel material nanosheets for a p-FET or an n-FET device. The structure 100 may include Structure A, Structure B, Structure C and Structure D. The Structures A, B, C, D may each include a nanosheet stack with the hard mask 20 on an upper horizontal surface of the nanosheet stack. The Structures A, B, C, D are the same at this point of fabrication and remain identical unless otherwise noted. There may be any number of Structures A, B, C, D on the structure 100. In an embodiment, any of the upper nanosheet stack 24 and any of the lower nanosheet stack 22 of each Structure A, B, C, D may be formed as an individual memory cell, stacked one on top of the other. As shown in FIG. 2, a total of eight memory cells may be formed, two FETs vertically stacked and aligned in each of the Structures A, B, C, D. Referring now to FIG. 3, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 3, a shallow trench isolation region 32 (hereinafter “STI”) may be formed between adjacent nanosheet stacks in a portion of the trench 30, and a lower side spacer 34 and an upper side spacer 36 may each be selectively formed on portions of a vertical side surface of the nanosheet stack. The STI 32 may be a dielectric material and may be formed in a portion of the trench 30 between adjacent nanosheet stacks using known patterning and deposition techniques. Adjacent nanosheet stacks, shown as the Structures A, B, C, D, are isolated from one another by the STI 32. An upper horizontal surface of the STI 32 may be coplanar with an upper horizontal surface of the substrate 10, and also coplanar with a lower horizontal surface of the stack sacrificial layer 12. The lower side spacer 34 and the upper side spacer 36 may each be formed after several processes, including, for example, conformally depositing or growing a dielectric, performing an anisotropic etch process to form the lower side spacer 34 adjacent to a first vertical side surface of the nanosheet stack, forming a mask which protects the lower side spacer 34 on the first vertical side surface of the nanosheet stack and exposes a second side of the nanosheet stack, and selectively etching a portion of the conformal layer of spacer material to form the upper side spacer 36 on an upper portion of the second side of the nanosheet stack. The lower side spacer 34 and the upper side spacer 36 may include any dielectric material such as silicon nitride and may include a single layer or may include multiple layers of dielectric material. In an embodiment, the lower side spacer 34 and the upper side spacer 36 may be the same material. The lower side spacer 34 may be formed adjacent to a first vertical side surface of the stack sacrificial layer 12, a first vertical side surface of the lower nanosheet stack 22 and a first vertical side surface of the stack sacrificial layer 14. A thickness of the lower side spacer 34 may range from about 6 nm to about 12 nm. The upper side spacer 36 may be formed adjacent to a second vertical side surface of the stack sacrificial layer 14, a second vertical side surface of the upper nanosheet stack 24 and a second vertical side surface of the hard mask 20. A thickness of the upper side spacer 36 may range from about 6 nm to about 12 nm. The lower side spacer 34 and the upper side spacer 36 each provide a barrier for subsequent processing as described below. 
The lower side spacer 34 and the upper side spacer 36, in combination with deposition of additional layers and selective etching, may enable selective processing of the lower nanosheet stack 22 and the upper nanosheet stack 24, allowing different processing steps for each. Referring now to FIG. 4, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 4, the hard mask 20 may be removed, a sacrificial gate 44 may be formed, and a gate hard mask 46 may be formed. The hard mask 20 may be removed via a standard etch process, exposing an upper horizontal surface of the upper nanosheet stack 24 and a horizontal side surface of the upper side spacer 36. The sacrificial gate 44 may include a single sacrificial material or a stack of two or more sacrificial materials. The at least one sacrificial material can be formed by forming a blanket layer (or layers) of a material (or various materials) and then patterning the material (or various materials) by lithography and an etch. The sacrificial gate 44 can include any material including, for example, polysilicon, amorphous silicon, or multilayered combinations thereof. The sacrificial gate 44 can be formed using any deposition technique including, for example, chemical vapor deposition (CVD), physical vapor deposition (PVD), high density plasma (HDP) deposition, and spin on techniques. Optionally, a gate dielectric layer and a gate cap may be formed as part of the sacrificial gate 44 in accordance with known techniques. In an embodiment, the sacrificial gate 44 is deposited with a thickness sufficient to fill, or substantially fill, the spaces between adjacent Structures A, B, C, D and cover a horizontal upper surface of the STI 32. The sacrificial gate 44 may be adjacent to a vertical side surface and a horizontal upper surface of the lower side spacer 34. The sacrificial gate 44 may be adjacent to the second vertical side surface of the lower nanosheet stack 22. The sacrificial gate 44 may be adjacent to the first vertical side surface, a portion of the second vertical side surface, a horizontal upper surface and a horizontal lower surface of the upper side spacer 36. The sacrificial gate 44 may be adjacent to the first vertical side surface and the upper horizontal surface of the upper nanosheet stack 24. The sacrificial gate 44 may be much thicker than the underlying structure and may have a height between 100 nm and 150 nm above the nanosheet stack. The gate hard mask 46 may be formed over a horizontal upper surface of the sacrificial gate 44 by methods known in the art. Referring now to FIG. 5, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 5, the stack sacrificial layer 12 and the stack sacrificial layer 14 may be selectively removed using one or more known techniques. The stack sacrificial layers 12, 14 are both removed selective to the sacrificial layers 16, the channel layers 18, the STI 32, the lower side spacer 34, the upper side spacer 36, the sacrificial gate 44 and the gate hard mask 46. For example, a dry etching technique can be used to selectively remove the stack sacrificial layers 12, 14, such as, for example, a vapor-phase HCl dry etch. Referring now to FIG. 6, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 6, a lower isolation 52 and an upper isolation 54 may be formed. The lower isolation 52 may be formed where the stack sacrificial layer 12 has been removed. 
The lower isolation 52 may be formed between the lower nanosheet stack 22, the substrate 10, the lower side spacer 34 and the sacrificial gate 44. The upper isolation 54 may be formed where the stack sacrificial layer 14 has been removed. The upper isolation 54 may be formed between the lower nanosheet stack 22, the upper nanosheet stack 24, the lower side spacer 34 and the upper side spacer 36. The lower isolation 52 and the upper isolation 54 may each be formed after several processes, including, for example, conformally depositing or growing a dielectric and performing an anisotropic etch process. The lower isolation 52 and the upper isolation 54 may include any dielectric material such as silicon nitride and may include a single layer or may include multiple layers of dielectric material. In an embodiment, the lower isolation 52 and the upper isolation 54 may be the same material. Referring now to FIG. 7, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 7, a gate isolation structure 56 may be formed between adjacent nanosheet stacks. A vertical opening, not shown, may be made, removing a portion of the gate hard mask 46 and a portion of the sacrificial gate 44. The vertical opening may be etched using an anisotropic etching technique, such as, for example, reactive ion etching (RIE), stopping at the STI 32 for subsequent formation of the gate isolation structure 56 between each nanosheet stack. The gate isolation structure 56 may be formed in the vertical opening. The gate isolation structure 56 may be a dielectric material and may be formed using known patterning and deposition techniques. The Structures A, B, C, D are isolated from one another by the STI 32 and the gate isolation structure 56. After forming the gate isolation structure 56, the gate hard mask 46 may be removed, for example, by a wet etching technique as described above, followed by a chemical mechanical polishing (CMP) technique to remove excess material and polish upper surfaces of the structure 100. Source/drain regions may be epitaxially grown in a region formed after removal of a vertical portion of the upper stack 24 and the lower stack 22 on opposite sides of the sacrificial gate 44. The source/drain regions are not visible in this portion of the structure 100. An upper source/drain region may be in direct contact with end portions of the channel layers 18 of the upper stack 24 and a lower source/drain region may be in direct contact with end portions of the channel layers 18 of the lower stack 22. The upper source/drain region may be isolated from the lower source/drain region by an isolation layer. Referring now to FIG. 8, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 8, the sacrificial gate 44 and the sacrificial layers 16 are selectively removed via one or more steps according to techniques known in the art. The sacrificial material layers 16 are removed selective to the channel layers 18, the lower isolation 52, the upper isolation 54, the lower side spacer 34, the upper side spacer 36, the gate isolation structure 56 and the STI 32. As illustrated in FIG. 8, the remaining channel layers 18 of the lower nanosheet stack 22 and of the upper nanosheet stack 24 are shown suspended and are supported on both ends by the upper and lower source/drain regions, which are not shown. For example, a dry etch process can be used to selectively remove the sacrificial layers 16, such as a vapor-phase HCl dry etch. 
Referring now to FIG. 9, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 9, a first work function metal 58 (hereinafter “WFM”) may be formed. The first WFM 58 may be conformally formed on the structure 100, according to an exemplary embodiment. The first WFM 58 is formed in each cavity of the nanosheet stack and surrounding suspended portions of the channel layers 18. The first WFM 58 forms a layer surrounding exposed portions of the nanosheet stacks. The first WFM 58 may cover an exposed portion of the STI 32 and exposed surfaces of the gate isolation structure 56. The first WFM 58 may be directly adjacent to and fill a cavity between the lower side spacer 34 and the gate isolation structure 56 on the first side of the lower nanosheet stack 22. The first WFM 58 may cover exposed portions of the upper side spacer 36. The first WFM 58 may be deposited using typical deposition techniques, for example, atomic layer deposition (ALD), molecular layer deposition (MLD), and chemical vapor deposition (CVD). In an embodiment, the first WFM 58 may include more than one layer, for example, a conformal layer of a high-k material may be formed prior to the formation of the first WFM 58. The material chosen for the first WFM 58, and any high-k dielectric, may be selected based on a desired threshold voltage, in combination with other materials and properties as described above, for those memory cells where the first WFM 58 surrounds the channel layers 18, and whether the device is a p-FET or n-FET. In an embodiment, the work function metal of a p-FET device may include a metal nitride, for example, titanium nitride or tantalum nitride, titanium carbide, titanium aluminum carbide, or other suitable materials known in the art. In an embodiment, the work function metal of an n-FET device may include, for example, titanium aluminum carbide or other suitable materials known in the art. In an embodiment, the work function metal may include one or more layers to achieve desired device characteristics. Exemplary high-k dielectrics include, but are not limited to, HfO2, ZrO2, La2O3, Al2O3, TiO2, SrTiO3, LaAlO3, Y2O3, HfOxNy, ZrOxNy, La2OxNy, Al2OxNy, TiOxNy, SrTiOxNy, LaAlOxNy, Y2OxNy, SiON, SiNx, a silicate thereof, and an alloy thereof. The material chosen for the first WFM 58, in combination with other materials and properties as described above, may determine a first threshold voltage, VT1. At this point of fabrication, VT1 is the same for each of the eight memory cells in the structure 100. Referring now to FIG. 10, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 10, a first organic polymer layer 60 (hereinafter “OPL”) may be formed. The first OPL 60 may cover the nanosheet stacks in the Structures A, B, C, D. The first OPL 60 may be formed by a blanket deposition using typical deposition techniques, for example spin-on coating. The first OPL 60 can be a self-planarizing organic material that includes carbon, hydrogen, oxygen, and optionally nitrogen, fluorine, and silicon. The first OPL 60 can be a standard CxHy polymer. Non-limiting examples of materials include, but are not limited to, CHM701B, commercially available from Cheil Chemical Co., Ltd., HM8006 and HM8014, commercially available from JSR Corporation, and ODL-102 or ODL-401, commercially available from ShinEtsu Chemical, Co., Ltd. A wet etching technique may be used to selectively remove portions of the first OPL 60 selective to the first WFM 58. 
An upper horizontal surface of the remaining first OPL 60 may be above an upper horizontal surface of the first WFM 58 above the nanosheet stack, allowing for subsequent formation of a mask and processing to selectively remove the first WFM 58 above the nanosheet stack in select nanosheet stacks. Portions of the first WFM 58 remain exposed after recessing the first OPL 60, on exposed portions of the upper side spacer 36 and on exposed portions of the gate isolation structure 56. Referring now to FIG. 11, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 11, exposed portions of the first WFM 58 may be removed. A wet etching technique may be used to selectively remove portions of the first WFM 58, selective to the first OPL 60. An upper horizontal surface and portions of the vertical side surfaces of the upper side spacer 36 may be exposed above an upper horizontal surface of the first OPL 60. An upper horizontal surface and portions of the vertical side surfaces of the gate isolation structure 56 may be exposed above the upper horizontal surface of the first OPL 60. At this point of fabrication, the Structures A, B, C, D are the same. Referring now to FIG. 12, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 12, an opening 62 and an opening 64 may be made in the structure 100. A first patterning layer 66 may be formed on the structure 100. The first patterning layer 66 may be a blanket patterning layer on the upper horizontal surface, and exposed portions of the vertical side surfaces, of the upper side spacer 36, on the upper horizontal surface, and exposed portions of the vertical side surfaces, of the gate isolation structure 56, and on the upper horizontal surface of the first OPL 60. The first patterning layer 66 may be composed of a photoresist material, such as, for example, a low temperature oxide layer (LTO) with a silicon containing anti-reflective coating (SiARC) layer formed thereon. The opening 62 and the opening 64 may be formed in the first patterning layer 66, and a portion of the first OPL 60 removed, exposing a portion of the first WFM 58 covering a top of the upper nanosheet stack 24 in the Structures B and D. The opening 62 and the opening 64 may be formed using an anisotropic etching technique, such as, for example, reactive ion etching (RIE), stopping on the first WFM 58. At this point of fabrication, the Structures A and C are the same, the Structures B and D are the same, and the Structures A and C are different than the Structures B and D. Specifically, the Structures B and D each have an opening. Referring now to FIG. 13, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 13, a portion of the first WFM 58 may be removed from the Structures B and D. The opening 62 of FIG. 12 had an exposed portion of the first WFM 58 above the upper nanosheet stack 24 in the Structure B. The removal of the first WFM 58 is selective to the channel layers 18 of the upper nanosheet stack 24, the upper isolation 54, and remaining portions of the first OPL 60 in a region of the upper nanosheet stack 24 in the Structure B. A remaining portion of the first WFM 58 remains between the upper side spacer 36 and the gate isolation structure 56. The opening 62 of FIG. 12 has increased to become opening 68 in the Structure B. Similarly, the opening 64 of FIG. 12 had an exposed portion of the first WFM 58 above the upper nanosheet stack 24 in the Structure D. 
The removal of the first WFM 58 is selective to the channel layers 18 of the upper nanosheet stack 24, the upper isolation 54, and remaining portions of the first OPL 60 in a region of the upper nanosheet stack 24 in the Structure D. A remaining portion of the first WFM 58 remains between the upper side spacer 36 and the gate isolation structure 56. The opening 64 of FIG. 12 has increased to become opening 70 in the Structure D. The selective removal of the first WFM 58 may be performed by a wet etching technique as described above. At this point of fabrication, the Structures A and C are the same, the Structures B and D are the same, and the Structures A and C are different than the Structures B and D. Specifically, the Structures B and D each have an opening. Referring now to FIG. 14, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 14, remaining portions of the first patterning layer 66 and remaining portions of the first OPL 60 may be removed. The removal of the first patterning layer 66 and the first OPL 60 may be performed by a wet etching technique as described above. The removal of the first patterning layer 66 and the first OPL 60 is selective to the gate isolation structure 56, the upper side spacer 36, the upper isolation 54, the channel layers 18, and the first WFM 58. Specifically, the first WFM 58 is in both the upper nanosheet stack 24 and the lower nanosheet stack 22 in the Structures A, C, and the first WFM 58 is in the lower nanosheet stack 22 in the Structures B, D. There is no first WFM 58 in the upper nanosheet stack 24 in the Structures B, D. At this point of fabrication, the Structures A and C are the same, the Structures B and D are the same, and the Structures A and C are different than the Structures B and D. Specifically, the Structures B and D do not have a work function metal in the upper nanosheet stack 24. Referring now to FIG. 15, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 15, a second organic polymer layer 72 (hereinafter “OPL”) may be formed, a second patterning layer 74 may be formed, and openings 76 and 78 may be formed. The second OPL 72 may be formed on the structure 100 as described above regarding the first OPL 60 as shown in FIG. 10, by a blanket deposition and selective etching to remove portions of the second OPL 72 selective to a portion of the upper side spacer 36, a portion of the gate isolation structure 56 and the first WFM 58. An upper horizontal surface of the second OPL 72 may be above an upper horizontal surface of the first WFM 58 above the nanosheet stack, allowing for subsequent patterning and processing. Exposed portions of the upper side spacer 36 and the gate isolation structure 56 may be above the upper horizontal surface of the second OPL 72. A second patterning layer 74 may be formed on the structure 100 as described above regarding the first patterning layer 66 shown in FIG. 12. The second patterning layer 74 may be a blanket patterning layer on the exposed portions of the upper side spacer 36 and the gate isolation structure 56, and on the upper horizontal surface of the second OPL 72. The opening 76 and the opening 78 may be formed in the second patterning layer 74, and a portion of the second OPL 72 removed, similar to the description above and as shown in FIG. 12. A portion of the first WFM 58 may be exposed between the gate isolation structure 56 and surrounding the lower nanosheet stack 22 in the Structures C and D. 
At this point of fabrication, the Structures A, B, C and D are each different. Referring now to FIG. 16, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 16, a portion of the first WFM 58 may be removed from the Structures C and D. The selective removal of the first WFM 58 may be performed by a wet etching technique as described above. The opening 76 of FIG. 15 exposed a portion of the first WFM 58 in the lower nanosheet stack 22 in the Structure C, while the opening 78 of FIG. 15 exposed a portion of the first WFM 58 in the lower nanosheet stack 22 in the Structure D. Removal of the exposed first WFM 58 increases each of the openings 76, 78, as described above in FIG. 13. The removal of the first WFM 58 in the Structure C forms an opening 80. The removal of the first WFM 58 in the Structure D forms an opening 82. The removal of the first WFM 58 is selective to the channel layers 18 of the lower nanosheet stack 22, the lower isolation 52, the upper isolation 54, the upper side spacer 36 and the gate isolation structure 56, in a region of the lower nanosheet stack 22 in both the Structures C and D. At this point of fabrication, the Structures A, B, C and D are different. Referring now to FIG. 17, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 17, the second patterning layer 74 and the second OPL 72 may be removed. The removal of the second patterning layer 74 and the second OPL 72 may be performed by a wet etching technique as described above, similar to FIG. 14. The removal of the second patterning layer 74 and the second OPL 72 is selective to the gate isolation structure 56, the upper side spacer 36, the lower side spacer 34, the upper isolation 54, the lower isolation 52, the channel layers 18, and the first WFM 58. Specifically, the first WFM 58 remains in both the upper nanosheet stack 24 and the lower nanosheet stack 22 in the Structure A. The first WFM 58 remains in the lower nanosheet stack 22 in the Structure B. The first WFM 58 remains in the upper nanosheet stack 24 in the Structure C. The first WFM 58 does not remain in the Structure D. There may be residual portions of the first WFM 58 along the gate isolation structure 56, which do not adversely affect operation of the memory cells. At this point of fabrication, the Structures A, B, C and D are different. Referring now to FIG. 18, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 18, a second work function metal 84 (hereinafter “WFM”) may be formed. The second WFM 84 may be conformally formed on the structure 100, as described above and shown in FIG. 9. The second WFM 84 is formed in each exposed cavity of the nanosheet stacks and surrounding exposed suspended portions of the channel layers 18. Specifically, the second WFM 84 is formed surrounding suspended portions of the channel layers 18 in the upper stack 24 of Structure B, in the lower stack 22 of Structure C and in both the upper stack 24 and the lower stack 22 of Structure D. The second WFM 84 also fills openings in the structure 100, including surrounding the first WFM 58, the gate isolation structure 56, the upper isolation 54, the lower isolation 52, the upper side spacer 36, the lower side spacer 34, and the STI 32. 
As described above, the material chosen for the second WFM 84, in combination with other materials and properties, may be selected based on the threshold voltage desired for those memory cells where the second WFM 84 surrounds the channel layers 18, and on whether the device is a p-FET or n-FET. In an embodiment, the second WFM 84 may include more than one layer, for example, a conformal layer of a high-k material may be formed prior to the formation of the second WFM 84, as also described above. The material used for the second WFM 84, in combination with other materials and properties as described above, may determine a second threshold voltage, VT2, which may be different than VT1, for those memory cells of the structure 100 which include the second WFM 84 directly surrounding the channel layers 18. In at least one embodiment, all other materials and properties of the memory cells may be the same, and the different work function metals will determine different threshold voltages. The materials used for the first WFM 58 and the second WFM 84 determine a programmed value of either a first memory state or a second memory state for each of the eight memory cells of the structure 100. Referring now to FIG. 19, the structure 100 is shown according to an exemplary embodiment. As shown in FIG. 19, a dielectric layer 102 and contacts 86, 88, 90, 92, 94, 96, 98 may be formed in openings (not shown) in the dielectric layer. The dielectric layer 102 may be conformally formed as described above, as a blanket deposition on the structure 100. The dielectric layer 102 may be formed and then a chemical mechanical polishing (CMP) technique may be used to remove excess material and polish upper surfaces of the structure 100. A photoresist mask (not shown) may be used to protect the dielectric layer 102, and the photoresist mask may have a space which allows the openings to be formed during recessing/etching. The openings may be formed using an anisotropic vertical etch process such as a reactive ion etch (RIE), or any suitable etch process. The openings may be formed in one or more process steps. The photoresist mask may be removed subsequently. Side surfaces of the dielectric layer 102 will be exposed and upper surfaces of the second WFM 84 will be exposed at a bottom of the openings. The contacts 86, 88, 90, 92, 94, 96, 98 may be deposited in each opening using conventional deposition techniques including, but not limited to: atomic layer deposition (ALD), chemical vapor deposition (CVD), molecular beam deposition (MBD), pulsed laser deposition (PLD), or liquid source misted chemical deposition (LSMCD). The contacts 86, 88, 90, 92, 94, 96, 98 may then be polished using a chemical mechanical polishing (CMP) technique to remove excess material and polish upper surfaces of the structure 100 until top surfaces of the contacts 86, 88, 90, 92, 94, 96, 98 are substantially coplanar with a top surface of the dielectric layer 102. The contacts 86, 88, 90, 92, 94, 96, 98 may be made from any known metal, such as, for example, Al, W, Cu, Co, Zr, Ta, Hf, Ti, Ru, Pa, metal oxide, metal carbide, metal nitride, transition metal aluminides (e.g. Ti3Al, ZrAl), TaC, TiC, TaMgC, and any combination of those materials. The contacts 86, 88, 90, 92, 94, 96, 98 may have one or more layers. In an embodiment, the contacts 86, 88, 90, 92, 94, 96, 98 may have a bottom layer of Ti or TiN and a top layer of Ti or Cu. 
The contacts 86, 88, 90, 92, 94, 96, 98 may have an upper horizontal surface which is substantially coplanar with an upper horizontal surface of the dielectric layer 102. The contacts 86, 88, 90, 92, 94, 96, 98 may provide an electrical contact to a gate of the structure 100 by providing contact directly to the second WFM 84, and indirectly to the first WFM 58. The first WFM 58 and the second WFM 84 provide a wrap around gate for the channels of the nanosheet transistor. The contacts 86, 88, 90, 92, 94, 96, 98 each provide a contact to the wrap around gate of a nanosheet stack. The contact 86 is a gate contact to the upper nanosheet stack 24 of the Structure A, connecting to the second WFM 84 which is connected to the first WFM 58 surrounding the channel layers 18 of the upper nanosheet stack 24 of the Structure A. Although not shown in the figures, the lower nanosheet stack 22 of the Structure A will also have a gate contact. The contact 88 is a gate contact to the lower nanosheet stack 22 of the Structure B, connecting to the second WFM 84 which is connected to the first WFM 58 surrounding the channel layers 18 of the lower nanosheet stack 22 of the Structure B. The contact 90 is a gate contact to the upper nanosheet stack 24 of the Structure B, connecting to the second WFM 84 surrounding the channel layers 18 of the upper nanosheet stack 24 of the Structure B. The contact 92 is a gate contact to the lower nanosheet stack 22 of the Structure C, connecting to the second WFM 84 surrounding the channel layers 18 of the lower nanosheet stack 22 of the Structure C. The contact 94 is a gate contact to the upper nanosheet stack 24 of the Structure C, connecting to the second WFM 84 which is connected to the first WFM 58 surrounding the channel layers 18 of the upper nanosheet stack 24 of the Structure C. The contact 96 is a gate contact to the lower nanosheet stack 22 of the Structure D, connecting to the second WFM 84 surrounding the channel layers 18 of the lower nanosheet stack 22 of the Structure D. The contact 98 is a gate contact to the upper nanosheet stack 24 of the Structure D, connecting to the second WFM 84 surrounding the channel layers 18 of the upper nanosheet stack 24 of the Structure D. A portion of the structure 100 may be removed using a chemical mechanical polishing (CMP) technique to remove excess material and polish upper surfaces of the structure 100. The materials used for the first WFM 58 and the second WFM 84 determine a programmed value of either a first memory state or a second memory state for each of the eight memory cells of the structure 100. Using this method of programming each memory cell of the structure 100 by selection of the work function metal used in each nanosheet stack, each of the upper nanosheet stack 24 and the lower nanosheet stack 22 can be used as a non-volatile memory or ROM. As described above, programming of each ROM memory cell may be done during device fabrication by utilizing a different work function metal for different memory states, which results in different threshold voltages. A memory state may be “read” based on a memory cell being turned “on” or “off” at a read voltage which is in between the different threshold voltage values. In an embodiment, VT2 may be less than VT1. A read voltage between VT2 and VT1 may be applied to the memory cells of the structure 100. When the read voltage is applied, those memory cells with the second WFM 84 may be turned on and those memory cells with the first WFM 58 may not be turned on. 
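The read scheme above reduces to a simple rule: a memory cell conducts when the read voltage exceeds the threshold voltage set by its work function metal. The following Python sketch models this rule for the Structures A, B, C and D of FIG. 19; the voltage values are illustrative assumptions (the description only requires that the read voltage fall between VT2 and VT1), and all names are hypothetical rather than taken from the patent.

VT1 = 0.60     # threshold voltage set by the first WFM 58 (volts, assumed)
VT2 = 0.30     # threshold voltage set by the second WFM 84 (volts, assumed)
V_READ = 0.45  # read voltage chosen between VT2 and VT1

# Threshold voltage of each stack, per the WFM assignments of FIG. 19:
# Structure A keeps the first WFM 58 in both stacks, Structure D in neither.
structures = {
    "A": {"upper": VT1, "lower": VT1},
    "B": {"upper": VT2, "lower": VT1},
    "C": {"upper": VT1, "lower": VT2},
    "D": {"upper": VT2, "lower": VT2},
}

def read_cell(vt, v_read=V_READ):
    # A cell turns on (second memory state, read as 1) when the read
    # voltage exceeds its threshold voltage; otherwise it stays off (0).
    return 1 if v_read > vt else 0

for name, stacks in structures.items():
    print(name, {stack: read_cell(vt) for stack, vt in stacks.items()})
# A {'upper': 0, 'lower': 0}, B {'upper': 1, 'lower': 0},
# C {'upper': 0, 'lower': 1}, D {'upper': 1, 'lower': 1}

Under this model the Structure A reads two first memory states and the Structure D two second memory states, matching the per-structure behavior described next.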
The Structure A has both the channel layers 18 of the upper nanosheet stack 24 and of the lower nanosheet stack 22 surrounded by the first WFM 58. Thus, both the upper and lower nanosheet stacks 22, 24 of the Structure A may be off at the read voltage and may be in a first memory state. The Structure B has the channel layers 18 of the upper nanosheet stack 24 surrounded by the second WFM 84 and the channel layers 18 of the lower nanosheet stack 22 surrounded by the first WFM 58. Thus, the lower nanosheet stack 22 is off, resulting in the first memory state, while the upper nanosheet stack 24 is on, resulting in a second memory state, when the read voltage is applied to the gate. The Structure C has the channel layers 18 of the upper nanosheet stack 24 surrounded by the first WFM 58 and the channel layers 18 of the lower nanosheet stack 22 surrounded by the second WFM 84. The lower nanosheet stack 22 is on, resulting in the second memory state, while the upper nanosheet stack 24 is off, resulting in the first memory state, when the read voltage is applied to the gate. The Structure D has the channel layers 18 of the upper nanosheet stack 24 and of the lower nanosheet stack 22 surrounded by the second WFM 84. Both the upper and lower nanosheet stacks 22, 24 are on at the read voltage and may each be in the second memory state. In an alternate embodiment, a single gate contact may be formed in direct contact with the gates of both the upper nanosheet stack 24 and the lower nanosheet stack 22, for each of the Structures A, B, C, D. For example, contacts 88, 90 of Structure B may be fabricated as a single structure directly contacting the second WFM 84 on opposite sides of the Structure B. Similarly, source/drain regions of the upper nanosheet stack 24 may be connected to source/drain regions of the lower nanosheet stack 22 of each Structure A, B, C, D. Each Structure A, B, C, D is isolated from the others. In this embodiment, each of the Structures A, B, C, D may have one of three memory states. As previously described, memory cells with the first WFM 58 surrounding the channel layers 18 have VT1 and those memory cells with the second WFM 84 surrounding the channel layers 18 have VT2. For example, if VT1>VT2, a voltage V1 may be applied to the single gate contact for each of the Structures A, B, C, D, where VT1>V1>VT2. When, for example, V1 is applied to the single gate contact of the Structure A, both the lower and upper nanosheet stacks 22, 24 will be off. This will result in a low flow of current across the combined source/drain of the Structure A. In such cases, the Structure A may be characterized as having a first memory state of the three memory states. When, for example, V1 is applied to the single gate contact of Structure B, the lower nanosheet stack 22 is off, while the upper nanosheet stack 24 is on. This will result in a medium flow of current across the combined source/drain of Structure B. In such cases, the Structure B may be characterized as having a second memory state of the three memory states. When, for example, V1 is applied to the single gate contact of Structure C, the lower nanosheet stack 22 is on, while the upper nanosheet stack 24 is off. This will result in a medium flow of current across the combined source/drain of the Structure C. In such cases, the Structure C may be characterized as having the second memory state of the three memory states. 
When, for example, V1 is applied to the single gate contact of Structure D, both the lower and the upper nanosheet stacks 22, 24 are on. This will result in a high flow of current across the combined source/drain of the Structure D. In such cases, the Structure D may be characterized as having a third memory state of the three memory states. In this embodiment, the Structures A, B, C, D may each be a memory cell which is hard programmed during manufacturing by a threshold voltage of a work function metal used in each of the upper nanosheet stack 24 and the lower nanosheet stack 22 and each can be used as a ROM which stores one of three memory states. An array of the Structures A, B, C, D may be arranged in any order and may provide hard programming of ROM memory cells. Arrangement of the Structures A, B, C, D is dependent upon desired programming of each memory cell. Word lines and bit lines may connect the Structures A, B, C, D to provide read access to the memory cells. Referring now to FIGS. 20A, 20B and 20C, the structure 100 is shown according to an exemplary embodiment. FIG. 20A is an overhead view. FIG. 20B is a cross section along section line X-X of FIG. 20A. FIG. 20C is the Structure C as shown in FIG. 19 and is a cross section along section line Y-Y of FIG. 20A and is perpendicular to the cross-sectional view of FIG. 20B. FIG. 20B illustrates three nanosheet stacks parallel to each other, with an upper source drain region 160 separated by an isolation dielectric 164 from a lower source drain region 162 between each nanosheet stack. Each of the nanosheet stacks is the Structure C, with the lower nanosheet stack 22 including the second WFM 84 surrounding the channel layers 18 and the upper nanosheet stack 24 including the first WFM 58 surrounding the channel layers 18, according to an exemplary embodiment. A liner 166 surrounds outer edges of the first WFM 58 within the upper nanosheet stack 24, and also surrounds outer edges of the second WFM 84 within the lower nanosheet stack 22. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. 17126074 international business machines corporation USA B1 Utility Patent Grant (no pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Nov 14th, 2019 12:00AM https://www.uspto.gov?id=US11313810-20220426 Secure semiconductor wafer inspection utilizing film thickness A method for verifying semiconductor wafers includes receiving a semiconductor wafer including a plurality of layers. A first set of measurement data is obtained for at least one layer of the plurality of layers, where the first set of measurement data includes at least one previously recorded thickness measurement for one or more portions of the at least one layer. The first set of measurement data is compared to a second set of measurement data for the at least one layer. The second set of measurement data includes at least one new thickness measurement for the one or more portions of the at least one layer. The semiconductor wafer is determined to be an authentic wafer based on the second set of measurement data corresponding to the first set of measurement data; otherwise, the semiconductor wafer is determined to not be an authentic wafer. 11313810 1. A method for verifying semiconductor wafers, the method comprising: receiving a semiconductor wafer comprising a plurality of layers; obtaining a first set of measurement data for a layer of the plurality of layers, the first set of measurement data comprising at least one thickness measurement for one or more portions of the layer that was recorded prior to at least one additional layer of the plurality of layers having been fabricated; comparing the first set of measurement data to a second set of measurement data for the layer, the second set of measurement data comprising at least one new thickness measurement for the one or more portions of the layer obtained after fabrication of the at least one additional layer; and determining the semiconductor wafer is an authentic wafer based on the second set of measurement data corresponding to the first set of measurement data. 2. The method of claim 1, further comprising: determining the semiconductor wafer is not an authentic wafer based on the second set of measurement data failing to correspond to the first set of measurement data. 3. The method of claim 1, wherein determining the semiconductor wafer is an authentic wafer further comprises: obtaining design data defining an expected pattern of features for the layer; obtaining imaging data for the layer capturing features patterned on the semiconductor wafer; determining if the imaging data corresponds to the design data; and determining the semiconductor wafer is an authentic wafer further based on the imaging data corresponding to the design data. 4. The method of claim 1, further comprising: obtaining the second set of measurement data for the layer by measuring a thickness of the layer at a first set of locations on the layer corresponding to a second set of locations on the layer at which the first set of measurement data was taken. 5. The method of claim 4, wherein obtaining the second set of measurement data further comprises: obtaining the second set of measurement data for the layer by measuring the thickness of the layer at the first set of locations across a plurality of different areas on the semiconductor wafer. 6. The method of claim 1, further comprising: obtaining a thickness measurement for a most recently fabricated layer of the plurality of layers at one or more locations of the most recently fabricated layer; and storing the measured thickness. 7. 
The method of claim 6, further comprising: obtaining the thickness measurement for the most recently fabricated layer at the one or more locations across a plurality of different areas of the semiconductor wafer. 8. The method of claim 6, wherein the layer of the plurality of layers was fabricated prior to the most recently fabricated layer of the plurality of layers. 9. The method of claim 1, wherein the first set of measurement data further comprises location data identifying one or more locations on the layer at which at least one previously recorded thickness measurement was taken. 10. The method of claim 9, wherein the at least one previously recorded thickness measurement comprises a plurality of previously recorded thickness measurements each taken at the one or more locations on the layer within a different area of the semiconductor wafer. 11. The method of claim 1, further comprising: obtaining a third set of measurement data for the at least one additional layer of the plurality of layers, the third set of measurement data comprising at least one previously recorded thickness measurement for one or more portions of the at least one additional layer; comparing the third set of measurement data to a fourth set of measurement data for the at least one additional layer, the fourth set of measurement data comprising at least one new thickness measurement for the one or more portions of the at least one additional layer; and determining the semiconductor wafer is an authentic wafer based on the second set of measurement data corresponding to the first set of measurement data and the fourth set of measurement data corresponding to the third set of measurement data. 12. A system for verifying semiconductor wafers, the system comprising: memory; and one or more processors, wherein the one or more processors operate during fabrication of features on a semiconductor wafer to: receive a semiconductor wafer after a first individual layer comprising one or more features has been fabricated thereon; obtain a first set of measurement data for a first feature of the first individual layer and a second set of measurement data for a second feature of a second individual layer fabricated on the semiconductor wafer prior to the first individual layer, the first and second sets of measurement data comprising at least one previously recorded thickness measurement for one or more portions of the first feature and the second feature, respectively; compare the first set of measurement data to a third set of measurement data for the first feature, the third set of measurement data comprising at least one new thickness measurement for the one or more portions of the first feature; compare the second set of measurement data to a fourth set of measurement data for the second feature, the fourth set of measurement data comprising at least one new thickness measurement for the one or more portions of the second feature; determine the semiconductor wafer is an authentic wafer based on the third set of measurement data corresponding to the first set of measurement data and the fourth set of measurement data corresponding to the second set of measurement data; and determine the semiconductor wafer is not an authentic wafer based on one or more of the third set of measurement data failing to correspond to the first set of measurement data, or the fourth set of measurement data failing to correspond to the second set of measurement data. 13. 
The system of claim 12, wherein the one or more processors further operate to: obtain at least the third set of measurement data for the first feature by measuring a thickness of the first feature at a first set of locations on the first individual layer corresponding to a second set of locations on the first individual layer at which the first set of measurement data was taken. 14. The system of claim 12, wherein the one or more processors operate to obtain the second set of measurement data by: obtaining at least the third set of measurement data for the first feature by measuring a thickness of multiple instances of the first feature on the first individual layer across a plurality of different areas on the semiconductor wafer. 15. A computer program product for verifying semiconductor wafers, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by an information processing system to cause the information processing system to perform a method comprising: receiving a semiconductor wafer comprising a plurality of layers; obtaining a first set of measurement data for at least one layer of the plurality of layers, the first set of measurement data comprising at least one previously recorded thickness measurement for one or more portions of the at least one layer; obtaining a second set of measurement data for the at least one layer by measuring a thickness of at least one portion of the at least one layer that is to remain exposed during subsequent processing operations; comparing the first set of measurement data to the second set of measurement data; determining the semiconductor wafer is an authentic wafer based on the second set of measurement data corresponding to the first set of measurement data; and determining the semiconductor wafer is not an authentic wafer based on the second set of measurement data failing to correspond to the first set of measurement data. 16. The computer program product of claim 15, wherein obtaining the second set of measurement data further comprises: measuring a thickness of the at least one layer at a first set of locations on the at least one layer corresponding to a second set of locations on the at least one layer at which the first set of measurement data was taken. 17. The computer program product of claim 16, wherein obtaining the second set of measurement data further comprises: obtaining the second set of measurement data for the at least one layer by measuring the thickness of the at least one layer at the first set of locations across a plurality of different areas on the semiconductor wafer. 18. The computer program product of claim 15, wherein the method further comprises: obtaining a thickness measurement for a most recently fabricated layer of the plurality of layers at one or more locations of the most recently fabricated layer; and storing the measured thickness. 19. The computer program product of claim 18, wherein the method further comprises: obtaining the thickness measurement for the most recently fabricated layer at the one or more locations across a plurality of different areas of the semiconductor wafer. 20. 
The computer program product of claim 15, wherein the first set of measurement data further comprises location data identifying one or more locations on the at least one layer at which at least one previously recorded thickness measurement was taken, and wherein the at least one previously recorded thickness measurement comprises a plurality of previously recorded thickness measurements each taken at the one or more locations on the at least one layer within a different area of the semiconductor wafer. 20 BACKGROUND OF THE INVENTION The present disclosure generally relates to the field of semiconductors, and more particularly relates to secure inspection of semiconductor devices for trusted manufacturing thereof. Semiconductor chip security has become increasingly important in recent years. One mechanism for securing semiconductor chips is through the use of trusted foundries. A trusted foundry adheres to a set of protocols to ensure the integrity, authenticity, and confidentiality of semiconductor chips during manufacturing. However, trusted foundries may not be available to all chip customers or may not have the capabilities to fabricate a desired semiconductor chip. Therefore, in many instances chip customers utilize untrusted foundries for manufacturing of their semiconductor chips. The use of untrusted foundries for semiconductor chip manufacturing presents various security concerns since the chip customer may not be able to control or monitor the manufacturing process at an untrusted foundry. For example, an untrusted foundry may be able to counterfeit the semiconductor chip, reverse engineer the layout of the semiconductor chips, or steal sensitive or secret data required for fabrication of the semiconductor chip. In addition, there is no guarantee that the fabricated semiconductor chips do not contain malicious or damaging features that have been added by the untrusted foundry. Unfortunately, viable solutions to the above problems currently do not exist. SUMMARY OF THE INVENTION In one embodiment, a method for verifying semiconductor wafers comprises receiving a semiconductor wafer comprising a plurality of layers. A first set of measurement data is obtained for at least one layer of the plurality of layers. The first set of measurement data comprises at least one previously recorded thickness measurement for one or more portions of the at least one layer. The first set of measurement data is compared to a second set of measurement data for the at least one layer. The second set of measurement data comprises at least one new thickness measurement for the one or more portions of the at least one layer. The semiconductor wafer is determined to be an authentic wafer based on the second set of measurement data corresponding to the first set of measurement data. The semiconductor wafer is determined to not be an authentic wafer based on the second set of measurement data failing to correspond to the first set of measurement data. In another embodiment, a system for verifying semiconductor wafers comprises at least one information processing system comprising memory and one or more processors. The system further comprises a wafer layer measurement system communicatively coupled to the at least one information processing system. The at least one information processing system and the layer measurement system operate to perform a process. The process comprises receiving a semiconductor wafer comprising a plurality of layers. 
A first set of measurement data is obtained for at least one layer of the plurality of layers. The first set of measurement data comprises at least one previously recorded thickness measurement for one or more portions of the at least one layer. The first set of measurement data is compared to a second set of measurement data for the at least one layer. The second set of measurement data comprises at least one new thickness measurement for the one or more portions of the at least one layer. The semiconductor wafer is determined to be an authentic wafer based on the second set of measurement data corresponding to the first set of measurement data. The semiconductor wafer is determined to not be an authentic wafer based on the second set of measurement data failing to correspond to the first set of measurement data. In a further embodiment, a computer program product for verifying semiconductor wafers comprises a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by an information processing system to perform a method. The method comprises receiving a semiconductor wafer comprising a plurality of layers. A first set of measurement data is obtained for at least one layer of the plurality of layers. The first set of measurement data comprises at least one previously recorded thickness measurement for one or more portions of the at least one layer. The first set of measurement data is compared to a second set of measurement data for the at least one layer. The second set of measurement data comprises at least one new thickness measurement for the one or more portions of the at least one layer. The semiconductor wafer is determined to be an authentic wafer based on the second set of measurement data corresponding to the first set of measurement data. The semiconductor wafer is determined to not be an authentic wafer based on the second set of measurement data failing to correspond to the first set of measurement data. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying figures where like reference numerals refer to identical or functionally similar elements throughout the separate views, and which together with the detailed description below are incorporated in and form part of the specification, serve to further illustrate various embodiments and to explain various principles and advantages all in accordance with the present invention, in which: FIG. 1 is a block diagram illustrating a system for securing and verifying semiconductor wafers according to one embodiment of the present invention; FIG. 2 is an operational flow diagram illustrating an overall process of securing and verifying semiconductor wafers according to one embodiment of the present invention; FIG. 3 is an operational flow diagram illustrating a more detailed process of the trusted inspection and verification operation shown in step 210 of FIG. 2 according to one embodiment of the present invention; FIG. 4 is an operational flow diagram continuing on from step 318 of FIG. 3 and illustrating a detailed process for trusted wafer layer thickness verification according to one embodiment of the present invention; FIG. 5 is an illustrative example of design data according to one embodiment of the present invention; FIG. 6 is an illustrative example of imaging data associated with a layer of features patterned on a semiconductor wafer that is used as part of the trusted inspection and verification operations of FIGS. 2 and 3 according to one embodiment of the present invention; FIG. 
7 is another illustrative example of imaging data associated with a layer of features patterned on a semiconductor wafer that is used as part of the trusted inspection and verification operations of FIGS. 2 and 3 according to one embodiment of the present invention; FIG. 8 is a further illustrative example of overlaying imaging data for a current fabricated wafer layer onto a previously fabricated wafer layer as part of the trusted inspection and verification operations of FIGS. 2 and 3 according to one embodiment of the present invention; FIG. 9 is a cross-sectional view of a semiconductor device comprising a first layer and illustrates one example of performing wafer thickness measurement thereon according to one embodiment of the present invention; FIG. 10 is a top-down view of the semiconductor wafer on which the semiconductor device of FIG. 9 was fabricated and illustrates a first set of inspection sites across the semiconductor wafer at which the wafer thickness measurements were taken according to one embodiment of the present invention; FIG. 11 is another top-down view of the semiconductor wafer and illustrates a second set of inspection areas across the semiconductor wafer at which wafer thickness measurements were taken for the subsequently fabricated layers shown in FIG. 12 according to one embodiment of the present invention; FIG. 12 is a cross-sectional view of the semiconductor device of FIG. 9 after additional layers have been fabricated and illustrates another example of performing wafer thickness measurement thereon according to one embodiment of the present invention; and FIG. 13 is a block diagram illustrating one example of an information processing system according to one embodiment of the present invention. DETAILED DESCRIPTION As required, detailed embodiments are discussed herein. However, it is to be understood that the provided embodiments are merely examples and that the systems and methods described below can be embodied in various forms. Therefore, specific structural and functional details discussed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present subject matter in virtually any appropriately detailed structure and function. Further, the terms and phrases used herein are not intended to be limiting, but rather, to provide an understandable description of the concepts. As will be discussed in greater detail below, embodiments of the present invention overcome security issues associated with untrusted semiconductor foundries by utilizing a trusted wafer inspection process. According to at least one embodiment, after each layer of patterned features is formed on a semiconductor wafer, a trusted pattern verification system is utilized to verify that the formed pattern matches the intended pattern as defined by a corresponding design for the layer. If the formed pattern and the intended pattern do not match, the verification system determines that the semiconductor wafer has been compromised. If the formed pattern and the intended pattern do match, the verification system determines that the wafer is secure (i.e., has not been compromised) at this point in the fabrication process. However, once a subsequent layer of patterned features has been formed, it is difficult (if not impossible) to re-verify previously formed layers of patterns since removing layers would damage the semiconductor wafer. 
This presents the opportunity for a semiconductor wafer, which has been verified by the trusted pattern verification system, to be modified or replaced with another, unauthorized wafer comprising damaging or malicious features. For example, after a given layer of patterns has been verified by the trusted verification system, the authorized semiconductor wafer is returned to the fabrication line of the untrusted foundry. At this point, damaging or malicious features may be added to the previously verified wafer (or to an unauthorized semiconductor wafer) and a subsequent layer of patterned features corresponding to the trusted mask may be formed thereon. In other words, the malicious features are hidden under a layer of patterned features that match the intended/expected features defined by the trusted mask. Therefore, when the unauthorized semiconductor wafer is transferred to the trusted verification system, the verification process may not determine that the current wafer is an unauthorized or malicious wafer since the current layer of patterned features corresponds to the expected layer of patterned features. Embodiments of the present invention overcome this problem by further utilizing a trusted wafer layer measurement system that generates one or more secure fingerprints/identifiers for the semiconductor wafer based on layer thickness. According to at least one embodiment, after a given layer of patterned features has been verified, the trusted wafer layer measurement system measures the thickness of one or more features of the current layer. The measurement may be taken at one or more locations on the layer, and multiple measurements may be taken for the layer at the one or more locations but within different areas across the wafer. Data such as the measured thicknesses, the type of features that were measured, the locations at which the measurements were taken, the identifier of the layer for which the measurements were taken, and/or the like may be recorded. The recorded measurements act as a fingerprint for the semiconductor wafer since the film/layer thickness profile is unique for each wafer depending on the deposition process utilized to form the film/layer. After one or more subsequent layers of patterned features have been formed and verified, the wafer layer measurement system re-measures the thickness of any previously measured feature layers at their previously measured locations. In another embodiment, a similar re-measurement process may be performed after fabrication of the semiconductor wafer has completed. If the current measured thickness of the layer(s) matches the previously measured thickness, the system determines that the wafer is a secure/authentic wafer that has not been replaced or modified. Referring now to the drawings in which like numerals represent the same or similar elements, FIG. 1 illustrates a block diagram of an operating environment 100 for the trusted inspection and verification of semiconductor wafers during manufacturing thereof. In various embodiments, the operating environment 100 comprises a semiconductor fabrication plant 102 (e.g., a foundry) and a trusted wafer inspection system (TWIS) 104. The semiconductor fabrication plant 102 is responsible for the manufacturing and packaging of semiconductor devices. In one embodiment, the semiconductor fabrication plant 102 comprises one or more information processing systems 106; fabrication and packaging stations/components 108 to 118; and semiconductor wafers 120. 
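The fingerprint-and-re-measure flow just described can be reduced to a short sketch. The Python below is a minimal illustration only: the record fields, the tolerance, and the function names are assumptions made for this example, since the patent does not specify a data layout or a matching tolerance.

from dataclasses import dataclass

@dataclass(frozen=True)
class ThicknessRecord:
    layer_id: str        # which fabrication layer was measured
    site: tuple          # inspection site on the wafer, e.g. (x, y)
    feature: str         # e.g. "STI region" or "encapsulation layer"
    thickness_nm: float  # recorded film thickness

TOLERANCE_NM = 0.5  # assumed measurement repeatability budget

def wafer_is_authentic(recorded, remeasured):
    # The wafer passes only if every previously recorded thickness is
    # reproduced, within tolerance, at the same layer, site, and feature.
    new = {(r.layer_id, r.site, r.feature): r.thickness_nm for r in remeasured}
    for rec in recorded:
        key = (rec.layer_id, rec.site, rec.feature)
        if key not in new or abs(new[key] - rec.thickness_nm) > TOLERANCE_NM:
            return False
    return True

Because the thickness profile left by a deposition process is effectively unique to a wafer, a mismatch at any previously measured location suggests the wafer was modified or swapped, and would route it to the security measures described for step 216 below.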
The information processing system 106 controls the one or more fabrication/packaging stations and their components. In one embodiment, the information processing system 106 may comprise at least one controller 122 that may be part of one or more processors or may be a component that is separate and distinct from the processor(s) of the information processing system 106. The one or more fabrication and packaging stations 108 to 118 may include a cleaning station 108, a deposition station 110, a photolithography station 112, an inspection station 114, a dicing station 116, a packaging station 118, and/or the like. In some embodiments, two or more of the fabrication/packaging stations are separate from each other, where the semiconductor wafer 120 is moved from one station to a different station after processing. However, in other embodiments, two or more of these stations may be combined into a single station. In addition, one or more of the stations/components 108 to 118 may not be a physical station per se but may refer to a fabrication or packaging process(es) performed by components of the fabrication plant 102. In some embodiments, one or more of the stations/processes 108 to 118 may be removed from the plant 102 and/or additional stations/processes may be added. Also, embodiments of the present invention are not limited to a semiconductor fabrication plant configured as shown in FIG. 1 and are applicable to any semiconductor fabrication plant. The TWIS 104, in one embodiment, comprises one or more information processing systems 124, a pattern verification system 126, a wafer layer measurement system 128 and wafer data 130. It should be noted that the TWIS 104 is not limited to these components as one or more components may be removed and/or additional components may be added to the TWIS 104. In one embodiment, the information processing system 124 may comprise at least one controller 132 that may be part of one or more processors or may be a component that is separate and distinct from the processor(s) of the information processing system 124. The wafer data 130, in one embodiment, comprises design data 134, wafer layer inspection data 136, and wafer image data 138. In some embodiments, the TWIS 104 is communicatively coupled to one or more networks 140 such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet). It should be noted that the information processing system 124 may be separate from or part of the pattern verification system 126 and wafer layer measurement system 128. In addition, the various operations discussed below with respect to the information processing system 124 may be similarly performed by separate information processing systems disposed within each of the pattern verification system 126 and wafer layer measurement system 128. In addition, the various operations discussed below with respect to these systems 126, 128 may be similarly performed by the information processing system 124. In addition, the pattern verification system 126 and wafer layer measurement system 128 are not required to be separate from each other and may be implemented as a single system. Embodiments of the present invention utilize the TWIS 104 to perform trusted inspection/verification of the wafers 120. In one or more of these embodiments, the TWIS 104 is a trusted system that is secured by physical and/or software-based mechanisms that prevent unauthorized access to and tampering with the TWIS 104. 
The TWIS 104 may be located within (or nearby) the semiconductor fabrication plant 102 in a manner that prevents unauthorized access to the TWIS 104. For example, the TWIS 104 may be located within a room or nearby building that only authorized individuals have access to. These individuals may be authorized to access the TWIS 104 by the owner/operator of the TWIS 104, the customer for which the semiconductor wafers 120 are being fabricated, a trusted entity managing the semiconductor wafers 120, and/or the like. In another embodiment, the TWIS 104 is part of the fabrication/packaging line where only authorized individuals may make changes to the TWIS 104. Even further, one or more components of the TWIS 104 may be disposed outside of the semiconductor fabrication plant 102. For example, in one embodiment, the wafer layer measurement system 128, wafer inspection data 136, image data 138, and/or other components are situated at a foundry customer's location. However, in other embodiments, these components are located at the semiconductor fabrication plant 102 as well. The TWIS 104 and its components are discussed in greater detail below. FIG. 2 is an operational flow diagram illustrating an overall process of fabricating a semiconductor device including trusted inspection of the semiconductor wafer 120. The process shown in FIG. 2 begins after the wafer 120 has been inspected for any defects. After the wafer 120 has been inspected, the wafer 120 is processed by the cleaning station 108 at step 202. The cleaning station 108 removes any contaminants from the surface of the wafer 120 using, for example, a wet chemical treatment. Then, the wafer 120 is processed by the deposition station 110 at step 204. The deposition station 110 deposits, grows, and/or transfers one or more layers of various materials onto the wafer using processes such as chemical vapor deposition (CVD), physical vapor deposition (PVD), atomic layer deposition (ALD), and/or the like. After the desired materials have been deposited, the wafer 120 is processed by the photolithography and etching station 112 at step 206. For example, the wafer 120 may be cleaned and prepared by removing any unwanted moisture from the surface of the wafer 120. An adhesion promoter may also be applied to the surface of the wafer 120. A layer of photoresist material is then formed on the surface of wafer 120 (or the adhesion promoter layer if formed). A process such as, but not limited to, spin coating may be used to form the photoresist layer. Excess photoresist solvent may be removed by pre-baking the coated semiconductor wafer 120. The photoresist coated wafer 120 is then exposed to one or more patterns of light. The patterns may be formed by projecting the light through a photomask (also referred to herein as “mask”) created for the current layer. The mask is formed based on trusted design data 134 and may be produced by the semiconductor fabrication plant 102, a photomask fabrication plant, and/or the like. The design data 134, in one embodiment, comprises all shapes/patterns that are intended to be printed on the wafer 120 for a given layer. In some embodiments, the patterns may be formed using a maskless process. The bright parts of the image pattern cause chemical reactions, which result in one of the following situations depending on the type of resist material being used. Exposed positive-tone resist material becomes more soluble so that it may be dissolved in a developer liquid, and the dark portions of the image remain insoluble. 
Exposed negative-tone resist material becomes less soluble so that it may not be dissolved in a developer liquid, and the dark portions of the image remain soluble. A post exposure bake (PEB) process may be performed that subjects the wafer 120 to heat for a given period of time after the exposure process. The PEB performs and completes the exposure reaction. The PEB process may also reduce mechanical stress formed during the exposure process. The wafer 120 is then subjected to one or more develop solutions after the post exposure bake. The develop solution(s) dissolves away the exposed portions of the photoresist. After development, the remaining photoresist forms a stenciled pattern across the wafer surface, which accurately matches the desired design pattern. An etching process is then performed that subjects the wafer 120 to wet or dry chemical agents to remove one or more layers of the wafer 120 not protected by the photoresist pattern. Any remaining photoresist material may then be removed after the etching process using, for example, chemical stripping, ashing, etc. It should be noted that semiconductor fabrication is not limited to the above described process and other fabrication processes are applicable as well. The photolithographic process results in a layer of patterned features (also referred to herein as a “layer of patterns”, “layer of features”, “pattern of features”, “patterns”, and/or “pattern”). After the current layer of features has been patterned the wafer 120 is processed by one or more defect inspection stations 114 at step 208. In one embodiment, the defect inspection station 114 inspects the current layer of patterned features for defects and corrects/manages any defects using one or more methods known to those of ordinary skill in the art. Once the defect inspection process has been performed the wafer 120 is passed to the TWIS 104 for trusted wafer inspection and verification at step 210. As will be discussed in greater detail below, trusted wafer inspection and verification may include pattern verification processes and wafer layer thickness inspection and verification. It should be noted that, in some embodiments, instead of having a separate defect inspection station 114 the TWIS 104 performs defect inspection in addition to trusted wafer inspection and verification. In these embodiments, the wafer is passed to the TWIS 104 after the current layer of features has been patterned at step 206. If the TWIS 104 is satisfied with the results of the inspection and verification operations for the wafer 120, the wafer 120 is passed back to the cleaning station 108 as indicated by path “A” if additional fabrication processing is needed. The above described processes are then repeated until all of the desired layers of patterned features have been formed and fabrication of the wafer 120 has been completed. However, if fabrication of the wafer 120 has been completed the process follows path “B” where the wafer 120 is processed by the dicing station 116 to separate the dies from the wafer 120 at step 212. The packaging station 118 then packages and tests the dies using one or more packaging and testing methods at step 214. If, at step 210, the TWIS 104 is not satisfied with the results of the inspection and verification operations due to, for example, unauthorized changes the process follows path “C” where one or more security measures are taken at step 216. 
Fabrication may optionally be stopped at step 218 or another action taken as will be discussed in greater detail below. It should be noted that, in at least some embodiments, one or more of the trusted wafer inspection and verification operations of step 210 are not performed for every layer of the manufacturing process. For example, these operations may not be performed for sacrificial, temporary, or other layers that do not become part of the integrated circuit. Also, in at least one embodiment, layers found to be an unreliable source of inspection/verification data such as C4 layers are not inspected and/or verified. Instead, more reliable layers such as front-end-of-line (FEOL) and back-end-of-line (BEOL) layers are selected for inspection and/or verification operations at step 210. FIGS. 3-4 are operational flow diagrams illustrating an overall process of the inspection and verification operations performed by the TWIS 104 at step 210 of FIG. 2. As discussed above, after a layer of features has been patterned on the wafer 120 and defect inspection has completed the wafer 120 is transferred to the TWIS 104. The TWIS 104 receives the wafer 120 at step 302. The information processing system 124 initiates the pattern verification system 126 at step 304. In one embodiment, the pattern verification system 126 is initiated based on events such as detecting that the wafer 120 has been transferred to the TWIS 104, a user input received locally at the TWIS 104, a remote user input signal, a signal received from one or more of the stations/components of the semiconductor fabrication plant 102, and/or the like. Upon initiation, the pattern verification system 126 analyzes the wafer 120 and obtains image data 138 for the wafer 120 at step 306. The image data 138 is stored in local storage and/or in remote storage and may be annotated with a unique identifier that uniquely identifies the associated wafer 120. In one embodiment, the image data 138 comprises one or more images of feature patterns across the entire wafer 120, across one or more dies of the wafer 120, across portions of one or more dies, and/or the like. The image data 138, in one embodiment, is obtained using a scanning electron microscope (SEM), transmission electron microscope (TEM), an optical-based scanner or imaging system, a radiation-based imaging system, a combination of some/all of the above, and/or the like. The pattern verification system 126 obtains the design data 134 for the current fabrication layer of the wafer 120 at step 308. For example, if the current fabrication layer is Layer_1 the design data 134 for Layer_1 is obtained. The design data 134 may be stored locally on the TWIS 104 or on a trusted remote system. The design data 134 may comprise attributes or metadata that enables the pattern verification system 126 to determine the set of design data 134 associated with the current fabrication layer being inspected. The design data 134, in one embodiment, further comprises data such as pattern locations/coordinates, pattern layouts, pattern shapes, pattern dimensions (e.g., length and width), and/or the like utilized by a photomask fabricator to fabricate the photomask. The design data 134 may also comprise a simulated or rendered pattern layout for the current fabrication layer. 
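As a rough illustration of the comparison performed at step 310, the sketch below checks pattern geometry extracted from the image data 138 against the design data 134 within a fixed tolerance. The field names and the tolerance are assumptions for illustration only; an actual pattern verification system would rely on image registration and far richer geometric checks than this.

def sorted_by_position(patterns):
    # Order patterns so corresponding features line up for comparison.
    return sorted(patterns, key=lambda p: (p["x"], p["y"]))

def patterns_match(design, extracted, tol_nm=2.0):
    # Each pattern: {"x": ..., "y": ..., "width": ..., "length": ...} in nm.
    if len(design) != len(extracted):
        return False  # a missing or extra feature fails verification
    for d, e in zip(sorted_by_position(design), sorted_by_position(extracted)):
        if any(abs(d[k] - e[k]) > tol_nm for k in ("x", "y", "width", "length")):
            return False  # position or shape deviates beyond the tolerance
    return True

A result of False in a comparison like this corresponds to the “tampered with” outcome handled at step 312, discussed below.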
The pattern verification system 126, at step 310, then compares the image data 138 for the current layer of patterned features with the corresponding design data 134 to determine if the current pattern of features on the wafer 120 matches the intended pattern of features as defined by the design data 134. For example, FIG. 5 shows one example of design data 502 comprising a plurality of desired patterns 504 to 512. In this example, the design data 502 comprises a rendered or simulated desired layout of patterns associated with the current fabricated layer of the wafer 120. FIG. 6 shows one example of wafer image data 602 obtained for the current layer of patterned features of the wafer 120. The pattern verification system 126, in this example, compares the desired pattern layout shown in FIG. 5 to the fabricated pattern layout shown in FIG. 6 and determines that the layout, shape, size, etc. of the desired patterns 504 to 512 and the actual patterns 604 to 612 match (at least within a given threshold). Therefore, the current layer of patterned features is considered verified/authentic and the wafer 120 is considered secure (e.g., not compromised) since the layer of patterned features matches the desired layer of patterned features. However, consider the wafer image data 702 shown in FIG. 7 representing another example of a fabricated layer of patterned features for the wafer 120. In this example, the pattern verification system 126 determines that the pattern of features for the current layer does not match the desired pattern of features as defined by the design data 502 shown in FIG. 5. For example, features 706 to 710 of FIG. 7 do not match the position/location and shape of the corresponding features 506 to 510 of FIG. 5. Therefore, the layer of patterned features associated with the wafer image data 702 of FIG. 7 is considered “not verified” or “tampered with” and the corresponding wafer is considered compromised. The pattern verification system 126 may utilize various techniques to compare the wafer image data 138 for the current layer of patterned features with the corresponding design data 134. For example, in one embodiment, image analysis techniques are utilized to compare an image of the current feature patterns to a rendered/simulated image of the intended feature patterns defined by the design data 134. In some embodiments, an actual image of the corresponding photomask may be utilized as well. In another embodiment, data such as pattern locations/coordinates, pattern shapes, pattern dimensions (e.g., length and width), and/or the like are extracted from the image 138 of the current pattern of features and compared to similar data in the design data 134. It should be noted that other methods/techniques for comparing the image 138 of the current pattern of features and corresponding design data 134 are applicable as well. In one embodiment, the pattern verification system 126 stores the results of the pattern inspection operation as part of the wafer data 130. For example, data such as a unique identifier associated with the wafer 120, an identifier associated with the current patterned layer being inspected, time and date, an indication whether the inspected layer is verified or not verified (e.g., unauthorized changes/modifications made to the layer), and/or the like may be stored. In addition, the processes discussed above may also be utilized to perform pattern verification for integrated wafers comprising previously fabricated levels. 
In this embodiment, the pattern verification system 126 overlays the pattern of expected features obtained from the design data 134 onto the previously fabricated levels of patterned features as shown in FIG. 8. For example, FIG. 8 shows an expected pattern of features 802 to 808 overlaid on previously fabricated levels of patterned layers 810 to 812. The expected pattern of features 802 to 808 may be overlaid onto the previously patterned features by a projection mechanism, simulation, and/or the like. The pattern verification system 126 then compares an image of the overlaid/previously patterned features with the image of the actual patterned features, similar to that discussed above with respect to FIGS. 5-7. Returning now to step 312 of FIG. 3, if the pattern verification system 126 determines that the current layer of patterned features has been tampered with, the flow proceeds to entry point C of FIG. 2 where one or more security measures are taken at step 216. For example, the information processing system 124 may generate one or more commands that are issued to one or more components of the fabrication facility 102 to shut down production. In another example, the information processing system 124 may automatically (or be manually instructed to) destroy the compromised wafer 120. Alternatively, the information processing system 124 may instruct one or more components of the TWIS 104 to remove the compromised wafer 120 from the fabrication line and place the compromised wafer in a quarantine area where the chips may be further inspected by authorized personnel. In yet another example, a message(s) may be sent from the information processing system 124 to one or more information processing systems via the network 140 indicating that a given wafer 120 has been compromised. The message may be sent as soon as a determination is made that the wafer 120 has been compromised, after fabrication of the wafer 120 has completed, after fabrication of a given number of wafers 120 has been completed, and/or the like. The message, in one embodiment, comprises data such as the unique identifier associated with the wafer 120, the identifier associated with the current patterned layer that has been compromised, time and date of layer inspection, fabrication facility identifier, and/or the like. The entity receiving the message(s) may then take an appropriate action. After security measures have been taken, processing may return to step 202 for the next layer or wafer to be fabricated, or fabrication may be stopped at step 220 depending on the configuration of the TWIS 104. If the current layer of patterned features has been verified, a determination is made, at step 314, whether wafer layer inspection is to be performed. In one embodiment, wafer layer inspection includes measuring the thickness at one or more locations on the layer across one or more areas of the wafer 120. In one embodiment, the information processing system 124 utilizes the wafer layer inspection data 136 to determine whether wafer layer inspection/verification operations are to be performed. The wafer layer inspection data 136 may comprise data such as wafer identifiers, layer identifiers, measurement location data, layer feature identifiers, layer thickness measurement data, and/or the like. Wafer identifier data comprises a unique identifier associated with a wafer 120. Layer identifier data indicates at which fabrication layer or layers inspection/verification operations are to be performed. 
Measurement location data comprises coordinates or other location identifying mechanisms indicating areas on a wafer that have been selected for thickness measurement operations. Measurement location data further indicates one or more locations at which thickness measurements are to be taken for the layer located within the selected wafer areas. Layer feature identifier data indicates the type of feature to be measured such as isolation regions, encapsulation layers, dielectric layers, and/or the like. Layer thickness measurement data comprises thickness measurement data obtained for the wafer layers associated with the one or more selected wafer areas. The wafer layer inspection data 136 may be global across all wafers, specific to one or more wafers 120, to one or more dies, fabrication layers, and/or the like. The TWIS 104 may be configured with the same wafer layer inspection data 136 for all wafers or different wafer layer inspection data 136 may be utilized for one or more different wafers, dies, fabrication layers, etc. In some embodiments, the TWIS 104 is configured to perform thickness measurements for every fabrication layer. In these embodiments, the information processing system 124 does not need to make the determination at step 314 whether wafer layer thickness inspection is to be performed nor does the wafer layer inspection data 136 need to be analyzed for making this determination. However, the wafer layer inspection data 136 still may be utilized to determine the inspection parameters/attributes for the current wafer layer. For example, the wafer layer inspection data 136 may still be utilized to determine which areas of the wafer are to be inspected and which locations/features of the layer within the identified wafer areas are to be measured. In other embodiments, the TWIS 104 is configured to perform thickness measurements for one or more previously selected fabrication layers (i.e., not randomly or dynamically selected). In another embodiment, the information processing system 124 randomly determines when and where wafer layer thickness inspection is to be performed. In these embodiments, the information processing system 124 is configured to randomly select at least one layer of patterned features for an associated thickness inspection process. The information processing system 124 may also randomly select wafer areas and locations/features of a selected layer within the wafer areas for inspection. Information identifying the randomly selected layers, wafer areas, and layer locations/features may be stored within the wafer layer inspection data 136. Accordingly, the information processing system 124 may utilize various mechanisms such as wafer data 130 analysis, random selection, hard coding, and/or the like to determine when to perform wafer layer thickness inspection. If the information processing system 124 determines that wafer layer thickness inspection is not to be performed for the current wafer layer, processing continues to step 318, which is discussed in greater detail below. However, when the information processing system 124 determines that wafer layer thickness inspection is to be performed for the current wafer layer, the information processing system 124 initiates the wafer layer measurement system 128 to obtain layer measurements at step 316. In one embodiment, the wafer layer measurement system 128 measures the thickness of the wafer layer at one or more locations on the layer and across one or more areas of the wafer 120. For example, FIG. 
9 shows a cross-sectional view of a semiconductor device 900 formed on a wafer 120 at a first wafer site 1002 (FIG. 10). In this example, the wafer layer 902 has been fabricated on a substrate 904 and comprises shallow trench isolation regions 906 to 910; source/drains 912 to 918; gate stacks 920, 922; gate spacers 924, 926; and an encapsulation layer 928. In this example, after analyzing the wafer layer inspection data 136, the wafer layer measurement system 128 determines that the thickness of a shallow trench isolation region 906 and the thickness of a portion of the encapsulation layer 928 are to be measured. In at least some embodiments, portions/features of a layer that will not be covered by subsequent processing steps are selected to be inspected so that they may be re-measured at a later point in time. The measurement system 128 obtains a first thickness measurement A 930 for the shallow trench isolation region 906 and a second thickness measurement B 932 for the portion of the encapsulation layer 928. In one embodiment, the measurement system 128 comprises one or more systems/tools capable of measuring layer/feature thickness such as (but not limited to) an ellipsometer, an interferometer, a microspectrophotometer, and/or the like. The wafer layer measurement system 128, in one embodiment, stores its measurements as part of the wafer layer inspection data 136, although the measurements may be stored separately from the wafer layer inspection data 136 as well. Other data may be stored along with the thickness measurement data, such as a layer identifier, the location on the layer at which the measurements were taken, feature types that were measured, area of the wafer in which the measurement was taken, and/or the like. In some embodiments, the wafer layer measurement system 128 only performs a wafer layer thickness inspection at a single wafer site 1002. However, in other embodiments, the wafer layer measurement system 128 performs wafer layer thickness inspection across multiple sites 1002 to 1010 as shown in FIG. 10. For example, FIG. 10 shows a top-down view of the wafer 120 where multiple inspection sites 1002 to 1010 are indicated across the wafer 120. Each site 1002 to 1010 is in a different area of the wafer 120, such as different dies. The thickness of the same portions/features of corresponding layers may be measured in each of the different inspection sites 1002 to 1010. Alternatively, the thickness of different portions/features of corresponding layers may be measured for two or more of the inspection sites 1002 to 1010. The location and/or number of the inspection sites across the wafer 120 may be the same or different for two or more wafer layers. For example, FIG. 11 shows the inspection sites 1102 to 1108 for a second wafer layer 1202 (FIG. 12) as having a different number of sites and different locations than the inspection sites 1002 to 1010 for the first wafer layer 902. Returning now to FIG. 3, after the wafer layer thickness inspection has been performed for one or more selected inspection sites 1002 to 1010, processing flows to step 318 where the information processing system 124 determines whether wafer layer thickness verification is to be performed for one or more previously fabricated layers. It should be noted that this determination may also be performed prior to or concurrently with the determination made in step 314. In one embodiment, the information processing system 124 makes the determination at step 318 by analyzing the wafer data 130.
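As an illustrative aside, the following is a minimal sketch of how the wafer layer inspection data 136 and the random selection of inspection layers and sites described above might be represented. All names, fields, and the selection logic are hypothetical assumptions for illustration; the patent does not prescribe a data format.

    # Hypothetical sketch only: a measurement record and a randomized
    # inspection plan of the kind described for inspection data 136.
    import random
    from dataclasses import dataclass

    @dataclass
    class ThicknessRecord:
        wafer_id: str          # unique identifier of the wafer 120
        layer_id: int          # fabrication layer (e.g., 902)
        site: tuple            # inspection site location on the wafer
        feature: str           # e.g., "shallow_trench_isolation"
        thickness_nm: float    # measured thickness (measurement A, B, ...)

    def select_inspection_plan(layer_ids, candidate_sites, n_layers=1, n_sites=3):
        """Randomly choose which layers and wafer sites to inspect."""
        layers = random.sample(layer_ids, min(n_layers, len(layer_ids)))
        sites = random.sample(candidate_sites, min(n_sites, len(candidate_sites)))
        # Each randomly chosen layer is inspected at the chosen sites.
        return {layer: sites for layer in layers}

Consistent with the text, a plan built this way would favor features that later processing steps leave exposed, so they remain re-measurable.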
For example, at step 318 the information processing system 124 determines whether any layer thickness measurement data has been stored for previously fabricated layers. If layer thickness measurement data has not been stored for previously fabricated layers, the information processing system 124 determines that wafer layer thickness verification does not need to be performed for any previously fabricated layers, and processing returns to step 202 for processing of subsequent wafer layers. For example, when considering a wafer 120 at the fabrication point shown in FIG. 9, the information processing system 124 would determine that wafer layer thickness verification does not need to be performed since measurements have not been taken for previous fabrication layers. If layer thickness measurement data has been stored for one or more previously fabricated layers, or if the information processing system 124 determines that fabrication of the wafer 120 has completed, processing flows to FIG. 4 wherein one or more wafer layer thickness verification operations are performed. For example, consider the example shown in FIG. 12 where subsequent layers 1202, 1204 of patterned features have been formed on layer 902 of the semiconductor device 900 in FIG. 9. In this example, a second layer 1202 comprises one or more contacts 1206 to 1216, a dielectric layer 1218, and an encapsulation layer 1120. A third layer 1204 comprises metallization layers 1220, 1222; a dielectric layer 1224; and an encapsulation layer 1226. The information processing system 124 determines from, for example, the wafer data 130 that wafer layer thickness verification is to be performed for the first wafer layer 902 and the second wafer layer 1202. The information processing system 124, at step 402, obtains layer/feature thickness measurement data for these wafer layers 902, 1202 from the wafer data 130. The system 124, at step 404, instructs the wafer layer measurement system 128 to re-measure the portions/features of the wafer layers 902, 1202 at their previously measured layer locations and inspection sites to obtain measurements A 930 and B 932 for the first layer 902 and measurements C 1228 and D 1230 for the second layer 1202. As discussed above, the information processing system 124 may determine the previously measured wafer layer portions/features, their locations, and inspection sites from wafer data 130 such as the wafer layer inspection data 136. It should be noted that, in another embodiment, if one or more previously fabricated layers and current layers are accessible at the same time, the information processing system 124 measures the thickness of these layers together to obtain a single thickness measurement for the multiple layers. The information processing system 124 compares the previous thickness measurement data for these wafer layers 902, 1202 to the new measurement data at step 406. The information processing system 124, at step 408, then determines if the previous thickness measurement data matches the new measurement data based on the comparison performed at step 406. If the measurements do not match, the information processing system 124, at step 410, determines that the wafer 120 has been compromised and the current wafer is an unauthorized/imposter wafer. In other words, the information processing system 124 determines that the wafer has been tampered with or the expected wafer has been replaced with a malicious wafer. Upon this determination, processing flows to entry point C of FIG.
2 where one or more security measures are taken as discussed above. However, if the new thickness measurements match the previous thickness measurements for the wafer layer portions/features, the information processing system 124, at step 412, considers the wafer 120 as verified/authentic. In other words, the current wafer is the expected wafer and has not been compromised or replaced (see the sketch following this passage). The information processing system 124 then determines if fabrication of the wafer 120 has completed. If fabrication has not completed, the process flow returns to entry point A of FIG. 2 where processing is initiated for the next fabrication layer of the wafer 120. However, if fabrication of the wafer 120 has completed, the process flows to entry point B of FIG. 2 where dicing and packaging operations are performed. It should be noted that, in some embodiments, the wafer layer thickness verification operations may also be performed at a customer's trusted location upon receiving the packaged devices. FIG. 13 shows one example of a block diagram illustrating an information processing system 1302 that may be utilized in embodiments of the present invention. The information processing system 1302 may be based upon a suitably configured processing system that implements one or more embodiments of the present invention, such as the information processing systems 104 and/or 106 of FIG. 1. Any suitably configured processing system may be used as the information processing system 1302 in embodiments of the present invention. The components of the information processing system 1302 may include, but are not limited to, one or more processors or processing units 1304, a system memory 1306, and a bus 1308 that couples various system components including the system memory 1306 to the processor 1304. The bus 1308 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Although not shown in FIG. 13, the main memory 1306 may include the various types of data 134, 136, and 138 discussed above with respect to FIG. 1. The system memory 1306 may also include computer system readable media in the form of volatile memory, such as random access memory (RAM) 1310 and/or cache memory 1312. The information processing system 1302 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 1314 may be provided for reading from and writing to non-removable or removable, non-volatile media such as one or more solid state disks and/or magnetic media (typically called a “hard drive”). A magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each may be connected to the bus 1308 by one or more data media interfaces.
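The following is a minimal, hypothetical sketch of the thickness verification of FIG. 4 (steps 406-412) and of the compromise notification described earlier. The tolerance value, field names, and helper names are assumptions, since the text states only that stored and re-measured thicknesses must match and lists the kinds of data the message may carry.

    # Hypothetical sketch: compare stored vs. re-measured thicknesses
    # (steps 406-412) and, on a mismatch, build the alert message the
    # information processing system 124 may send over the network 140.
    import json
    from datetime import datetime, timezone

    TOLERANCE_NM = 0.5  # assumed match tolerance; not specified in the text

    def verify_layer_thickness(previous, current):
        """Return True only if every re-measurement matches its stored value."""
        for key, old_value in previous.items():  # key: (layer_id, site, feature)
            new_value = current.get(key)
            if new_value is None or abs(new_value - old_value) > TOLERANCE_NM:
                return False  # tampered or imposter wafer
        return True

    def build_tamper_alert(wafer_id, layer_id, facility_id):
        """Assemble the compromise notification described in the text."""
        return json.dumps({
            "wafer_id": wafer_id,                 # unique wafer identifier
            "compromised_layer": layer_id,        # patterned layer identifier
            "inspected_at": datetime.now(timezone.utc).isoformat(),
            "facility_id": facility_id,           # fabrication facility identifier
        })

    # Measurements A/B on layer 902 and C/D on layer 1202, keyed by
    # (layer, site, feature); values are thicknesses in nanometers.
    previous = {(902, 1002, "STI"): 42.0, (1202, 1002, "contact"): 18.5}
    current = {(902, 1002, "STI"): 42.1, (1202, 1002, "contact"): 23.9}
    if not verify_layer_thickness(previous, current):
        # The contact thickness no longer matches, so an alert is emitted.
        print(build_tamper_alert("W-0001", 1202, "FAB-1"))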
The memory 1306 may include at least one program product having a set of program modules that are configured to carry out the functions of an embodiment of the present invention. Program/utility 1316, having a set of program modules 1318, may be stored in memory 1306 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 1318 generally carry out the functions and/or methodologies of embodiments of the present invention. The information processing system 1302 may also communicate with one or more external devices 1320 such as a keyboard, a pointing device, a display 1322, etc.; one or more devices that enable a user to interact with the information processing system 1302; and/or any devices (e.g., network card, modem, etc.) that enable the information processing system 1302 to communicate with one or more other computing devices. Such communication may occur via I/O interfaces 1324. Still yet, the information processing system 1302 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 1326. As depicted, the network adapter 1326 communicates with the other components of information processing system 1302 via the bus 1308. Other hardware and/or software components can also be used in conjunction with the information processing system 1302. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Reference in the specification to “one embodiment” or “an embodiment” of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment. As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, various aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.)
or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system”. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Python, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Aspects of the present invention have been discussed above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to various embodiments of the invention. 
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Although specific embodiments have been disclosed, those having ordinary skill in the art will understand that changes can be made to the specific embodiments without departing from the spirit and scope of the invention. The scope of the invention is not to be restricted, therefore, to the specific embodiments, and it is intended that the appended claims cover any and all such applications, modifications, and embodiments within the scope of the present invention. It should be noted that some features of the present invention may be used in one embodiment thereof without use of other features of the present invention. As such, the foregoing description should be considered as merely illustrative of the principles, teachings, examples, and exemplary embodiments of the present invention, and not a limitation thereof. Also note that these embodiments are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. 
16683655 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Jun 25th, 2019 12:00AM https://www.uspto.gov?id=US11315544-20220426 Cognitive modification of verbal communications from an interactive computing device A method includes: determining, by a computer device, a current context associated with a user that is the target audience of an unprompted verbal output of an interactive computing device; determining, by the computer device, one or more parameters that are most effective in getting the attention of the user for the determined current context; and modifying, by the computer device, the unprompted verbal output of the interactive computing device using the determined one or more parameters. 11315544 1. A method, comprising: creating, by a server, a corpus of data based on plural verbal interactions involving a user; analyzing, by the server, the corpus of data to determine output parameters that increase an attentiveness of the user in different contexts; determining, by the server, a verbal output of an interactive computing device to the user; determining, by the server, a current context; modifying, by the server, the verbal output based on the determined output parameters and the current context; and causing, by the server, the interactive computing device to output the modified verbal output to the user. 2. The method of claim 1, wherein the plural verbal interactions comprise verbal interactions between the user and the interactive computing device. 3. The method of claim 1, wherein the plural verbal interactions comprise verbal interactions between the user and another person. 4. The method of claim 1, wherein the corpus of data includes plural entries, wherein each entry includes: a context associated with a respective verbal output; a measure of effectiveness of the respective verbal output; and at least one parameter of the respective verbal output. 5. The method of claim 4, further comprising determining the context associated with the respective verbal output based on data from at least one selected from the group consisting of: a microphone that obtains audio data of an environment around the user; a video camera that obtains video data of the environment around the user; a biometric sensor that obtains biometric data of the user; a spatial sensor that obtains spatial data of the user; and a motion sensor or a proximity sensor that detects the presence of a person in a predefined area. 6. The method of claim 4, wherein context associated with the respective verbal output is selected from the group consisting of: an activity the user is performing at the time of the verbal interaction; a biometric state of the user at the time of the verbal interaction; and an amount of environmental noise at the time of the verbal interaction. 7. The method of claim 4, wherein the context associated with the respective verbal output comprises plural determined contexts. 8. The method of claim 4, wherein the measure of effectiveness of the respective verbal output comprises a combination of measures from plural detected reactions by the user. 9. The method of claim 4, wherein the at least one parameter of the respective verbal output comprises plural different parameters. 10. 
The method of claim 1, wherein the determining the current context comprises obtaining and analyzing data from at least one selected from the group consisting of: a microphone that obtains audio data of an environment around the user; a video camera that obtains video data of the environment around the user; a biometric sensor that obtains biometric data of the user; a spatial sensor that obtains spatial data of the user; and a motion sensor or a proximity sensor that detects the presence of a person in a predefined area. 11. The method of claim 1, wherein the modifying the verbal output comprises changing at least one selected from the group consisting of: name or names used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation of words used in the verbal output; and language used in the verbal output. 12. The method of claim 1, further comprising: observing a response of the user to the modified verbal output; and updating the corpus of data to include the current context, the modified verbal output, and the response of the user. 13. A method, comprising: creating, by an interactive computing device, a corpus of data based on plural verbal interactions involving a user; analyzing, by the interactive computing device, the corpus of data to determine output parameters that increase an attentiveness of the user in different contexts; determining, by the interactive computing device, a verbal output of the interactive computing device to the user; determining, by the interactive computing device, a current context; modifying, by the interactive computing device, the verbal output based on the determined output parameters and the current context; and outputting, by the interactive computing device, the modified verbal output to the user. 14. The method of claim 13, wherein: the current context is selected from the group consisting of: an activity the user is performing; a biometric state of the user; and an amount of environmental noise; and the modifying the verbal output comprises changing at least one selected from the group consisting of: name or names used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation of words used in the verbal output; and language used in the verbal output. 15. The method of claim 13, wherein the interactive computing device is a robot that performs at least one physical task. 16. A computer program product, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer device to cause the computer device to: analyze a corpus of data of a user, wherein the analyzing comprises using machine learning to determine parameters that are most effective in getting the user's attention in different contexts; determine a current context of the user based on the user being a target audience of a verbal output of an interactive computing device; modify the verbal output based on the current context and the determined parameters; and cause the interactive computing device to output the modified verbal output to the user. 17. 
The computer program product of claim 16, wherein: the current context is selected from the group consisting of: an activity the user is performing; a biometric state of the user; and an amount of environmental noise; and the modifying the verbal output comprises changing at least one selected from the group consisting of: name or names used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation of words used in the verbal output; and language used in the verbal output. 18. A system, comprising: a processor, a computer readable memory, and a computer readable storage medium; program instructions to determine a verbal output for an interactive computing device to present to a user; program instructions to determine a current context of the user; program instructions to modify the verbal output based on verbal output parameters, wherein the modifying comprises matching the current context to one of plural different contexts of a group of contexts, and the verbal output parameters used in the modifying the verbal output are parameters that are determined to be most effective for getting the attention of the user in the one of the plural different contexts; and program instructions to cause the interactive computing device to output the modified verbal output to the user, wherein the program instructions are stored on the computer readable storage medium for execution by the processor via the computer readable memory. 19. The system of claim 18, wherein the current context is selected from the group consisting of: an activity the user is performing; a biometric state of the user; and an amount of environmental noise. 20. The system of claim 18, wherein the modifying the verbal output comprises changing at least one selected from the group consisting of: name or names used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation of words used in the verbal output; and language used in the verbal output. 21. A method, comprising: determining, by a computer device, a current context associated with a user that is the target audience of an unprompted verbal output of an interactive computing device; determining, by the computer device, one or more parameters that are most effective in getting the attention of the user for the determined current context; and modifying, by the computer device, the unprompted verbal output of the interactive computing device using the determined one or more parameters. 22. The method of claim 21, wherein the computer device comprises a server that communicates with the interactive computing device via a network, and further comprising the server transmitting data defining the modified unprompted verbal output to the interactive computing device for output to the user by the interactive computing device. 23. The method of claim 21, wherein the interactive computing device is a robot that performs at least one physical task. 24. The method of claim 21, wherein the context comprises at least one selected from the group consisting of: an amount of environmental noise; at least one activity the user is engaged in; and a biometric state of the user. 25. 
The method of claim 21, wherein the parameter comprises at least one selected from the group consisting of: name or names used to address the user in the unprompted verbal output; volume of the unprompted verbal output; cadence of the unprompted verbal output; specific words used in the unprompted verbal output; categories of words used in the unprompted verbal output; pronunciation of words used in the unprompted verbal output; and language used in the unprompted verbal output. 25 BACKGROUND The present invention relates generally to interactive computing devices and, more particularly, to cognitive modification of verbal communications from an interactive computing device. Interactive computing devices may include voice command devices, such as smart speakers, smartphones, robots, etc., that include an integrated virtual assistant, where the integrated virtual assistant is a software agent that is configured to perform tasks for an individual (e.g., a user) based on verbal commands from the user. In the case of smart speakers and smartphones, the task is most often a verbal (e.g., audio) output from the interactive computing device to the user. In the case of robots, the task may include a verbal output from the interactive computing device and additionally a physical task performed by the robot. In both cases, the verbal output may be prompted (e.g., immediately in response to a verbal command/question from the user) or unprompted (e.g., not immediately in response to a verbal command/question from the user). Current interactive computing devices only allow for a minimal amount of personalization of the verbal output. In one example, a user may configure an interactive computing device to address the user by a particular name. In another example, a user may configure an interactive computing device to provide verbal output in one of plural predefined voice styles. However, once a particular one of the plural predefined voice styles is selected, the interactive computing device uses only the selected voice style, and does not produce any verbal output that varies from the selected voice style. SUMMARY In a first aspect of the invention, there is a computer-implemented method including: creating, by a server, a corpus of data based on plural verbal interactions involving a user; analyzing, by the server, the corpus of data to determine output parameters that increase an attentiveness of the user in different contexts; determining, by the server, a verbal output of an interactive computing device to the user; determining, by the server, a current context; modifying, by the server, the verbal output based on the determined output parameters and the current context; and causing, by the server, the interactive computing device to output the modified verbal output to the user. In embodiments, the corpus of data includes plural entries, wherein each entry includes: a context associated with a respective verbal output; a measure of effectiveness of the respective verbal output; and at least one parameter of the respective verbal output. In this manner, embodiments of the invention advantageously personalize the verbal output of an interactive computing device in a way that is determined from analyzing historic interactions to be most effective at getting the attention of the target audience. 
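As an illustrative aside, a corpus entry of the kind described above — a context, the output parameters used, and a measure of effectiveness — might be sketched as follows. All names are hypothetical assumptions, as the disclosure does not define a storage format.

    # Hypothetical sketch of the corpus of data and of selecting, per
    # context, the parameters with the highest observed effectiveness.
    from dataclasses import dataclass

    @dataclass
    class CorpusEntry:
        context: str          # e.g., "cooking", "high_noise", "exercising"
        parameters: dict      # e.g., {"volume": 0.8, "name": "Bob", "cadence": "slow"}
        effectiveness: float  # combined measure of the user's reaction

    def best_parameters_by_context(corpus):
        """Return the most effective parameters observed for each context."""
        best = {}
        for entry in corpus:
            top = best.get(entry.context)
            if top is None or entry.effectiveness > top.effectiveness:
                best[entry.context] = entry
        return {ctx: e.parameters for ctx, e in best.items()}

In practice the analysis step could be any learning procedure; the simple per-context maximum above merely illustrates how observed reactions would drive parameter choice.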
In another aspect of the invention, there is a method comprising: creating, by an interactive computing device, a corpus of data based on plural verbal interactions involving a user; analyzing, by the interactive computing device, the corpus of data to determine output parameters that increase an attentiveness of the user in different contexts; determining, by the interactive computing device, a verbal output of the interactive computing device to the user; determining, by the interactive computing device, a current context; modifying, by the interactive computing device, the verbal output based on the determined output parameters and the current context; and outputting, by the interactive computing device, the modified verbal output to the user. In embodiments, the interactive computing device is a robot that performs at least one physical task. In this manner, embodiments of the invention advantageously personalize the verbal output of an interactive computing device in a way that is determined from analyzing historic interactions to be most effective at getting the attention of the target audience. In another aspect of the invention, there is a computer program product including a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by a computer device to cause the computer device to analyze a corpus of data of a user, wherein the analyzing comprises using machine learning to determine parameters that are most effective in getting the user's attention in different contexts; determine a current context of the user based on the user being a target audience of a verbal output of an interactive computing device; modify the verbal output based on the current context and the determined parameters; and cause the interactive computing device to output the modified verbal output to the user. In embodiments, the current context is selected from the group consisting of: an activity the user is performing; a biometric state of the user; and an amount of environmental noise; and the modifying the verbal output comprises changing at least one selected from the group consisting of: name or names used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation of words used in the verbal output; and language used in the verbal output. In this manner, embodiments of the invention advantageously personalize the verbal output of an interactive computing device in a way that is determined from analyzing historic interactions to be most effective at getting the attention of the target audience. In another aspect of the invention, there is a system including a processor, a computer readable memory, and a computer readable storage medium. The system includes: program instructions to determine a verbal output for an interactive computing device to present to a user; program instructions to determine a current context of the user; program instructions to modify the verbal output based on verbal output parameters determined to be the most effective for the user for the current context; and program instructions to cause the interactive computing device to output the modified verbal output to the user. The program instructions are stored on the computer readable storage medium for execution by the processor via the computer readable memory.
In embodiments, the current context is selected from the group consisting of: an activity the user is performing; a biometric state of the user; and an amount of environmental noise; and the modifying the verbal output comprises changing at least one selected from the group consisting of: name or names used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation of words used in the verbal output; and language used in the verbal output. In this manner, embodiments of the invention advantageously personalize the verbal output of an interactive computing device in a way that is determined from analyzing historic interactions to be most effective at getting the attention of the target audience. In another aspect of the invention, there is a method comprising: determining, by a computer device, a current context associated with a user that is the target audience of an unprompted verbal output of an interactive computing device; determining, by the computer device, one or more parameters that are most effective in getting the attention of the user for the determined current context; and modifying, by the computer device, the unprompted verbal output of the interactive computing device using the determined one or more parameters. In some embodiments, the computer device comprises a server that communicates with the interactive computing device via a network, and further comprising the server transmitting data defining the modified unprompted verbal output to the interactive computing device for output to the user by the interactive computing device. In some embodiments, the interactive computing device is a robot that performs at least one physical task. In this manner, embodiments of the invention advantageously personalize the verbal output of an interactive computing device in a way that is determined from analyzing historic interactions to be most effective at getting the attention of the target audience. BRIEF DESCRIPTION OF THE DRAWINGS The present invention is described in the detailed description which follows, in reference to the noted plurality of drawings by way of non-limiting examples of exemplary embodiments of the present invention. FIG. 1 depicts a computer infrastructure according to an embodiment of the present invention. FIG. 2A shows a block diagram of an exemplary environment in accordance with aspects of the invention. FIG. 2B shows a block diagram of another exemplary environment in accordance with aspects of the invention. FIG. 3 shows a flowchart of an exemplary method in accordance with aspects of the invention. DETAILED DESCRIPTION The present invention relates generally to interactive computing devices and, more particularly, to cognitive modification of verbal communications from an interactive computing device. According to aspects of the invention there is a system and method to personalize a verbal output of an interactive computing device based on a determination of what is most likely to get a user's attention in the current context. In embodiments, the determination is made based on an analysis of historic interactions between the user and the interactive computing device (or between the user and another user). The analysis is performed to determine which parameters are most effective at getting a particular user's attention in a particular context. 
In embodiments, the parameters include at least one of: name(s) used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation used in the verbal output; and language used in the verbal output. In this manner, implementations of the invention modify at least one of these parameters of the verbal output of an interactive computing device to attempt to get the attention of a target user of the verbal output. As described herein, interactive computing devices typically perform both prompted verbal outputs and unprompted verbal outputs. Unprompted verbal outputs involve the interactive computing device attempting to get the attention of a user that is the target audience of the output. The inventors have found that different users react differently and are more attentive to different types of verbal output in different contexts. As a result, different individuals respond differently depending on the volume, word selection, cadence, and pronunciation of a verbal output, and also depending on the context surrounding the verbal output. However, conventional interactive computing devices do not vary any facets of an unprompted verbal output depending on the user and/or the context. Instead, conventional interactive computing devices always use the same user-selected voice style for all verbal outputs. Aspects of the invention address this shortcoming of conventional interactive computing devices by providing a system that modifies a verbal output of an interactive computing device based on parameters that are determined to get (e.g., increase) the attention of the target user. In embodiments, the system analyzes historic interactions between a user and an interactive computing device (and/or between the user and another user), and determines from these historic interactions what types of parameters are most effective in getting (e.g., increasing) the user's attention for a verbal output. The parameters may include, for example, at least one of: name or names used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation of words used in the verbal output; and language used in the verbal output. In embodiments, the analysis takes into account contextual information associated with the historic interactions, such as ambient/extraneous noise level during the interaction, activity being performed by the user during the interaction, and biometric state of the user during the interaction. In particular embodiments, the system determines the current context associated with a user that is the target audience of an unprompted verbal output, determines the one or more parameters that are most effective in getting the attention of the user for the determined context, and modifies the verbal output of the interactive computing device using the determined one or more parameters. In this manner, implementations of the invention advantageously personalize the verbal output of the interactive computing device based on a determination of what is most likely to get the user's attention in the current context.
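A minimal sketch of such a modification step follows, under the assumption that learned parameters are kept per context as in the earlier corpus sketch; the structure handed to the device's text-to-speech engine is purely illustrative.

    # Hypothetical sketch: apply the learned parameters for the current
    # context to an unprompted verbal output before it is spoken.
    def modify_verbal_output(text, context, learned_parameters):
        params = learned_parameters.get(context, {})
        if "name" in params:                       # preferred form of address
            text = f"{params['name']}, {text}"
        return {
            "text": text,
            "volume": params.get("volume", 0.5),   # e.g., raised when noisy
            "cadence": params.get("cadence", "normal"),
            "language": params.get("language", "en"),
        }

    learned = {"high_noise": {"name": "Bob", "volume": 0.9, "cadence": "slow"}}
    print(modify_verbal_output("the oven timer has expired", "high_noise", learned))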
As a result, implementations of the invention provide a technical solution to the technical problem of interactive computing devices that deliver verbal output in a non-personalized manner, where such non-personalized verbal output is less likely to get the attention of a user in a particular context. Embodiments of the invention utilize IoT (Internet of Things) devices to capture information about an individual that helps the interactive computing device elicit the desired response from that individual. In embodiments, the interactive computing device is configured to modify its output volume based on the individual it is addressing and the current environmental noises, change the choice of names based on learned responses from others addressing the individual, change the pace of the conversation when addressing someone based on how that person has responded in the past, or even respond in a different language. Embodiments of the invention are thus directed to modifying an interactive conversation between a person and an interactive computing device based on cognitive analysis of historical responses and current conditions. Embodiments of the invention improve the technology of interactive computing devices by modifying verbal outputs of the interactive computing device based on the user and, more particularly, based on a determination of what is most likely to get the user's attention in the current situation. Embodiments of the invention employ an unconventional arrangement of steps including: creating a corpus of data based on plural verbal interactions involving a user; analyzing the corpus of data to determine output parameters that increase an attentiveness of the user in different contexts; determining a verbal output of an interactive computing device to the user; determining a current context; modifying the verbal output based on the determined output parameters and the current context; and causing the interactive computing device to output the modified verbal output to the user. The steps themselves are unconventional, and the combination of the steps is also unconventional. For example, the step of analyzing the corpus of data to determine output parameters that increase an attentiveness of the user in different contexts creates new information that does not exist in the system, and this new data is then used in subsequent steps in an unconventional manner (e.g., in the step of modifying the verbal output based on the determined output parameters and the current context). Embodiments of the invention also utilize elements and/or techniques that are necessarily rooted in computer technology, including generating and modifying verbal outputs of interactive computing devices. To the extent implementations of the invention collect, store, or employ personal information provided by, or obtained from, individuals (for example, data associated with user interactions with an interactive computing device, etc.), such information shall be used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage, and use of such information may be subject to consent of the individual to such activity, for example, through “opt-in” or “opt-out” processes as may be appropriate for the situation and type of information. Storage and use of personal information may be in an appropriately secure manner reflective of the type of information, for example, through various encryption and anonymization techniques for particularly sensitive information.
The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. 
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Referring now to FIG. 1, a schematic of an example of a computer infrastructure is shown. Computer infrastructure 10 is only one example of a suitable computer infrastructure and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, computer infrastructure 10 is capable of being implemented and/or performing any of the functionality set forth hereinabove. In computer infrastructure 10 there is a computer system 12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system 12 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system 12 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown in FIG. 1, computer system 12 in computer infrastructure 10 is shown in the form of a general-purpose computing device. The components of computer system 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16. Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system 12, and it includes both volatile and non-volatile media, removable and non-removable media. System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. 
Computer system 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 18 by one or more data media interfaces. As will be further depicted and described below, memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention. Program/utility 40, having a set (at least one) of program modules 42, may be stored in memory 28 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments of the invention as described herein. Computer system 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a display 24, etc.; one or more devices that enable a user to interact with computer system 12; and/or any devices (e.g., network card, modem, etc.) that enable computer system 12 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 22. Still yet, computer system 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As depicted, network adapter 20 communicates with the other components of computer system 12 via bus 18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. FIG. 2A shows a block diagram of an exemplary environment in accordance with aspects of the invention. In embodiments, the environment includes an interactive computing device 105, a plurality of Internet of Things (IoT) devices 110a-n, and an interaction server 115 connected to a network 120. The network 120 comprises one or more communication networks such as one or more of a LAN, a WAN, and the Internet. In implementations of embodiments of the invention, the interactive computing device 105 is one of a smart speaker, smartphone, and robot, and includes an integrated virtual assistant, where the integrated virtual assistant is a software agent that is configured to perform tasks for a user based on verbal commands from the user. In such implementations, the interactive computing device 105 is a computer device comprising one or more elements of the computer system 12 of FIG. 1 including at least a microphone, a speaker, and a processor. 
In some embodiments, the interactive computing device 105 is a component in a smart home or smart facility in which the interactive computing device 105 is wirelessly connected to a network of home automation devices 113a-n, computers, etc. The interactive computing device 105 may respond to verbal commands (e.g., “turn on the kitchen lights”) by mapping the verbal command to an electronic command, and sending the corresponding command to one of the devices 113a-n capable of executing the command. Examples of such devices 113a-n include but are not limited to: smart thermostats; smart lights; smart electrical outlets; smart music players; smart televisions; smart cameras; smart stoves/ovens; smart refrigerators; smart doorbells; and smart exercise equipment. Additionally, or alternatively, the interactive computing device 105 provides verbal responses to verbal questions presented by a human user. In these implementations, the interactive computing device 105 receives the verbal question from the user via a microphone, converts the received audio to text, and transmits the text of the question to the interaction server 115 via the network 120. The interaction server 115 determines an output based on the question (e.g., typically an answer to the question) and transmits data defining the output to the interactive computing device 105, which converts the data to an audio signal that is output to the user via a speaker of the interactive computing device 105 (e.g., as a verbal output). In particular embodiments, the interactive computing device 105 provides an unprompted verbal output to a user. Examples of unprompted verbal outputs include: alerting the user of an incoming telephone call, email or text message; alerting the user of a person ringing the doorbell; alerting the user of a sensor triggered in a personal security system (e.g., a door sensor, a window sensor, a motion sensor, etc.); alerting the user of a sensor triggered in an environmental security system (e.g., a smoke detector, a carbon monoxide detector, a water leak detector, etc.); alerting the user of the expiration of a timer; alerting the user of smoke detected inside an oven (e.g., indicating that something is burning); alerting a user to perform a user-defined task at a user-defined time; reminding a user to leave at a user-defined time for a user-defined appointment; alerting a user of an interpreted or derived task (e.g., an email indicates that your library books are due today). These examples are not intended to be limiting, and other types of unprompted verbal output may be utilized in implementations. In embodiments, the unprompted verbal output is generated and output by the interactive computing device 105, or is generated by the interaction server 115 and output to the user by the interactive computing device 105. With continued reference to FIG. 2A, the IoT devices 110a-n are network connected devices that are configured to obtain data that is used to determine a context associated with a user (e.g., a human). 
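As a rough illustration of the verbal-command mapping described above, the following Python sketch routes a transcribed command to a capable automation device. The device registry, command vocabulary, and function names are assumptions introduced for illustration only; the disclosure does not specify this mapping.

    # Hypothetical sketch only: names and structure are assumptions.
    AUTOMATION_DEVICES = {
        "kitchen lights": "light-01",   # e.g., a smart light (a device 113a-n)
        "thermostat": "therm-01",       # e.g., a smart thermostat
    }

    def dispatch_verbal_command(transcript):
        """Map a transcript such as 'turn on the kitchen lights' to an
        electronic command for the device capable of executing it."""
        text = transcript.lower()
        for name, device_id in AUTOMATION_DEVICES.items():
            if name in text:
                action = "on" if "turn on" in text else "off"
                return {"device_id": device_id, "action": action}
        return None  # no automation device matched the command

    # dispatch_verbal_command("turn on the kitchen lights")
    # -> {'device_id': 'light-01', 'action': 'on'}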
In embodiments, the IoT devices 110a-n comprise one or more of: at least one microphone that obtains environmental audio data including ambient noise and/or verbal outputs by the user and/or other users; at least one video camera that obtains video data of at least one of body position, physical gestures, and facial expression of the user and/or other users; at least one biometric sensor that obtains biometric data of the user including at least one of heart rate, body temperature, electrodermal activity, blood pressure, and respiration rate; at least one spatial sensor that obtains spatial data of the user including at least one of a global positioning system (GPS) sensor, an accelerometer, a gyroscope, and a compass; and at least one of a motion sensor and a proximity sensor that detects the presence of a person in a predefined area. As described in detail herein, the system uses data from these types of sensors in determining a context associated with a user, and the system uses the determined context as part of the process for determining a modified verbal output of the interactive computing device 105 to present to the user. Still referring to FIG. 2A, the interaction server 115 comprises one or more server computer devices comprising one or more elements of computer system 12 of FIG. 1. In embodiments, the interaction server 115 comprises an output module 125 that is configured to perform one or more of the functions described herein, including: create a corpus of data based on plural verbal interactions involving a user; analyze the corpus of data to determine output parameters that increase an attentiveness of the user in different contexts; determine a verbal output of an interactive computing device to the user; determine a current context; modify the verbal output based on the determined output parameters and the current context; and cause the interactive computing device 105 to output the modified verbal output to the user. In embodiments, the output module 125 comprises one or more program modules 42 as described with respect to FIG. 1. The interaction server 115 also comprises or communicates with a data repository 130, which may comprise a storage system 34 as described with respect to FIG. 1 or some other data storage system. In accordance with aspects of the invention, the interaction server 115 is configured to modify a verbal output to a user based on determined output parameters and a current context, and to transmit the modified verbal output to the interactive computing device 105, which then outputs the modified verbal output to the user. In particular embodiments, the output module 125 is configured to: identify environmental conditions that may impact a person's response; identify user responses to verbal interactions; and modify standard verbal outputs based on cognitive analysis. In embodiments, the output module 125 identifies environmental conditions that may include outside activities or moods that may interfere with a person's ability to focus, or other noises that may impact a person's ability to hear the verbal output of the interactive computing device 105. In particular embodiments, the output module 125 communicates with the IoT devices 110a-n to obtain sound data, biometric data, and image data in the environment of a user. The output module 125 uses this data to determine at least one of: noise level around the user; biometric state of the user; activities the user is engaged in; and devices the user is engaged with. 
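The four determinations listed above can be pictured as a simple record. This minimal Python sketch shows one way the output module 125 might represent a snapshot derived from the IoT data; the field names and units are assumptions, not part of the disclosure.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ContextSnapshot:
        """Illustrative snapshot assembled from IoT device data."""
        noise_level_db: float = 0.0                          # from microphones
        biometric_state: str = "unknown"                     # e.g., "tired", "alert", "relaxed"
        activities: List[str] = field(default_factory=list)  # e.g., ["watching television"]
        engaged_devices: List[str] = field(default_factory=list)  # e.g., ["television"]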
In embodiments, the output module 125 identifies the user responses to verbal interactions by tracking how a person responds to human and computer verbal interactions that can be influenced by the environmental conditions, and which modifications to the verbal output have elicited a more favorable response. In particular embodiments, the output module 125 is configured to: obtain audio data from the interactive computing device 105 and/or IoT devices 110a-n; and track a user's response to questions (posed by the interactive computing device 105 and/or other users) to determine verbal output parameters associated with when this particular user does and does not respond. The parameters may include at least one of: name used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation (e.g., intonation, stress, rhythm, and/or accent) used in the verbal output; and language (e.g., English, Spanish, French, German, etc.) used in the verbal output. In embodiments, the output module 125 modifies a standard verbal output based on cognitive analysis by utilizing the current environmental conditions and comparing those to an individual's historic responses in those conditions to modify the actions by the interactive computing device 105 to produce a more favorable response. In particular embodiments, the output module 125 is configured to: compare current environment information and the individuals needing to respond, and look up historical information for such conditions; when the corpus recommends a change from the standard verbal output, modify one or more parameters of the verbal output; and observe the user response to the modified verbal output and feed this data back into the system for future analysis. FIG. 2B shows a block diagram of another exemplary environment in accordance with aspects of the invention. The environment includes the IoT devices 110a-n, the automation devices 113a-n, and the network 120 as described with respect to FIG. 2A. In the embodiment shown in FIG. 2B, the environment includes an interactive computing device 105′ that includes the output module 125 and the data repository 130. In this manner, the interactive computing device 105′ does not communicate with a remote server (e.g., interaction server 115), and instead performs the functions of the output module 125 locally. In both environments (e.g., FIGS. 2A and 2B), the interactive computing device 105/105′ may comprise a static device such as a smart speaker, for example. Alternatively, in both environments (e.g., FIGS. 2A and 2B), the interactive computing device 105/105′ may comprise a robotic device that additionally performs physical tasks for the user (e.g., such as a kitchen assistant robot, a home security robot, etc.). In both environments, the interactive computing device 105/105′ (referred to hereinafter simply as the interactive computing device 105) and the IoT devices 110a-n obtain data associated with historic interactions between the user and another user and between the user and the interactive computing device 105. The output module 125 obtains this data from the interactive computing device 105 and/or the IoT devices 110a-n, and stores this data as part of the corpus of data for a particular user in the data repository 130. 
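The verbal-output parameters enumerated above lend themselves to a simple structure. In this hedged Python sketch, the field names and types are assumptions chosen for illustration; the disclosure specifies only the categories of parameters.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class OutputParameters:
        """Illustrative parameters of a verbal output tracked per user."""
        address_name: Optional[str] = None          # name used to address the user
        volume_db: Optional[float] = None           # volume of the verbal output
        cadence_wpm: Optional[float] = None         # cadence (speed) of the output
        words: Optional[List[str]] = None           # specific words used
        word_categories: Optional[List[str]] = None # categories of words used
        pronunciation: Optional[str] = None         # intonation, stress, rhythm, accent
        language: Optional[str] = None              # e.g., "English", "French"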
The output module 125 analyzes the historic data for a particular user, e.g., using machine learning or other analysis techniques, to determine which parameters are most effective at getting this user's attention in different contexts. Then, when a current situation warrants a verbal output from the interactive computing device 105, the output module 125 determines a current context from data from the interactive computing device 105 and/or the IoT devices 110a-n. The output module 125 then determines parameters for modifying the verbal output that is to be provided to the user. In embodiments, the output module 125 compares the current context to the historic contexts associated with historic interactions in the data repository, and for historic interactions having the same context as the current context, determines which parameters are most effective in getting the user's attention. The output module 125 then modifies the current verbal output using the determined parameters, e.g., by applying the parameters to the current verbal output. With regard to determining context, in accordance with aspects of the invention, the output module 125 analyzes data from at least one of the interactive computing device 105, one or more of the IoT devices 110a-n, and/or one or more of the automation devices 113a-n to determine a context of a user involved in a verbal interaction with another user or with the interactive computing device 105. The context can comprise at least one of: an amount of environmental noise; at least one activity the user is engaged in; and a biometric state of the user. In some implementations, the context comprises an amount of environmental noise that includes, for example, a volume level of noise (e.g., measured in dB) in a same area as the user, e.g., as detected by one or more microphones of the IoT devices 110a-n in the vicinity of the user. In some implementations, the context comprises an activity the user is engaged in, such as: watching television; listening to music; listening to a podcast; having a discussion with another person; cooking; exercising; reading; working at a computer; playing video games; and sleeping. These examples are not limiting, and the context may comprise other activities. In embodiments, the system determines the activity by analyzing data from at least one of the IoT devices 110a-n and/or at least one of the automation devices 113a-n. For example, the system might determine that the context comprises the user is watching television based on a combination of: data from the television (e.g., an automation device 113a-n) used to determine the television is on; data from a motion sensor and/or a proximity sensor (e.g., an IoT device 110a-n) used to determine a person is in the same room as the television that is on; and data from a video camera (e.g., an IoT device 110a-n) used to identify the user from the face of the person that is in the same room as the television. 
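The television example just given combines three independent signals before attributing the activity to the user. A minimal sketch of that combination follows; the function and parameter names are hypothetical stand-ins for the device, presence, and identification signals described above.

    def infer_watching_television(tv_is_on, person_in_room, face_matches_user):
        """Attribute 'watching television' to the user only when the device
        signal, the presence signal, and the identification signal all
        agree, as in the example above."""
        return bool(tv_is_on and person_in_room and face_matches_user)

    # infer_watching_television(True, True, True)  -> True
    # infer_watching_television(True, True, False) -> False (someone else's face)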
In another example, the system might determine that the context comprises the user is cooking while listening to music and talking to another person based on a combination of: data from a music player (e.g., an automation device 113a-n) used to determine that music is playing; data from a stove (e.g., an automation device 113a-n) used to determine that a burner of the stove is on; data from a refrigerator (e.g., an automation device 113a-n) used to determine that the user is opening and closing the refrigerator door; data from a microphone (e.g., an IoT device 110a-n) used to determine a volume level of the music; and data from a microphone (e.g., an IoT device 110a-n) used to determine that the user is engaged in a conversation with another person. In some implementations, the context comprises a biometric state of the user, such as: tired; alert; and relaxed. For example, the system may be programmed to determine the biometric state of the user based on analyzing data from at least one of: one or more biometric sensors (e.g., to determine one or more of heart rate, blood pressure, and respiration rate); and a camera (e.g., to determine facial expression). These examples are not limiting, and the context may comprise other biometric states. In embodiments, the determined context may include a single context or plural contexts. An example of a single context is: the user is currently watching television (e.g., an activity). An example of plural contexts is: the user is currently watching television (e.g., an activity); the volume level of the environment is 70 dB (e.g., environmental noise); and the user is relaxed (e.g., biometric state). With regard to determining parameters that are effective in getting the user's attention, in accordance with aspects of the invention, the output module 125 analyzes data from at least one of the interactive computing device 105 and one or more IoT devices 110a-n to determine a measure of how much or how little a user responds to a verbal output from another user or from the interactive computing device 105. In some embodiments, the output module 125 analyzes data obtained from one or more microphones (e.g., IoT devices 110a-n) to determine that a target user of a verbal output reacted to the verbal output in one of the following ways: the user acknowledged the verbal output; the user asked for clarification of the verbal output; the user provided a response that is not related to the verbal output; and the user did not respond to the verbal output. These examples are not limiting, and other types of reactions captured via audio data can be analyzed by the system to measure a degree of how much or how little the user reacted to the verbal output. In embodiments, each such detected action may be assigned a predefined score (e.g., measure) defining a degree of how much or how little the user reacted to the verbal output. 
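Since the text says only that each detected reaction "may be assigned a predefined score," the weights in this Python sketch are pure assumptions. It shows how the audio-detected reactions above (and, analogously, the video- and device-detected reactions discussed next) could be folded into a single measure of effectiveness.

    # Assumed weights; the disclosure does not specify values.
    REACTION_SCORES = {
        "acknowledged_output": 1.0,
        "asked_for_clarification": 0.6,
        "unrelated_response": 0.2,
        "no_response": 0.0,
    }

    def effectiveness(detected_reactions):
        """Average the predefined scores of the reactions detected for a
        single verbal output into one measure of effectiveness."""
        if not detected_reactions:
            return 0.0
        return sum(REACTION_SCORES[r] for r in detected_reactions) / len(detected_reactions)

    # effectiveness(["acknowledged_output"])                    -> 1.0
    # effectiveness(["asked_for_clarification", "no_response"]) -> 0.3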
In some embodiments, the output module 125 analyzes data obtained from one or more cameras (e.g., IoT devices 110a-n) to determine that a target user of a verbal output reacted to the verbal output in one of the following ways: the user turned to face the speaker of the verbal output; the user did not turn to face the speaker of the verbal output; the user's facial expression changed immediately after the verbal output; the user's facial expression did not change immediately after the verbal output; the user stopped performing an activity immediately after the verbal output; the user did not stop performing an activity immediately after the verbal output. These examples are not limiting, and other types of reactions captured via video data can be analyzed by the system to measure a degree of how much or how little the user reacted to the verbal output. The system may be programmed with image analysis techniques, such as computer vision techniques, to perform this step of analyzing video data to detect a predefined type of response by a person in the video. In embodiments, each such detected action may be assigned a predefined score (e.g., measure) defining a degree of how much or how little the user reacted to the verbal output. In some embodiments, the output module 125 analyzes data obtained from one or more automation devices 113a-n to determine that a target user of a verbal output reacted to the verbal output in one of the following ways: the user stopped their current activity in response to the verbal output (e.g., the user turned off the television immediately after the verbal output, etc.); the user did not stop their current activity in response to the verbal output (e.g., the user did not turn off the television immediately after the verbal output, etc.). These examples are not limiting, and other types of reactions captured via device data can be analyzed by the system to measure a degree of how much or how little the user reacted to the verbal output. In embodiments, each such detected action may be assigned a predefined score (e.g., measure) defining a degree of how much or how little the user reacted to the verbal output. Still regarding determining parameters that are effective in getting the user's attention, in addition to scoring the user's response to a verbal output, the system analyzes the verbal output to determine parameters that are present in the verbal output. In embodiments, the output module 125 analyzes the text of the verbal output and the audio of the verbal output to determine at least one of: name(s) used to address the user in the verbal output; volume (e.g., dB level) of the verbal output; cadence (e.g., speed) of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation (e.g., intonation, stress, rhythm, and/or accent) used in the verbal output; and language (e.g., English, Spanish, French, German, etc.) used in the verbal output. For example, the output module 125 may determine the presence of these parameters in a verbal output by analyzing the verbal output using at least one of: natural language processing (NLP) techniques (such as natural language understanding (NLU) and/or natural language classification (NLC)); and tone analysis. 
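As a companion to the analysis just described, here is a deliberately simplified sketch of extracting a few of the listed parameters from a transcript and measured audio features. A real system would use NLP/NLU/NLC and tone analysis as the text notes; every name and threshold here is an assumption.

    def extract_parameters(transcript, volume_db, words_per_minute, known_names):
        """Toy extraction of verbal-output parameters (name used, volume,
        cadence, specific words); stands in for NLP and tone analysis."""
        tokens = transcript.lower().split()
        return {
            "address_name": next((n for n in known_names if n.lower() in tokens), None),
            "volume_db": volume_db,
            "cadence_wpm": words_per_minute,
            "words": tokens,
        }

    # extract_parameters("Hey Kevin the water leak sensor is detecting water",
    #                    70.0, 140.0, {"Kevin"})["address_name"]  -> "Kevin"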
In embodiments, for each respective one of plural historic interactions between the user and another user or the interactive computing device 105, the output module 125 determines a context, a measure of effectiveness, and at least one parameter of the verbal output. As described above, the context may include one context or plural contexts. As also described above, the measure of effectiveness may be a single measure or a combination of measures from plural detected reactions by the user. As also described above, the at least one parameter of the verbal output may comprise a single parameter or plural different parameters. The output module 125 stores this data for each respective one of the historic interactions involving the user in a corpus of data for this user in the data repository 130. In embodiments, the system observes interactions of plural different users and creates a respective corpus of data as described herein for each respective user. According to aspects of the invention, the system analyzes the corpus of data for a particular user to determine which parameters have a high measure of getting the attention of the particular user in a given context. As described above, the corpus of data for the user includes plural entries, wherein each entry includes: a context associated with a verbal output; a measure of effectiveness of the verbal output; and at least one parameter of the verbal output. In embodiments, the output module 125 analyzes the plural entries in the corpus of data of a single user to determine which one or more parameters result in the highest measure of effectiveness for a respective context or group of contexts. In particular embodiments, the output module 125 uses machine learning techniques to analyze the corpus of data to make this determination as to which parameters are the most effective for this user for a particular context. The following use cases are described to illustrate functionality of the system and method as described herein. These use cases are exemplary and are not intended to limit the scope of implementations of the invention. In a first exemplary use case, the system observes plural interactions between Pam (a parent) and Claire (a child of the parent, Pam). Based on collecting data from the plural interactions between Pam and Claire and analyzing this data, the system determines that when Claire is engaged in the activity of watching television (e.g., a determined context), the most effective parameter for getting Claire's attention is to increase the volume level of a verbal output by 20% above the normal volume level of a verbal output (e.g., modify the verbal output using a determined parameter). At a later time, the system is scheduled to provide a verbal output to Claire, the verbal output being a reminder to start doing homework at 8:00 PM. At the time of the reminder, the system determines that Claire is currently watching television. As a result, the system modifies the verbal output to Claire to increase the volume of the verbal output by 20% above the normal volume. In a second exemplary use case, the system observes plural interactions between Paul (a parent) and Charles (a child of the parent, Paul). Based on collecting data from the plural interactions between Paul and Charles and analyzing this data, the system determines that when Charles is reading, the most effective manner of getting Charles's attention is to slow the cadence of a verbal output by 10% and to repeat Charles's name until he responds. 
At a later time, the system is scheduled to provide a verbal output to Charles, the verbal output being a reminder to brush his teeth at 8:00 PM. At the time of the reminder, the system determines that Charles is currently reading. As a result, the system modifies the verbal output to Charles to decrease the cadence of the verbal output and to repeat Charles's name three times at the beginning of the verbal output. In a third exemplary use case, the system observes plural interactions between Frank and the interactive computing device, and also plural interactions between Frank and other people. Based on collecting data from the plural interactions involving Frank and analyzing this data, the system determines that when Frank is listening to music, Frank only responds to a subset of verbal outputs from the interactive computing device, that Frank frequently asks other people to repeat themselves, and that Frank's responses to other people and the interactive computing device are not related to the original interaction. Based on this analysis, the system determines that when Frank is listening to music, the most effective manner of getting Frank's attention is to decrease the volume of the music and to decrease the cadence of the verbal output to Frank. The system utilizes these modifications to the next verbal output to Frank when Frank is listening to music. In a fourth exemplary use case, the system observes plural interactions between Kevin and other people. Based on collecting data from the plural interactions between Kevin and other people and analyzing this data, the system determines that when Kevin is on a telephone call with another person (e.g., a determined context) and is tired (e.g., a determined biometric state), the most effective manner of getting Kevin's attention is to say his first name and then his last name fast and loud (e.g., at an elevated cadence and elevated volume), wait for two seconds, and then output the remainder of the verbal output at a normal cadence and volume. At a later time, the system resolves to inform Kevin via verbal output that a water leak sensor (e.g., an automation device 113a-n) is detecting water. At the time of the verbal output, the system determines that Kevin is currently on a telephone call with another person and is tired. As a result, the system modifies the verbal output to Kevin to say his first name and then his last name fast and loud (e.g., at an elevated cadence and elevated volume), wait for two seconds, and then output the remainder of the verbal output at a normal cadence and volume. Continuing the fourth exemplary use case, based on collecting data from the plural interactions between Kevin and other people and analyzing this data, the system determines that when Kevin is cooking dinner and listening to music, the most effective manner of getting Kevin's attention is to start by saying “Hey Kevin” with a stress on the word “Hey,” and then proceed with the rest of the verbal output. At a later time, the system resolves to inform Kevin via verbal output that the water leak sensor (e.g., an automation device 113a-n) is detecting water. At the time of the verbal output, the system determines that Kevin is currently cooking dinner and listening to music. As a result, the system modifies the verbal output to Kevin to say “Hey Kevin” with a stress on the word “Hey,” and then proceed with the rest of the verbal output. In a fifth exemplary use case, the system observes plural interactions between Teresa and other people. 
Based on collecting data from the plural interactions between Teresa and other people and analyzing this data, the system determines that Teresa's native language is French, that Teresa speaks some English but not at the same vocabulary level as she does French, and that Teresa only speaks English when speaking with Tanya. At a later time, the system resolves to provide an alert to Teresa that a package has been delivered to the front door. Teresa has her interactive computing device 105 configured to use French as the default language for verbal outputs. At the time of the verbal output regarding the delivery, however, the system detects that Teresa is speaking English with Tanya, and therefore modifies the verbal output to be spoken in English instead of French. As is understood from these use cases, the system is configured to provide different modifications for different users based on each user's past interactions. Additionally, as is understood from the fourth use case, the system is configured to personalize (e.g., modify) different verbal outputs differently for a same user (e.g., Kevin in this case) based on different contexts and/or biometric states. FIG. 3 shows a flowchart of an exemplary method in accordance with aspects of the present invention. Steps of the method may be carried out in the environment of FIG. 2A or FIG. 2B and are described with reference to elements depicted in FIGS. 2A and 2B. At step 301, the system creates a corpus of data for a user. In embodiments, the output module 125 creates the corpus of data based on: (i) plural verbal interactions between a user and one of another user and an interactive computing device; and (ii) respective context information associated with each one of the plural verbal interactions. In particular embodiments, and as described with respect to FIGS. 2A and 2B, the corpus of data for the user includes plural entries, wherein each entry includes: a context associated with a verbal output; a measure of effectiveness of the verbal output; and at least one parameter of the verbal output. The context, measure of effectiveness, and at least one parameter are determined in the manner described with respect to FIGS. 2A and 2B. Step 301 may include the step of collecting the data associated with each one of the plural verbal interactions, where the data is collected from at least one of: the interactive computing device 105, at least one IoT device 110a-n, and at least one automation device 113a-n. In embodiments, the corpus of data for a particular user is stored in the data repository 130, which may store data defining plural different corpora of data for plural different users. At step 302, the system analyzes the corpus of data to determine output parameters that increase an attentiveness of the user in different contexts. In embodiments, and as described with respect to FIGS. 2A and 2B, the output module 125 analyzes the plural entries in the corpus of data of the user to determine which one or more parameters result in the highest measure of effectiveness for a respective context or group of contexts. In particular embodiments, the output module 125 utilizes machine learning to analyze the corpus of data to make this determination as to which parameters are the most effective for this user for each particular context. In accordance with aspects of the invention, steps 301 and 302 can occur in an endless loop as the system observes subsequent verbal interactions involving this user and other people. 
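Steps 301 and 302, as recapped above, amount to accumulating (context, effectiveness, parameters) entries and re-analyzing them. In the minimal Python sketch below, the corpus is a list, contexts are frozensets, and the "analysis" is a highest-observed-effectiveness rule; all of these are assumptions standing in for the data repository 130 and the machine-learning analysis described in the text.

    def best_parameters(corpus, current_context):
        """Step 302 stand-in: among historic entries whose context matches
        the current context, return the parameters of the most effective
        verbal output. Each corpus entry is (context, effectiveness, params)."""
        matching = [(eff, params) for ctx, eff, params in corpus if ctx == current_context]
        if not matching:
            return None  # no historic interactions in this context
        return max(matching, key=lambda pair: pair[0])[1]

    # Step 301 stand-in: the endless observe-and-record loop appends entries.
    corpus = [
        (frozenset({"watching television"}), 0.9, {"volume_delta": "+20%"}),
        (frozenset({"watching television"}), 0.4, {}),
    ]
    # best_parameters(corpus, frozenset({"watching television"}))
    # -> {'volume_delta': '+20%'}   (cf. the Claire use case above)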
In embodiments, the system collects data from such subsequent interactions between the user and other people, adds this data to the corpus of data at step 301, and analyzes the updated corpus of data at step 302. In this manner the system continuously updates the corpus of data and refines the determination of the output parameters that increase the attentiveness of the user in different contexts. At step 303, the system determines a verbal output to present to the user. In embodiments, and as described with respect to FIGS. 2A and 2B, the output module 125 determines a verbal output in a standard manner. Examples of verbal outputs include but are not limited to: alerting the user of an incoming telephone call, email, or text message; alerting the user of a person ringing the doorbell; alerting the user of a sensor triggered in a personal security system (e.g., a door sensor, a window sensor, a motion sensor, etc.); alerting the user of a sensor triggered in an environmental security system (e.g., a smoke detector, a carbon monoxide detector, a water leak detector, etc.); alerting the user of the expiration of a timer; alerting the user of smoke detected inside an oven (e.g., indicating that something is burning); alerting a user to perform a user-defined task at a user-defined time; reminding a user to leave at a user-defined time for a user-defined appointment; alerting a user of an interpreted or derived task (e.g., an email indicates that your library books are due today). These examples are not limiting, and other verbal outputs may be used in implementations. At step 304, the system determines a current context. In embodiments, and as described with respect to FIGS. 2A and 2B, the output module 125 determines a context of the user at the time that the system resolves to present the verbal output (from step 303) to the user. In particular embodiments, the context comprises at least one of: an amount of environmental noise; at least one activity the user is engaged in; and a biometric state of the user. As described herein, the output module 125 determines the context by analyzing data from at least one of: the interactive computing device 105; at least one IoT device 110a-n; and at least one automation device 113a-n. At step 305, the system modifies the verbal output (from step 303) based on the determined output parameters (from step 302) and the current context (from step 304). As described herein, an output of step 302 is a determination of parameters that are the most effective for this user for each particular context. Accordingly, at step 305 the output module 125 analyzes the output of step 302 for a context that matches the current context (from step 304). Based on finding a matching context, the output module 125 uses the parameters associated with the matching context (i.e., the parameters determined for this context at step 302) to modify the verbal output that is initially determined at step 303. For example, at step 305, the output module 125 may modify the verbal output that is initially determined at step 303 by changing at least one of: the name(s) used to address the user in the verbal output; volume of the verbal output; cadence of the verbal output; specific words used in the verbal output; categories of words used in the verbal output; pronunciation used in the verbal output; and language used in the verbal output. At step 306, the system causes the interactive computing device to output the modified verbal output (from step 305) to the user. In the environment of FIG. 
2A, step 306 comprises the server 115 sending data defining the modified verbal output to the interactive computing device 105, which then outputs the modified verbal output via one or more speakers. In the environment of FIG. 2B, where the output module 125 resides at the interactive computing device 105, step 306 comprises the interactive computing device 105 outputting the modified verbal output via one or more speakers. According to aspects of the invention, the system observes the response of the user to the modified verbal output that is presented at step 306, and feeds this data back to the output module to be added to the corpus of data for this user. In this manner, the system continues to learn from each modified verbal output that is presented. For example, if the user fails to respond to the modified verbal output at step 306, then by including this data in the next iterations of step 301 and step 302, the system will be less likely to use these modifications for this user for this same context for future verbal outputs. Conversely, if the user responds to the modified verbal output at step 306, then by including this data in the next iterations of step 301 and step 302, the system will be more likely to use these modifications for this user for this same context for future verbal outputs. As described herein, some implementations of the invention provide a computer-enabled system and method to modify verbal communications from a computing device to a human, the method comprising: using IoT fed data, identifying current environmental conditions that impact a person's response; tracking responses from an individual to different volume, language, cadence, and word selection after interaction with another person or computer; and matching the current environment to historical responses to identify changes in the communications with the individual. In some embodiments, the method comprises at least one of: identifying extraneous noise; monitoring biometric state; identifying other activities a person is engaged in; and classifying the skill level required to understand a specific phrase. In some embodiments, the method comprises monitoring interactions to determine whether a user asks for clarification on the interaction, answers a non-related question, or provides no response at all. In some embodiments, the method comprises comparing a corpus of information about a person to a selection of current environmental variables to see positive changes that resulted from changing the response from the norm. As described herein, some implementations of the invention provide a computer-implemented method comprising: determining contextual information of an interactive conversation between a user and a computing device; and modifying the interactive conversation based on a cognitive analysis of historical responses and the determined contextual information. In some embodiments, the determined contextual information comprises extraneous noise, biometric state of the user, activities currently being performed by the user, and vocabulary level of the user. In some embodiments, the determining contextual information of an interactive conversation between a user and a computing device comprises: registering one or more Internet of Things (IoT) devices; in response to detecting audio, video, and image information from the one or more IoT devices, determining vocabulary level, noise, biometric state, and activities associated with the detected audio, video, and image information. 
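Putting steps 303 through 306 and the feedback loop together, a hedged end-to-end sketch might read as follows. Every helper here is an injected stand-in (best_parameters reuses the sketch above), and applying the stored parameters as a dictionary merge is an assumption made purely for illustration.

    def present_verbal_output(text, current_context, corpus, speak, observe_reaction):
        """Steps 303-306 plus feedback: modify the standard output using the
        parameters that match the current context, speak it, observe the
        user's reaction, and record it for future analysis."""
        params = best_parameters(corpus, current_context) or {}   # step 305 lookup (from step 302)
        modified = {"text": text, **params}                       # step 305: modify the output
        speak(modified)                                           # step 306: output via the device
        effectiveness_score = observe_reaction()                  # feedback: observe the response
        corpus.append((current_context, effectiveness_score, params))
        return modified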
In some embodiments, the method comprises determining a language spoken by the user and vocabulary level of the user in response to detecting one or more user responses. In some embodiments, the modifying the interactive conversation based on a cognitive analysis of historical responses and the determined contextual information comprises: comparing contextual information with historical information associated with the user; identifying when interaction between the user and the computing device should be modified; and in response to identifying that the interaction between the user and the computing device should be modified, modifying at least one of: words used by the computing device, volume of the computing device, and language used by the computing device. In some embodiments, the method comprises storing the modified action in a database for future analysis. In embodiments, a service provider could offer to perform the processes described herein. In this case, the service provider can create, maintain, deploy, support, etc., the computer infrastructure that performs the process steps of the invention for one or more customers. These customers may be, for example, any business that uses technology. In return, the service provider can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service provider can receive payment from the sale of advertising content to one or more third parties. In still additional embodiments, the invention provides a computer-implemented method, via a network. In this case, a computer infrastructure, such as computer system 12 (FIG. 1), can be provided and one or more systems for performing the processes of the invention can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computer infrastructure. To this extent, the deployment of a system can comprise one or more of: (1) installing program code on a computing device, such as computer system 12 (as shown in FIG. 1), from a computer-readable medium; (2) adding one or more computing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to enable the computer infrastructure to perform the processes of the invention. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. 16451611 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Nov 25th, 2019 12:00AM https://www.uspto.gov?id=US11316713-20220426 Virtual drawers in a server A computer-implemented method comprises receiving an index number for each of a plurality of physical processing units, each of the plurality of physical processing units communicatively coupled to each of a plurality of switch chips in a leaf-spine topology; assigning at least one of the plurality of physical processing units to a first virtual drawer by updating an entry in a virtual drawer table indicating an association between the respective index number of the at least one physical processing unit and an index of the first virtual drawer; and performing a drawer management function based on the virtual drawer table. 11316713 1. A computer-implemented method comprising: receiving an index number for each of a plurality of physical processing units, each of the plurality of physical processing units communicatively coupled to each of a plurality of switch chips in a leaf-spine topology; assigning at least one of the plurality of physical processing units to a first virtual drawer by updating an entry in a virtual drawer table indicating an association between the respective index number of the at least one physical processing unit and an index of the first virtual drawer wherein the first virtual drawer includes a first subset of the physical processing units; assigning a second subset of the plurality of physical processing units to a second virtual drawer by updating the virtual drawer table, wherein the number of physical processing units in the second subset is not equal to the number of physical processing units in the first subset; and performing a drawer management function based on the virtual drawer table. 2. The computer-implemented method of claim 1, wherein the virtual drawer table includes at least one of a V2C table mapping virtual drawer indices to physical processing unit indices or a C2V table mapping physical processing unit indices to virtual drawer indices. 3. The computer-implemented method of claim 1, further comprising: selecting the second subset of the physical processing units for inclusion in the second virtual drawer based on requirements of a workload to be executed; and executing the workload using the second subset of physical processing units assigned to the second virtual drawer. 4. The computer-implemented method of claim 1, further comprising creating a logical partition comprising a plurality of virtual drawers, wherein each of the plurality of virtual drawers is assigned, in the virtual drawer table, at least one physical processing unit of the plurality of physical processing units. 5. The computer-implemented method of claim 1, further comprising: replacing the at least one physical processing unit assigned to the first virtual drawer with a second physical processing unit by updating the virtual drawer table to associate the virtual drawer index of the first virtual drawer with an index of the second physical processing unit and to remove the association of the virtual drawer index of the first virtual drawer with the index of the at least one physical processing unit. 6. 
The computer-implemented method of claim 1, wherein assigning the at least one physical processing unit to the first virtual drawer comprises: assigning a first physical processing unit on a first board to the first virtual drawer by updating the virtual drawer table; and assigning a second physical processing unit on a second board to the first virtual drawer by updating the virtual drawer table. 7. The computer-implemented method of claim 1, further comprising laying out memory addresses for the first virtual drawer such that within the first virtual drawer the memory addresses are contiguous and interleaved across the at least one physical processing unit assigned to the first virtual drawer. 8. A computer system comprising: a plurality of central processing unit (CPU) boards, each CPU board including one or more physical CPU chips each having a respective index number and a first plurality of orthogonal-direct connectors; a plurality of switch chip (SC) boards, each SC board including at least one switch chip and a second plurality of orthogonal-direct connectors, wherein each of the orthogonal-direct connectors in the second plurality of orthogonal-direct connectors is configured to connect with a corresponding one of orthogonal-direct connectors in the first plurality of orthogonal-direct connectors on each of the plurality of CPU boards such that the plurality of CPU boards and the plurality of SC boards are connected in an orthogonal-direct topology; a memory configured to store a virtual drawer table; and a processing unit communicatively coupled to the memory and configured to update the virtual drawer table to indicate an association between the respective index number of at least one physical CPU chip and an index of a first virtual drawer such that the at least one physical CPU chip is assigned to the first virtual drawer based on the association in the virtual drawer table; wherein the processing unit is configured to perform a drawer management function based on the virtual drawer table. 9. The computer system of claim 8, wherein the processing unit is a CPU chip on one of the plurality of CPU boards. 10. The computer system of claim 8, wherein the virtual drawer table includes at least one of a V2C table mapping virtual drawer indices to physical CPU chip indices or a C2V table mapping physical CPU chip indices to virtual drawer indices. 11. The computer system of claim 8, wherein the processing unit is further configured to: select a first plurality of physical CPU chips on one or more CPU boards for inclusion in a second virtual drawer, the number of physical CPU chips in the first plurality of CPU chips based on requirements of a workload to be executed; assign the selected first plurality of the physical CPU chips to the second virtual drawer by updating the virtual drawer table; and wherein the workload is executed using the first plurality of physical CPU chips assigned to the second virtual drawer. 12. The computer system of claim 8, wherein the processing unit is configured to create a logical partition comprising a plurality of virtual drawers, wherein each of the plurality of virtual drawers is assigned, in the virtual drawer table, at least one physical CPU chip. 13. 
The computer system of claim 8, wherein the first virtual drawer includes a first subset of physical CPU chips; and wherein the processing unit is configured to assign a second subset of physical CPU chips to a second virtual drawer by updating the virtual drawer table, wherein the number of physical CPU chips in the second subset is not equal to the number of physical CPU chips in the first subset. 14. The computer system of claim 8, wherein the processing unit is configured to: replace the at least one physical CPU chip assigned to the first virtual drawer with a second physical CPU chip by updating the virtual drawer table to associate the virtual drawer index of the first virtual drawer with an index of the second physical CPU chip and to remove the association of the virtual drawer index of the first virtual drawer with the index of the at least one physical CPU chip. 15. The computer system of claim 8, wherein within the first virtual drawer, memory addresses are contiguous and interleaved across the at least one physical CPU chip assigned to the first virtual drawer. 16. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed by a processor, causes the processor to: receive an index number for each of a plurality of physical processing units, each of the plurality of physical processing units communicatively coupled to each of a plurality of switch chips in a leaf-spine topology; assign at least one of the plurality of physical processing units to a first virtual drawer by updating an entry in a virtual drawer table indicating an association between the respective index number of the at least one physical processing unit and an index of the first virtual drawer wherein the first virtual drawer includes a first subset of the physical processing units; assign a second subset of the plurality of physical processing units to a second virtual drawer by updating the virtual drawer table, wherein the number of physical processing units in the second subset is not equal to the number of physical processing units in the first subset; and perform a drawer management function based on the virtual drawer table. 17. The computer program product of claim 16, wherein the computer readable program is further configured to cause the processor to: replace the at least one physical processing unit assigned to the first virtual drawer with a second physical processing unit by updating the virtual drawer table to associate the virtual drawer index of the first virtual drawer with an index of the second physical processing unit and to remove the association of the virtual drawer index of the first virtual drawer with the index of the at least one physical processing unit. 18. The computer program product of claim 16, wherein the computer readable program is further configured to cause the processor to lay out memory addresses for the first virtual drawer such that within the first virtual drawer the memory addresses are contiguous and interleaved across the at least one physical processing unit assigned to the first virtual drawer. 18 BACKGROUND Conventional large servers are packaged or physically constructed by cabling together 2 or more physical drawers. A physical drawer can contain 1 or more central processing unit (CPU) chips. Typically, each CPU chip is connected to memory chips and each CPU chip has connectors, such as PCIe connectors, for expansion cards. 
Additionally, each CPU chip has 1 or more symmetric multiprocessing (SMP) links to other CPU chips. Within a drawer, an SMP link can be implemented using board traces. Across 2 drawers, an SMP link can use a cable. SUMMARY Aspects of the disclosure may include a computer-implemented method, computer program product, and system. One example of the computer-implemented method comprises receiving an index number for each of a plurality of physical processing units, each of the plurality of physical processing units communicatively coupled to each of a plurality of switch chips in a leaf-spine topology; assigning at least one of the plurality of physical processing units to a first virtual drawer by updating an entry in a virtual drawer table indicating an association between the respective index number of the at least one physical processing unit and an index of the first virtual drawer; and performing a drawer management function based on the virtual drawer table. DRAWINGS Understanding that the drawings depict only exemplary embodiments and are not therefore to be considered limiting in scope, the exemplary embodiments will be described with additional specificity and detail through the use of the accompanying drawings, in which: FIG. 1 depicts one embodiment of an example computer system utilizing virtual drawers. FIG. 2 is a high-level block diagram of one embodiment of an example computing device. FIG. 3 is a depiction of one embodiment of an example leaf-spine topology for the computer system of FIG. 1. FIG. 4 is a flow chart depicting one embodiment of an example method of managing virtual drawers. In accordance with common practice, the various described features are not drawn to scale but are drawn to emphasize specific features relevant to the exemplary embodiments. DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific illustrative embodiments. However, it is to be understood that other embodiments may be utilized and that logical, mechanical, and electrical changes may be made. Furthermore, the method presented in the drawing figures and the specification is not to be construed as limiting the order in which the individual steps may be performed. The following detailed description is, therefore, not to be taken in a limiting sense. In some conventional systems, a computer system is packaged using a plurality of physical drawers (also referred to as physical books, boards, islands). Each physical drawer contains a plurality of central processing unit (CPU) chips. For example, in some conventional systems, each physical drawer includes two clusters of CPU chips with 3 CPU chips in each cluster. In some such systems, each CPU chip is communicatively coupled to the other CPU chips in its cluster and to a switch chip which is in turn communicatively coupled to switch chips in other physical drawers. It is to be understood that other conventional systems can include other topologies (e.g. more or fewer CPU chips in each drawer, different connections between switch chips and CPU chips, etc.). However, such conventional systems, regardless of the specific topology, utilize the physical drawer concept for management functions, as known to one of skill in the art. For example, logical partitions can use all or part of multiple drawers. Similarly, an existing partition with a running workload can add an additional drawer. 
Additionally, the layout of addresses in memory typically uses the physical drawer topology. In particular, addresses are contiguous and interleaved across CPU chips within a drawer. Furthermore, the physical drawer concept can be used to implement Reliability & Availability & Serviceability (RAS). Other management functions can also be implemented using the physical drawer concept, such as but not limited to software licenses or other business aspects. However, the reliance on physical drawers also has limitations, such as, but not limited to, limitations on the performance of a given workload for a server. For example, there can be inefficiencies when executing a workload across physical drawers. In particular, a CPU pair connected within a physical drawer typically has a higher bandwidth than a CPU pair connected across physical drawers. As a result, typically there is an effort to fit a workload to a single drawer. However, a large server does not necessarily correspond to a single physical drawer. Thus, a physical drawer in such a server fits a smaller workload than the server. Additionally, for a given server size, the number of physical drawers in the server is determined by the physical packaging. However, a given workload may be more efficiently executed with a larger or smaller number of physical drawers than are present in the server. Furthermore, the physical drawer size is fixed and is determined by the packaging. Software or services in other layers of the solution stack may have improved performance with smaller or larger drawer sizes than the fixed physical drawer size determined by the packaging. In other words, some workloads may be able to be run with fewer CPU chips and resources than are included in a physical drawer of a server while others would run better with more CPU chips and resources than are included in the physical drawer of the server. However, using less than all of the resources of a physical drawer on a workload is not efficient, and spreading a workload across physical drawers also introduces inefficiencies due, for example, to potential bottlenecks at the switch chips communicatively coupling the physical drawers to each other. The embodiments described herein enable the beneficial use of a drawer concept for management functions while addressing limitations of the physical drawer by enabling the decoupling of the drawer concept from the physical packaging. In particular, the embodiments described herein enable the use of virtual drawers. For example, FIG. 1 depicts one embodiment of an example computer system 100 utilizing virtual drawers. In particular, FIG. 1 depicts one example of a leaf-spine computer system or a leaf-spine server. The system 100 leverages a leaf-spine topology in which each of a plurality of CPU chips 102 (labelled CP 0 to CP 15 in this example) is communicatively coupled with each of a plurality of switch chips 104 (labelled SC 0 to SC 7 in this example). In this way, any two or more CPU chips 102 can be communicatively coupled to one another. Each CPU chip 102 can be a single-chip module (SCM) in some embodiments or a dual-chip module (DCM) in other embodiments. The computer system 100 includes a computer management module 108 configured to perform management functions for the system 100 similar to conventional computer management modules. However, the computer management module 108 in the example embodiment of FIG. 
1 includes a drawer management module 106 which utilizes a virtual drawer table 110 to manage the dynamic creation/management of virtual drawers. For example, the drawer management module 106 can group one or more of the CPU chips 102 into respective virtual drawers. In the example shown in FIG. 1, the drawer management module 106 groups a subset of the CPU chips 102 into 5 virtual drawers, 112-1 . . . 112-5 (referred to collectively as virtual drawers 112). As can be seen in FIG. 1, the virtual drawers 112 need not all have the same number of CPU chips 102. In particular, a virtual drawer 112 can include a single CPU chip 102, such as virtual drawers 112-3, 112-4, and 112-5, or a plurality of CPU chips 102, such as virtual drawers 112-1 and 112-2. In some embodiments, all of the CPU chips 102 can be included in a single virtual drawer 112. Thus, the number of CPU chips 102 in a virtual drawer as well as the number of virtual drawers 112 can vary in different embodiments to more efficiently manage the workloads assigned to the computer system 100, for example. Additionally, a logical partition (LPAR), such as LPAR 114, can be assigned one or more virtual drawers 112. Thus, as with conventional systems, drawers can be added to an LPAR. However, by decoupling the virtual drawers 112 from the physical packaging, each virtual drawer can contain different numbers of CPU chips 102, as discussed above. Thus, the LPAR 114, for example, can contain 3 virtual drawers 112-3 . . . 112-5, with each virtual drawer containing a single CPU chip 102. Furthermore, by enabling virtual drawers which are decoupled from the physical packaging, a given virtual drawer can contain CPU chips 102 which utilize different Instruction Set Architectures (ISAs), in some embodiments. For example, in FIG. 1, virtual drawer 112-2 includes CP 9 and CP 10 which implement an x86 architecture as well as CP 7, CP 4, and CP 14 which implement a z/Architecture. Thus, in some embodiments, each CPU chip 102 in a given virtual drawer implements the same ISA whereas, in other embodiments, CPU chips in a given virtual drawer can implement different ISAs. In this way, the computer system 100 enables greater flexibility in meeting requirements of workloads. In addition, the use of virtual drawers decoupled from the physical packaging enables greater flexibility in fault recovery and maintenance. For example, if a given CPU chip 102 in a virtual drawer fails, that CPU chip 102 can be replaced by another CPU chip 102. For example, a CPU chip not currently assigned to a virtual drawer can replace the failed CPU chip or a CPU chip in another virtual drawer can be reassigned to replace the failed CPU chip. Additionally, failure of a CPU chip 102, an SC chip 104, or a link between a CPU chip 102 and an SC chip 104 can be mitigated more efficiently than with physical drawers. In particular, such a failure impacts only one CPU chip 102 or SC chip 104 rather than all the CPU chips in a physical drawer. Additionally, for performing maintenance in which a CPU chip 102 is to be powered off (e.g. to replace hardware), the computer management module 108 is able to power off selected CPU chips 102 with finer granularity than in conventional physical drawers since the entire virtual drawer 112 does not necessarily need to be powered off (e.g. when the virtual drawer 112 contains a plurality of CPU chips 102 on different boards). 
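To make the grouping and replacement just described concrete, the following Python sketch models virtual drawers of unequal size, mirroring the example assignments of FIG. 1. The dictionary layout and the function name are illustrative assumptions, not structures defined by the disclosure.

# Minimal sketch (hypothetical names; not from the disclosure): virtual
# drawers of unequal size, mirroring the example assignments of FIG. 1.

# Virtual drawer index -> list of physical CPU chip indices.
virtual_drawers = {
    3: [0, 1, 15, 3, 5, 6],   # virtual drawer 112-1 (six CPU chips)
    2: [7, 9, 10, 4, 14],     # virtual drawer 112-2 (five CPU chips)
    15: [11],                 # virtual drawer 112-3 (single CPU chip)
    13: [12],                 # virtual drawer 112-4 (single CPU chip)
    10: [13],                 # virtual drawer 112-5 (single CPU chip)
}

def replace_failed_chip(drawers, vdrawer, failed_cpu, spare_cpu):
    """Swap a failed CPU chip for a spare chip within one virtual drawer,
    keeping the chip's position (its VCPU index) unchanged."""
    chips = drawers[vdrawer]
    chips[chips.index(failed_cpu)] = spare_cpu

# Example: CP 9 in virtual drawer 2 fails; unassigned CP 8 replaces it.
replace_failed_chip(virtual_drawers, vdrawer=2, failed_cpu=9, spare_cpu=8)
print(virtual_drawers[2])  # [7, 8, 10, 4, 14]

In this sketch the spare chip CP 8 is one of the chips shown as unassigned in FIG. 1, so the replacement requires no change to any other virtual drawer.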
Grouping or assigning CPU chips 102 to a virtual drawer 112 includes, in some embodiments, laying out addresses in memory such that within a virtual drawer 112 the addresses are contiguous and interleaved across the CPU chips 102 that are assigned to the virtual drawer 112. That is, each CPU chip 102 can include memory storage and the drawer management module 106 can configure the total storage as a single contiguous address space with the addresses of consecutive memory blocks interleaved over the CPU chips 102 of the respective virtual drawer 112. Thus, even though the CPU chips 102 in a given virtual drawer 112 may not be on the same physical board, the layout of the memory addresses is still contiguous and interleaved across the CPU chips 102 in the virtual drawer 112. Furthermore, the computer management module 108 can manage the CPU chips 102 and present the CPU chips 102 to an administrator in terms of the virtual drawers 112. Thus, the computer management module 108 is able to perform similar management functions with the virtual drawers as conventional computer management modules utilize physical drawers. For example, the virtual drawers 112 can be used to implement RAS similar to the manner in which physical drawers are used to implement RAS. The drawer management module 106 creates/manages the virtual drawers 112 through the use of one or more virtual drawer tables 110, as discussed above. One example virtual drawer table is depicted below as Table 1. Table 1 depicts an example mapping of physical CPUs to virtual drawers. Thus, Table 1 can be referred to herein as a C2V Table. In particular, Table 1 depicts the example mapping for the CPU chips 102 of FIG. 1.

TABLE 1: C2V Table (— = Not Configured). Row is Physical CPU index.

Physical CPU index   Vdrawer index   VCPU index
 0                    3               0
 1                    3               1
 2                    —               —
 3                    3               3
 4                    2               3
 5                    3               4
 6                    3               5
 7                    2               0
 8                    —               —
 9                    2               1
10                    2               2
11                   15               0
12                   13               0
13                   10               0
14                    2               4
15                    3               2

The first column in Table 1 indicates an index for each physical CPU chip 102. It is to be understood that, in some embodiments, the values in the first column are not stored in Table 1. Instead, each value in the first column illustrates that the “Physical CPU index” value is used as the row offset into Table 1 to directly access the Vdrawer and VCPU entries corresponding to the given physical CPU. The second column indicates a virtual drawer index. In particular, each virtual drawer is configured with a unique virtual drawer index. For example, virtual drawer 112-1 in FIG. 1 is assigned a virtual drawer index of 3. As shown in FIG. 1, the CPU chips CP 0, CP 1, CP 15, CP 3, CP 5, and CP 6 have been assigned to virtual drawer 112-1. Thus, in Table 1, column 2 corresponding to each of those CPU chips includes the entry of virtual drawer index 3. Similarly, each of the CPU chips 102 included in virtual drawer 112-2 is assigned the virtual drawer index 2. Virtual drawers 112-3, 112-4, and 112-5 are assigned virtual drawer indices 15, 13, and 10, respectively. Thus, the CPU chips 102 corresponding to each of virtual drawers 112-3, 112-4, and 112-5 include the corresponding virtual drawer index in column 2. CPU chips CP 2 and CP 8 have not been assigned to a virtual drawer. Thus, column 2 corresponding to each of CP 2 and CP 8 does not include an entry. In addition, Table 1 includes a third column for a virtual CPU index. That is, within a virtual drawer, each CPU chip 102 is assigned a virtual CPU index. For example, virtual drawer 112-1 includes six CPU chips 102. Thus, each of the six CPU chips is assigned a consecutive virtual CPU index. 
In this example, the consecutive virtual CPU indices begin with 0 and are incremented by 1 until the last CPU chip in the virtual drawer is assigned a virtual CPU index. Thus, for the example virtual drawer 112-1, CP 0 is assigned the virtual CPU (VCPU) index 0; CP 1 is assigned the VCPU index 1; CP 15 is assigned the VCPU index 2; CP 3 is assigned the VCPU index 3; CP 5 is assigned the VCPU index 4; and CP 6 is assigned the VCPU index 5. Similar assignments of VCPU indices are made for the other virtual drawers 112. Notably, the VCPU index is unique within a given virtual drawer, but is not globally unique. That is, each virtual drawer can include a VCPU index 0, for example, but there is only one VCPU index 0 for a given virtual drawer. The VCPU index helps organize and identify the CPU chips 102 within a given virtual drawer 112. Table 2 is another example virtual drawer table which can be utilized by drawer management module 106. Table 2 depicts an example mapping of virtual drawer to physical CPU. Thus, Table 2 can be referred to as a V2C Table. As with Table 1, Table 2 depicts an example mapping for the CPU chips 102 of FIG. 1.

TABLE 2: V2C Table (— = Not Configured). Row is Vdrawer index; the columns after the second are VCPU indices 0 through 15; each entry is a Physical CPU index (only configured entries are shown).

Vdrawer index   Number of VCPUs   Physical CPU index by VCPU index (0, 1, 2, ...)
 0               —
 1               —
 2               5                 7, 9, 10, 4, 14
 3               6                 0, 1, 15, 3, 5, 6
 4               —
 5               —
 6               —
 7               —
 8               —
 9               —
10               1                 13
11               —
12               —
13               1                 12
14               —
15               1                 11

The first column in Table 2 indicates a virtual drawer index. As with Table 1, it is to be understood that in some embodiments, the values in the first column are not stored in Table 2. Instead, each value in the first column illustrates that the “Vdrawer index” value is used as the row offset into Table 2 to directly access the VCPU number and physical CPU index entries corresponding to a given virtual drawer. In some embodiments, the number of available virtual drawer (Vdrawer) indices is equal to the number of CPU chips 102. In that way, it is possible to configure each virtual drawer 112 to contain only a single CPU chip 102. However, as seen in Table 2, some of the Vdrawer indices are not utilized when one or more virtual drawers includes more than one CPU chip 102. The second column of Table 2 indicates the number of CPU chips in the virtual drawer. The subsequent columns after column 2 indicate a VCPU index. The entry for each row in these subsequent columns is the physical CPU index of the corresponding CPU chip 102. For example, as discussed above with respect to Table 1, in virtual drawer 3, the VCPU 0 index corresponds to the physical CPU index 0 (CP 0). Thus, Tables 1 and 2 enable the drawer management module 106 to organize, create, and manage the virtual drawers 112. Indeed, by adjusting entries in Tables 1 and 2, the drawer management module 106 is able to update existing virtual drawers, create a new virtual drawer, or remove a virtual drawer. It is to be understood that Tables 1 and 2 are provided by way of example only and that, in other embodiments, the Tables 1 and 2 can be configured differently. Furthermore, in some embodiments, the drawer management module 106 is configured to utilize both a C2V table and a V2C table. For example, through the use of both a C2V table and a V2C table, the drawer management module 106 is able to efficiently look up information in different scenarios. For example, if a virtual drawer index is provided, the drawer management module 106 can identify the indices of the corresponding physical CPUs by using the V2C table. 
Similarly, if a physical CPU index is provided, the drawer management module 106 can identify the corresponding virtual drawer using the C2V table. However, in other embodiments, the drawer management module 106 is configured to manage the virtual drawers utilizing only one of a C2V table or a V2C table. In addition, it is to be understood that although FIG. 1 and FIG. 3 are discussed with respect to 16 physical CPU chips and 8 SC chips, other embodiments can include other numbers of physical CPU chips and/or other numbers of SC chips. The computer management module 108 and the drawer management module 106 can be implemented in hardware, software, or a combination of hardware and software. For example, in some embodiments, the computer management module 108 and drawer management module 106 can be implemented by software executing on one or more of CPU chips 102. In other embodiments, the computer management module 108 and drawer management module 106 can be implemented as software or firmware executing on a separate processing unit. For example, in some embodiments, the computer management module 108 and the drawer management module 106 are implemented as firmware utilizing a baseboard management controller (BMC) of an Intelligent Platform Management Interface (IPMI) sub-system. One example computing device configured to implement the computer management module 108 and the drawer management module 106 is described below with respect to FIG. 2. FIG. 2 is a high-level block diagram of one embodiment of an example computing device 200. In the example shown in FIG. 2, the computing device 200 includes a memory 225, storage 230, an interconnect (e.g., BUS) 220, one or more processors 205 (also referred to as CPU 205 herein), and a network interface 215. It is to be understood that the computing device 200 is provided by way of example only and that the computing device 200 can be implemented differently in other embodiments. For example, in other embodiments, some of the components shown in FIG. 2 can be omitted and/or other components can be included. Each CPU 205 retrieves and executes programming instructions stored in the memory 225 and/or storage 230. The interconnect 220 is used to move data, such as programming instructions, between the CPU 205, storage 230, network interface 215, and memory 225. The interconnect 220 can be implemented using one or more busses. The CPUs 205 can be a single CPU, multiple CPUs, or a single CPU having multiple processing cores in various embodiments. In some embodiments, a processor 205 can be a digital signal processor (DSP). Memory 225 is generally included to be representative of a random access memory (e.g., static random access memory (SRAM), dynamic random access memory (DRAM), or Flash). The storage 230 is generally included to be representative of a non-volatile memory, such as a hard disk drive, solid state drive (SSD), removable memory cards, optical storage, or flash memory devices. In an alternative embodiment, the storage 230 can be replaced by storage area network (SAN) devices, the cloud, or other devices connected to the computing device 200 via a communication network coupled to the network interface 215. In some embodiments, the memory 225 stores instructions 210 and the storage 230 stores C2V table 209 and V2C table 211. The C2V table 209 and the V2C table 211 can be implemented similar to Tables 1 and 2 described above. 
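As a rough illustration of how such tables support lookups in both directions, the sketch below encodes the contents of Tables 1 and 2 as simple Python lists indexed by row offset. The names and the list-based representation are assumptions made for illustration only, not the patented implementation.

# Illustrative sketch (hypothetical names) of C2V and V2C tables like
# Tables 1 and 2 above. The row offset is the physical CPU index (C2V)
# or the virtual drawer index (V2C); None marks a not-configured entry.

NUM_CPUS = 16

# C2V: physical CPU index -> (Vdrawer index, VCPU index), per Table 1.
c2v = [
    (3, 0), (3, 1), None, (3, 3), (2, 3), (3, 4), (3, 5), (2, 0),
    None, (2, 1), (2, 2), (15, 0), (13, 0), (10, 0), (2, 4), (3, 2),
]

# V2C: Vdrawer index -> physical CPU indices ordered by VCPU index,
# per Table 2. One Vdrawer index is available per CPU chip, so every
# chip could be placed in its own single-chip drawer.
v2c = [None] * NUM_CPUS
v2c[2] = [7, 9, 10, 4, 14]
v2c[3] = [0, 1, 15, 3, 5, 6]
v2c[10] = [13]
v2c[13] = [12]
v2c[15] = [11]

def drawer_of(cpu):
    """C2V lookup: given a physical CPU index, return its Vdrawer index."""
    entry = c2v[cpu]
    return entry[0] if entry else None

def cpus_of(vdrawer):
    """V2C lookup: given a Vdrawer index, return its physical CPU indices."""
    return v2c[vdrawer] or []

print(drawer_of(5))  # 3
print(cpus_of(2))    # [7, 9, 10, 4, 14]

Keeping both directions materialized is a classic space-for-time trade: either table alone contains all the information, but a single list access answers each kind of query without scanning the other table.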
In other embodiments, the instructions 210, the C2V table 209 and the V2C table 211 are stored partially in memory 225 and partially in storage 230, or they are stored entirely in memory 225 or entirely in storage 230, or they are accessed over a network via the network interface 215. When executed, the instructions 210 cause the CPU 205 to manage virtual drawers as discussed above. In particular, the instructions 210 cause the CPU 205 to implement the computer management module 108 and the drawer management module 106 discussed above. Further details regarding operation of the computing device 200 are also described below with respect to method 400. Furthermore, as discussed above, in some embodiments, one or more of the components and data shown in FIG. 2 include instructions or statements that execute on the processor 205 or instructions or statements that are interpreted by instructions or statements that execute on the processor 205 to carry out the functions as described herein. In other embodiments, one or more of the components shown in FIG. 2 are implemented in hardware via semiconductor devices, chips, logical gates, circuits, circuit cards, and/or other physical hardware devices in lieu of, or in addition to, a processor-based system. FIG. 3 is a depiction of one embodiment of an example leaf-spine packaging 300 for computer system 100. Leaf-spine packaging 300 includes a plurality of CPU boards 301-1 . . . 301-N (collectively referred to as CPU boards 301) and a plurality of SC boards 303-1 . . . 303-M (collectively referred to as SC boards 303). Although only 3 CPU boards 301 and 3 SC boards 303 are shown for ease of illustration, it is to be understood that any suitable number of CPU boards 301 and SC boards 303 can be used. Furthermore, for computer system 100, the example packaging 300 includes 8 SC boards 303 and 8 CPU boards 301. Thus, each SC board 303 includes one SC chip 304 and each CPU board 301 includes two CPU chips 302 in this example. However, it is to be understood that, in other embodiments, other configurations can be used. For example, in some embodiments, each CPU board 301 includes one CPU chip 302. In such examples, 16 CPU chips would use 16 CPU boards 301. Additionally, in some embodiments, each SC board 303 can include more than 1 SC chip. For example, in some such embodiments, each SC board 303 includes 2 SC chips. In such embodiments, 4 SC boards 303 could be used for the example computer system 100 instead of the 8 SC boards in the example of FIG. 3. Each SC board 303 and each CPU board 301 in the example of FIG. 3 also includes 8 orthogonal direct connectors 307. It is to be understood that the number of orthogonal direct connectors 307 included on each CPU board 301 is at least equal to the number of SC boards 303 in the packaging 300. Similarly, the number of orthogonal direct connectors 307 mounted on each SC board 303 is at least equal to the number of CPU boards 301 in the packaging 300. The orthogonal direct connectors 307 enable the SC boards 303 and the CPU boards 301 to be connected in an orthogonal-direct topology such that each CPU chip 302 is communicatively coupled with each SC chip 304. It is to be understood that the leaf-spine packaging 300 is provided by way of example and that other configurations can be used in other embodiments. For example, it is to be understood that each CPU board 301 can include components similar to conventional CPU boards (e.g. memory chips, SMP links, etc.). 
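The connector-count constraint just described can be stated mechanically. The following small sketch is illustrative only; the function name and parameters are assumptions, not part of the disclosure.

# Minimal sketch (hypothetical names) of the connector-count constraint
# for the orthogonal-direct topology: every CPU board must be able to
# reach every SC board, and vice versa.

def packaging_is_valid(num_cpu_boards, num_sc_boards,
                       connectors_per_cpu_board, connectors_per_sc_board):
    """Each CPU board needs at least one orthogonal direct connector per
    SC board, and each SC board needs at least one per CPU board."""
    return (connectors_per_cpu_board >= num_sc_boards
            and connectors_per_sc_board >= num_cpu_boards)

# The example packaging 300: 8 CPU boards, 8 SC boards, 8 connectors each.
print(packaging_is_valid(8, 8, 8, 8))  # True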
In some embodiments, other components can be included on the CPU boards 301 and/or the SC boards 303. For example, in this embodiment, each CPU board 301 includes memory chips 321. However, in other embodiments, each SC board can include a memory chip in addition to or in lieu of the memory chips 321 on CPU boards 301. In some such embodiments, at least part of the memory on one or more SC boards 303 can be assigned to a virtual drawer. In some embodiments, each virtual drawer configured by drawer management module 106 can include one or more CPU boards 301. In other embodiments in which multiple CPU chips 302 are included on each CPU board 301, each virtual drawer can include one or more CPU chips 302. Thus, each CPU chip on a CPU board 301 in such embodiments can be assigned to a different virtual drawer. FIG. 4 is a flow chart depicting one embodiment of an example method 400 of managing virtual drawers. Method 400 can be implemented with drawer management module 106. For example, in some embodiments, method 400 can be implemented by executing instructions 210 on CPU 205 in FIG. 2 above. It is to be understood that the order of actions in example method 400 is provided for purposes of explanation and that the method can be performed in a different order in other embodiments. Similarly, it is to be understood that some actions can be omitted or additional actions can be included in other embodiments. At block 402, an index number for each of a plurality of physical processing units is received. Each of the plurality of physical processing units is communicatively coupled to each of a plurality of switch chips in a leaf-spine topology, as discussed above. The index number can be received or obtained using techniques known to one of skill in the art. At block 404, at least one of the plurality of physical processing units is assigned to a first virtual drawer by updating an entry in a virtual drawer table indicating an association between the respective index number of the at least one physical processing unit and an index of the first virtual drawer. It is to be understood that, as used herein, updating an entry can include both making changes to an existing entry in the virtual drawer table as well as creating a new entry in the virtual drawer table. Furthermore, where a virtual drawer table does not currently exist, updating an entry can include creating the virtual drawer table and creating a new entry in the table. In addition, as discussed above, in some embodiments, two virtual drawer tables can be used. Thus, updating an entry in a virtual drawer table can include updating a respective entry in each of the two virtual drawer tables. At block 406, a drawer management function is performed based on the virtual drawer table. That is, as discussed above, one or both of a C2V table and a V2C table can be used to manage the performance of drawer management functions. For example, given the index of a physical CPU, the C2V table can be used to identify the index of the corresponding virtual drawer. Similarly, given the index of a virtual drawer, the V2C table can be used to identify the indices of the physical CPUs of the virtual drawer. Additionally, given the indices of the physical CPUs of the virtual drawer, the desired drawer management function can be performed on or using the physical CPUs of the virtual drawer. Some example drawer management functions are discussed below. 
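Before turning to those examples, a rough sketch of blocks 402 through 406 may help fix the flow in mind. The function names and the dict-based table below are illustrative assumptions, not the patented implementation.

# Rough sketch of blocks 402-406 (hypothetical names). The virtual drawer
# table is a plain dict; "updating an entry" covers modifying an existing
# entry, creating a new entry, and creating the table itself when no
# table exists yet.

def assign_to_virtual_drawer(table, cpu_indices, vdrawer_index):
    """Block 404: associate physical processing units with a virtual
    drawer by updating the virtual drawer table."""
    drawer = table.setdefault(vdrawer_index, [])
    for cpu in cpu_indices:
        if cpu not in drawer:
            drawer.append(cpu)

def power_off_drawer(table, vdrawer_index):
    """Block 406, one example management function: act on every physical
    CPU currently associated with the virtual drawer."""
    for cpu in table.get(vdrawer_index, []):
        print(f"powering off physical CPU {cpu}")

# Block 402: index numbers of the physical processing units are received.
received_indices = list(range(16))

table = {}  # table created on first update
assign_to_virtual_drawer(table, received_indices[:2], vdrawer_index=3)
power_off_drawer(table, vdrawer_index=3)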
However, it is to be understood that the drawer management functions discussed below are provided by way of example only and that other drawer management functions can be performed in lieu of or in addition to those examples discussed herein. As discussed above, the assignment of physical processing units to virtual drawers using the virtual drawer tables enables flexibility in managing virtual drawers. For example, in some embodiments, a subset of the physical processing units can be selected for inclusion in a virtual drawer based on requirements of a workload to be executed. Thus, the workload can be executed more efficiently by the processing units of the virtual drawer since the number of processing units included in the virtual drawer is selected based on the specific requirements of the workload. The assignment of processing units to virtual drawers can be done automatically by the drawer management module or in response to user input received via a user input device. Additionally, as discussed above, different virtual drawers can have different numbers of processing units. For example, in some embodiments, a first virtual drawer includes a first subset of the physical processing units and a second virtual drawer is assigned a second subset of the physical processing units where the number of processing units in the second subset is not equal to the number of processing units in the first subset. In addition, as discussed above, in some embodiments, a virtual drawer can include a single processing unit. Furthermore, a logical partition can be created from a plurality of virtual drawers, where each of the virtual drawers includes at least one physical processing unit. Thus, the virtual drawer table can be used to manage the computer system (e.g. a virtual drawer can be powered down, powered up, or rebooted based on the CPU assignments in the virtual drawer table). Additionally, the virtual drawer table can be used to manage workloads of a computer system. For example, a virtual drawer can be added to or removed from an LPAR running the workload and/or a workload can be moved from the CPUs associated with a first virtual drawer to the CPUs associated with a second virtual drawer based on entries in the virtual drawer table. In addition, by using the virtual drawer table, a processing unit assigned to a virtual drawer can be replaced by another processing unit by updating the virtual drawer table to associate the index number of the other processing unit with the index of the virtual drawer and to remove the association between the index of the virtual drawer and the index of the original processing unit, as discussed above. Furthermore, as discussed above, a virtual drawer can include physical processing units on different CPU boards. For example, a first physical processing unit on a first board can be assigned to the same virtual drawer as a second physical processing unit on a second board by updating the virtual drawer table. Thus, one example drawer management function enabled through use of the virtual drawer table includes providing the capability to concurrently increase the capacity of the system. For example, the capacity can be increased by concurrently activating more CPUs on a given virtual drawer or by adding a virtual drawer concurrently to an LPAR to activate more CPUs, more memory and/or more expansion devices. 
The additional CPUs can be activated on a given virtual drawer or added concurrently to an LPAR through appropriate modification of the virtual drawer table, as described herein. Another example drawer management function managed through the use of the virtual drawer table includes concurrent repair of drawers. Some systems, for example, require a minimum number of physical drawers (e.g. 2 physical drawers) for concurrent drawer repair. Through the use of the virtual drawer table, smaller virtual drawers can be configured, which makes it easier to meet the system requirement of a minimum of 2 drawers to perform concurrent repair. Also, the function of removing a drawer for upgrade or repair can be managed through the virtual drawer table, as described herein. For example, through the use of the virtual drawer table, the virtual drawers can be configured with finer granularity than physical drawers, as discussed herein. Thus, through appropriate modifications to the virtual drawer table, sufficient resources can be made available to accommodate resources that are rendered unavailable when the physical CPUs associated with a given virtual drawer are removed for upgrade or repair. Thus, the enhanced drawer availability allows the CPUs associated with a single virtual drawer to be removed and reinstalled concurrently for an upgrade or repair. Another example drawer management function performed based on the virtual drawer table involves preventing loss of connectivity to Input/Output (I/O) devices when physical CPUs associated with a virtual drawer are removed. That is, removing the CPUs means that the I/O devices connected to the physical CPUs are lost. However, an I/O device can have an I/O interconnect to more than one CPU. Thus, for a given virtual drawer including multiple I/O devices, each I/O device being connected to more than one CPU, there can be various subsets of CPUs which in aggregate connect to the I/O devices. The virtual drawer table can be modified to choose a subset which meets customer needs (e.g. meeting service level agreements, performing specific tasks, etc.). For example, through the use of the virtual drawer table, the number of CPUs being removed can be minimized. Physical drawers do not offer the same flexibility for interconnecting to I/O devices as through the use of virtual drawers managed by the virtual drawer tables. Thus, as discussed above, the embodiments described herein enable the use of virtual drawers for various management functions while decoupling the assignment of processing units to virtual drawers from the physical packaging. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. 
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. 
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement, which is calculated to achieve the same purpose, may be substituted for the specific embodiments shown. Therefore, it is manifestly intended that this invention be limited only by the claims and the equivalents thereof. 
16695135 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Sep 3rd, 2019 12:00AM https://www.uspto.gov?id=US11314579-20220426 Application protection from bit-flip effects A processor may receive information about one or more environmental factors. The processor may predict, based on the one or more environmental factors, that a particular datacenter in a distributed computing environment will experience elevated bit-flip error rates during a certain time period. The processor may select, based on the predicted elevated bit-flip error rates, one or more specific applications in the particular datacenter to be protected. The processor may protect the selected one or more specific applications during the certain time period of the predicted elevated bit-flip rates. 11314579 1. A computer-implemented method comprising: receiving, by a processor, information about one or more environmental factors; predicting, based on the one or more environmental factors, that a particular datacenter in a distributed computing environment will experience elevated bit-flip error rates during a certain time period; selecting, based on the predicted elevated bit-flip error rates, one or more specific applications in the particular datacenter to be protected; and protecting the selected one or more specific applications during the certain time period of the predicted elevated bit-flip error rates, wherein protecting the selected one or more specific applications includes: identifying a second datacenter in the distributed computing environment, wherein the second datacenter is determined to have a lower bit-flip error rate during the certain time period, transferring the selected one or more specific applications to the second datacenter, and receiving the selected one or more specific applications back from the second datacenter after the certain time period. 2. The method of claim 1, wherein the one or more environmental factors include elevation, temperatures, voltage spikes, and solar flares. 3. The method of claim 1, wherein the one or more specific applications are selected to be protected based on a determined threshold level of the predicted elevated bit-flip error rates being exceeded. 4. The method of claim 1, wherein the one or more specific applications are selected to be protected based on respective bit-flip error tolerances of each of the one or more specific applications, wherein the selected one or more specific applications are identified as having a bit-flip error tolerance below a tolerance threshold. 5. The method of claim 1, wherein the one or more specific applications are selected to be protected based on a priority level automatically set by the particular datacenter, wherein the priority level is associated with a user input, and wherein the user input indicates a preference for each of the one or more specific applications. 6. The method of claim 1, wherein protecting the selected one or more specific applications further comprises: replicating each of the selected one or more specific applications, wherein the selected one or more specific applications being replicated are classified as originals; storing each of the replicas in a different location within the particular datacenter; generating an indication of the locations of each of the replicas; selecting, after the certain time period, an uncorrupted version of each of the one or more specific applications from the originals and the replicas; and deleting the originals and the replicas that were not selected from the particular datacenter. 7. 
A system comprising: a memory; and a processor in communication with the memory, the processor executing instructions contained within the memory in order to perform operations comprising: receiving information about one or more environmental factors; predicting, based on the one or more environmental factors, that a particular datacenter in a distributed computing environment will experience elevated bit-flip error rates during a certain time period; selecting, based on the predicted elevated bit-flip error rates, one or more specific applications in the particular datacenter to be protected; and protecting the selected one or more specific applications during the certain time period of the predicted elevated bit-flip error rates, wherein protecting the selected one or more specific applications includes: identifying a second datacenter in the distributed computing environment, wherein the second datacenter is determined to have a lower bit-flip error rate during the certain time period, transferring the selected one or more specific applications to the second datacenter, and receiving the selected one or more specific applications back from the second datacenter after the certain time period. 8. The system of claim 7, wherein the one or more environmental factors include elevation, temperatures, voltage spikes, and solar flares. 9. The system of claim 7, wherein the one or more specific applications are selected to be protected based on a determined threshold level of the predicted elevated bit-flip error rates being exceeded. 10. The system of claim 7, wherein the one or more specific applications are selected to be protected based on respective bit-flip error tolerances of each of the one or more specific applications, wherein the selected one or more specific applications are identified as having a bit-flip error tolerance below a tolerance threshold. 11. The system of claim 7, wherein the one or more specific applications are selected to be protected based on a priority level automatically set by the particular datacenter, wherein the priority level is associated with a user input, and wherein the user input indicates a preference for each of the one or more specific applications. 12. The system of claim 7, wherein protecting the selected one or more specific applications further comprises: replicating each of the selected one or more specific applications, wherein the selected one or more specific applications being replicated are classified as originals; storing each of the replicas in a different location within the particular datacenter; generating an indication of the locations of each of the replicas; selecting, after the certain time period, an uncorrupted version of each of the one or more specific applications from the originals and the replicas; and deleting the originals and the replicas that were not selected from the particular datacenter. 13. 
A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method, the method comprising: receiving information about one or more environmental factors; predicting, based on the one or more environmental factors, that a particular datacenter in a distributed computing environment will experience elevated bit-flip error rates during a certain time period; selecting, based on the predicted elevated bit-flip error rates, one or more specific applications in the particular datacenter to be protected; and protecting the selected one or more specific applications during the certain time period of the predicted elevated bit-flip error rates, wherein protecting the selected one or more specific applications includes: identifying a second datacenter in the distributed computing environment, wherein the second datacenter is determined to have a lower bit-flip error rate during the certain time period, transferring the selected one or more specific applications to the second datacenter, and receiving the selected one or more specific applications back from the second datacenter after the certain time period. 14. The computer program product of claim 13, wherein the one or more environmental factors include elevation, temperatures, voltage spikes, and solar flares. 15. The computer program product of claim 13, wherein the one or more specific applications are selected to be protected based on a determined threshold level of the predicted elevated bit-flip error rates being exceeded. 16. The computer program product of claim 13, wherein the one or more specific applications are selected to be protected based on respective bit-flip error tolerances of each of the one or more specific applications, wherein the selected one or more specific applications are identified as having a bit-flip error tolerance below a tolerance threshold. 17. The computer program product of claim 13, wherein the one or more specific applications are selected to be protected based on a priority level automatically set by the particular datacenter, wherein the priority level is associated with a user input, and wherein the user input indicates a preference for each of the one or more specific applications. 18. The computer program product of claim 13, wherein protecting the selected one or more specific applications further comprises: replicating each of the selected one or more specific applications, wherein the selected one or more specific applications being replicated are classified as originals; storing each of the replicas in a different location within the particular datacenter; generating an indication of the locations of each of the replicas; selecting, after the certain time period, an uncorrupted version of each of the one or more specific applications from the originals and the replicas; and deleting the originals and the replicas that were not selected from the particular datacenter. 18 BACKGROUND The present disclosure relates generally to the field of distributed computing environments, and more specifically to protecting applications from the effects of bit-flips in a distributed computing environment. Bit-flips, also called single event upsets, can lead to a variety of system errors, some of which can result in service/machine failures or security violations (e.g., unauthorized access, data leakage, etc.). 
Bit-flips may be caused by cosmic ray neutrons that are present at any given location on the earth's surface. A tiny fraction of neutrons hitting a CPU may end up striking the nuclei of atoms in the CPU, particularly if those neutrons are very high-energy neutrons. If the neutron ends up displacing the nucleus, then a bit-flip may occur, leading to data corruption. In supercomputers, datacenters, and cloud hosting centers, these bit-flips are more likely to happen because of the high concentration of CPUs in a relatively small area. SUMMARY Embodiments of the present disclosure include a method, computer program product, and system for protecting applications from the effects of bit-flips in a distributed computing environment. A processor may receive information about one or more environmental factors. The processor may predict, based on the one or more environmental factors, that a particular datacenter in a distributed computing environment will experience elevated bit-flip error rates during a certain time period. The processor may select, based on the predicted elevated bit-flip error rates, one or more specific applications in the particular datacenter to be protected. The processor may protect the selected one or more specific applications during the certain time period of the predicted elevated bit-flip rates. The above summary is not intended to describe each illustrated embodiment or every implementation of the present disclosure. BRIEF DESCRIPTION OF THE DRAWINGS The drawings included in the present disclosure are incorporated into, and form part of, the specification. They illustrate embodiments of the present disclosure and, along with the description, serve to explain the principles of the disclosure. The drawings are only illustrative of certain embodiments and do not limit the disclosure. FIG. 1 illustrates an example system for broadly determining a mitigation strategy for protecting an application from the effects of bit-flips, in accordance with embodiments of the present disclosure. FIG. 2 illustrates an example system for determining a mitigation strategy for protecting a specific application from the effects of bit-flips, in accordance with embodiments of the present disclosure. FIG. 3 illustrates a flowchart of an example method for protecting one or more applications during a time period predicted to have elevated bit-flip error rates, in accordance with embodiments of the present disclosure. FIG. 4 depicts a cloud computing environment, in accordance with embodiments of the present disclosure. FIG. 5 depicts abstraction model layers, in accordance with embodiments of the present disclosure. FIG. 6 illustrates a high-level block diagram of an example computer system that may be used in implementing one or more of the methods, tools, and modules, and any related functions, described herein, in accordance with embodiments of the present disclosure. While the embodiments described herein are amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the particular embodiments described are not to be taken in a limiting sense. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure. 
DETAILED DESCRIPTION Aspects of the present disclosure relate generally to the field of distributed computing environments, and more specifically to protecting applications from the effects of bit-flips in a distributed computing environment. While the present disclosure is not necessarily limited to such applications, various aspects of the disclosure may be appreciated through a discussion of various examples using this context. Due to a high concentration of central processing units (CPUs) in a relatively small area within supercomputers, datacenters, and/or cloud hosting centers, there are increased chances for data corruption and/or soft errors due to bit-flips caused by cosmic ray neutrons, Row Hammer, RAMBleed, etc. Examples of such data corruption have been documented and range from a virtual machine owned by an adversary getting access to co-located virtual machines by bit-flipping the public key of an administrator for a second virtual machine, to a single bit corruption leading to the outage of an entire global cloud service. With such damaging consequences being a realistic threat to cloud service providers, there is a need to protect mission-critical, business-critical, and essential applications from the potentially harmful effects of bit-flips in a cloud or hosting environment (e.g., distributed computing environment). Accordingly, a processor (e.g., in a cloud environment, hosting environment, distributed computing environment, etc.) may receive information about one or more environmental factors. It is noted that the processor described herein, in regard to this disclosure, can be any processor that can utilize the disclosed method, system, and/or computer program product. Said processors can be, but are not limited to, decision engines and/or controllers that provide the ability to perform the embodiments of the present disclosure. The processor may predict, based on the one or more environmental factors, that a particular datacenter in a distributed computing environment will experience elevated bit-flip error rates during a certain time period. The processor may select, based on the predicted elevated bit-flip error rates, one or more specific applications in the particular datacenter to be protected (e.g., secured, shielded, etc.). The processor may protect (e.g., secure, shield, etc.) the selected one or more specific applications during the certain time period of the predicted elevated bit-flip error rates. For example, a controller for a cloud computing environment may identify that one of the datacenters the controller is managing is located in Denver, Colo., which has an elevation of 1,609.3 meters (5,280 feet). The controller may additionally identify, from municipal sources (e.g., environmental websites, weather channels, etc.), that solar flares will be abnormally active between 1:00 p.m. and 3:00 p.m. on a particular day. The controller may predict that because of the datacenter's high elevation, the datacenter is more likely to be affected by the solar flares on that particular day. The controller may then scan the datacenter for the usage of all of its hosted applications. The controller may identify that the most prolific application hosted at the datacenter is an image sharing application that is owned by the datacenter's best customer. The controller may then protect the image sharing application by automatically copying the image sharing application to a second datacenter. 
This may be done in case the solar flares cause bit-flip errors at the original datacenter and the image sharing application at the original datacenter is corrupted. In some embodiments, after the time period of concern on the particular day (e.g., 1:00 p.m. to 3:00 p.m.), the controller may determine whether or not the application at the original datacenter is corrupted, and if it is not, then the controller may delete the copy from the second datacenter. In some embodiments, the one or more environmental factors include elevation, excessive temperatures, voltage spikes, and solar flares. Following the example above, the controller may also take into account whether the solar flares are likely to affect electronics controlling the datacenter's heating, ventilation, and air conditioning (HVAC) system (e.g., fans, dampers, air-conditioning, etc.) because if the datacenter becomes too hot (e.g., excessive temperatures) the servers could malfunction. Said another way, the controller may take into account more than one environmental factor and may account for how the one or more environmental factors interact with one another (e.g., a thunderstorm could cause a voltage spike which could surge to servers and damage them, etc.). In some embodiments, the one or more specific applications are selected to be protected based on a determined threshold level of the predicted elevated bit-flip error rates being exceeded. For example, a program installed on a computer in a datacenter may identify from news alerts and other various sources that a wildfire (e.g., a non-bit-flip threat, but one that can cause excessive heat that could lead to such an error) is 20 miles from the location of the datacenter. The program may, on a scale of 1 to 10, rank the information of having a wildfire 20 miles from the location of the datacenter as a 5, which indicates that the information should be reviewed (e.g., either constantly or at periodic intervals) until the fire is extinguished or to determine whether it is getting closer to the datacenter. The program may be preprogrammed to protect applications/data/information at the datacenter if the ranking ever meets or exceeds an 8, indicating that a bit-flip issue is likely to occur. The program may then identify that winds are blowing in the direction of the datacenter and that the wildfire is likely to get within 10 miles of the datacenter within the next hour. The program may then increase the ranking to a 9 and begin to automatically protect the applications/data/information at the datacenter. In some embodiments, the one or more specific applications are selected to be protected based on respective bit-flip error tolerances of each of the one or more specific applications. The selected one or more specific applications are identified as having a bit-flip error tolerance below a tolerance threshold. For example, a controller of a datacenter, on a 0% to 100% tolerance scale (where a 0% tolerance indicates that the application will be completely destroyed by a bit-flip error and a 100% tolerance indicates that the application has the resilience to survive a bit-flip error), may protect (e.g., by moving, migrating, replicating, etc.) applications during a predicted high bit-flip occurrence if the applications have a tolerance below 35%. For example, the controller may first identify a password-generation application housed at the datacenter and the controller may identify that the password-generation application is only active once a day when it creates a new daily password for a user. 
The controller may give the password-generation application a tolerance of 85%, as it is likely that a bit-flip error would not occur during the password-generation application's active time, and even if it did, the password-generation application could generate a new daily password. This may be in contrast to a metric-recording application that constantly records information. The controller may give the metric-recording application a tolerance of 10% because, if a bit-flip error occurs, all of the recorded data could be lost and never recovered. In some embodiments, the one or more specific applications are selected to be protected based on a priority level automatically set by the particular datacenter. The priority level may be associated with a user input. The user input may indicate a preference for each of the one or more specific applications. In some embodiments, the priority level determines the order in which applications are protected. For example, before utilizing a datacenter for their needs, a user may indicate the importance of their applications. The user may indicate that a primary application is to be protected from error at all times, as it is what all the user's secondary applications operate from. The datacenter may then automatically change a predicted elevated bit-flip error threshold for the primary application to ensure that it is protected/secured at all times. That is, the datacenter could prioritize the primary application by specifying that if the predicted bit-flip errors meet or exceed 1, on a scale of 1 to 10, the primary application should be replicated to other locations within the datacenter (or other datacenters of a cloud computing environment, etc.), whereas all the other applications may only be moved if predicted bit-flip errors meet or exceed 9 on the scale of 1 to 10. In some embodiments, protecting the selected one or more specific applications may include the processor identifying a second datacenter in the distributed computing environment. The second datacenter may be determined to have a lower bit-flip error rate during the certain time period. The processor may transfer the selected one or more specific applications to the second datacenter, instead of merely copying them to the second datacenter. Then the processor may receive the selected one or more specific applications back from the second datacenter after the certain time period. For example, a datacenter in Phoenix, Ariz. may be expected to go through a heatwave for two days and the datacenter may determine that an application should be protected from any possibility of the excessive heat disrupting the datacenter. The datacenter may then communicate (e.g., via the cloud, networked communications, etc.) with a second datacenter in Minneapolis, Minn., which will not be going through a heatwave, and transfer the application to the second datacenter for the duration of the heatwave in Phoenix. After the heatwave, the second datacenter may transfer the application back to the original datacenter. In some embodiments, the application is not migrated/transferred to the second datacenter, but instead the application is copied/replicated/duplicated and the copy/replica/duplicate is sent to the second datacenter as a back-up of the application, as discussed below in more detail.
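The transfer-and-return flow of the Phoenix/Minneapolis example might be sketched as follows; this is a minimal illustration in which transfer_app, the datacenter identifiers, and the sleep-based wait are hypothetical stand-ins for a real orchestration layer's migration APIs and scheduling.

```python
# Minimal sketch of the transfer-and-return protection flow described above.
# transfer_app and the datacenter names are hypothetical placeholders.

import time

def transfer_app(app_id: str, src: str, dst: str) -> None:
    print(f"Transferring {app_id}: {src} -> {dst}")

def protect_for_window(app_id: str, home_dc: str, safe_dc: str,
                       window_seconds: float) -> None:
    # Move the application to the datacenter with the lower predicted
    # bit-flip error rate for the duration of the risk window...
    transfer_app(app_id, src=home_dc, dst=safe_dc)
    time.sleep(window_seconds)  # stand-in for "wait out the heatwave"
    # ...then transfer it back once the window has passed.
    transfer_app(app_id, src=safe_dc, dst=home_dc)

protect_for_window("billing-app", home_dc="phoenix", safe_dc="minneapolis",
                   window_seconds=1.0)
```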
In some embodiments, protecting the selected one or more specific applications may include the processor replicating each of the selected one or more specific applications. The selected one or more specific applications being replicated may be classified as originals. The processor may store each of the replicas in a different location within the particular datacenter (or distributed computing environment). The processor may generate an indication of the locations of each of the replicas. The processor may select, after the certain time period, an uncorrupted version of the one or more specific applications from the originals and the replicas. The processor may delete the unselected originals and replicas from the particular datacenter. For example, a datacenter may predict that a bit-flip error incident is 80% likely to happen and that an application should be protected from the likely incident. The datacenter may then generate four copies of the application (i.e., the original application plus the four copies, for a total of five instances) and store each copy in a different location within the datacenter (e.g., on another server, in a different node within the same server, etc.). The datacenter may additionally note the location of each copy and keep the original application in its current location (however, in some embodiments, the original application could be moved as well). The datacenter may then identify that the time for the incident has passed and examine the original application and the copies. The datacenter may identify that the original application and one copy were corrupted by the incident. The datacenter may discard/delete the original application and the corrupted copy and replace the original application with one of the three remaining uncorrupted copies. The selected copy may be placed in the same location as the original application. The datacenter may then delete the remaining two unselected copies.
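A minimal sketch of this replicate-and-verify flow appears below. The checksum comparison and the simulated corruption are illustrative assumptions only; a real datacenter would use its own integrity checks and storage placement rather than this toy simulation.

```python
# Minimal sketch of the replicate-and-verify flow described above. The SHA-256
# checksum stands in for whatever corruption check a real system would use;
# the random corruption step merely simulates a bit-flip incident.

import hashlib
import random

random.seed(0)  # make the simulation reproducible

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def replicate_and_verify(original: bytes, n_copies: int = 4) -> bytes:
    expected = checksum(original)
    # Store the original plus n_copies replicas (different locations in practice).
    instances = [bytearray(original) for _ in range(n_copies + 1)]
    # ... the risk window passes; some instances may suffer bit-flips ...
    for inst in instances:
        if random.random() < 0.2:  # simulate a corruption event
            inst[0] ^= 0x01        # flip one bit
    # Select an uncorrupted instance; callers discard the rest.
    for inst in instances:
        if checksum(bytes(inst)) == expected:
            return bytes(inst)
    raise RuntimeError("all instances corrupted")

app_image = b"application binary contents"
restored = replicate_and_verify(app_image)
assert restored == app_image
```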
Referring now to FIG. 1, illustrated is an example system 100 for broadly determining a mitigation strategy for protecting an application from the effects of bit-flips, in accordance with embodiments of the present disclosure. In some embodiments, the system 100 can include location data 101, environmental data 103, device profile data 105, application profile data 107, user profile data 109, and mitigation approach data 111. In some embodiments, data 101, 103, 105, 107, 109, and 111 can be found from external sources (e.g., websites, municipalities, etc.) and/or from internal sources (e.g., profile databases stored in the system, etc.). In some embodiments, the system 100 can further include a bit-flip rate and distribution calculation module 102, a mitigation decision module 104, a mitigation orchestration module 106, an impact analyzer 108, and a cost analyzer 110. At a high level, the system 100 can be described in terms of the following steps: one step involves predicting, or detecting, situations (e.g., incidences, conditions, circumstances, etc.) where applications running in a particular data center, cloud environment, or distributed computing environment are likely to experience higher-than-normal bit-flip error rates. Another step determines which applications running in the impacted data center, etc. should have bit-flip remediation (e.g., protection) actions applied during such situations. For this purpose, the disclosed system 100 can use a mechanism for predicting, or determining, the potential impact of bit-flips on individual applications and the (processing, computing, economic, etc.) cost/value of mitigating bit-flips. The final step involves applying one or more of the appropriate remediations (to negate the impact of errors caused by bit-flip errors) for each of the applications identified in the previous step. Following the steps described above, the bit-flip rate and distribution calculation module 102 can receive (or access) the location data 101, the environmental data 103, and the device profile data 105. From the location data 101, the bit-flip rate and distribution calculation module 102 can identify the geographic/physical location of the system 100. From the environmental data 103, the bit-flip rate and distribution calculation module 102 can identify information relating to the location data 101 (e.g., humidity of the region, incoming thunderstorms, solar eclipses, solar storms, time, etc.). It is noted that, in some embodiments, the environmental data 103 can include the location data 101. From the device profile data 105, the bit-flip rate and distribution calculation module 102 can identify what type of (and/or how many) device or devices the system 100 utilizes (e.g., one hundred application servers, two connected gaming computers, etc.). From ingesting and analyzing the location data 101, the environmental data 103, and the device profile data 105, the bit-flip rate and distribution calculation module 102 can predict/determine the probability/likelihood of a bit-flip error occurring in the system 100 at a particular time. The bit-flip rate and distribution calculation module 102 can then send the predicted probability to the mitigation decision module 104. In some embodiments, the impact analyzer 108 can receive the application profile data 107 and the user profile data 109. From the application profile data 107, the impact analyzer 108 can identify what type of applications are housed in the system 100 and which applications would be most affected by bit-flips. From the user profile data 109, the impact analyzer 108 can rank each of the applications based on a user's preference for each application. In some embodiments, the impact analyzer 108 can rank each of the applications in the system 100, based on the application profile data 107 in conjunction with the user profile data 109, and can determine an impact level (e.g., threshold bit-flip level) to keep the system 100 at, and then the impact analyzer 108 can send the ranks (e.g., ranked list of applications) and impact level to the mitigation decision module 104. In some embodiments, the cost analyzer 110 can receive the application profile data 107 and the mitigation approach data 111. From the application profile data 107, the cost analyzer 110 can identify what type of applications are housed in the system 100 and which applications would cost (e.g., computing-wise, economically, processing-wise, etc.) the system 100 the most to restore in the case of bit-flip errors. From the mitigation approach data 111, the cost analyzer 110 can identify each of the protection/security/mitigation actions that could be taken to mitigate or prevent the loss or destruction of the applications due to bit-flips and the costs associated with each protection action (e.g., migrating the applications to a new location in the system 100, migrating the applications to a new system, shutting the system 100 down for the certain period of time, etc.).
In some embodiments, the protection actions (e.g., mitigation approaches/strategies) of the mitigation approach data 111 can be selected from a static library of protection actions that are preprogrammed into the system 100. That is, the system 100 has preprogrammed protection actions for specific types of applications. For example, if the highest priority application is a social media application with millions of users, the protection approach is to replicate the application onto two or more other systems. In some embodiments, the protection actions of the mitigation approach data 111 can be selected via a machine learning aspect of the system 100. For instance, the system 100 may have been preprogrammed with a base set of protection actions, such as “move applications to new servers” and “go into hibernate mode.” However, in a previous instance of utilizing a protection action, such as going into hibernate mode during a thunderstorm and having a voltage surge damage part of the system 100, the system 100 may now know not to utilize hibernate mode during thunderstorms and may now opt to completely shut down. In some embodiments, the cost analyzer 110, based on the application profile data 107 and the mitigation approach data 111, can determine a cost-effective mitigation strategy to utilize to prevent/mitigate bit-flips, and the cost analyzer 110 can send the determination to the mitigation decision module 104. In some embodiments, after receiving the outputs from the bit-flip rate and distribution calculation module 102, the impact analyzer 108, and the cost analyzer 110, the mitigation decision module 104 can determine if there is a need to initiate a mitigation protocol (e.g., has a bit-flip probability threshold been met or exceeded?). If the mitigation decision module 104 determines that a mitigation protocol should not be initiated, the system 100 can periodically (or continuously) restart the steps described here for FIG. 1. If the mitigation decision module 104 determines that a mitigation protocol should be initiated, the mitigation decision module 104 can send the ranked list (e.g., priority) of applications to be protected (e.g., based on the bit-flip tolerance, as discussed above) and the mitigation strategy (protection protocol) to the mitigation orchestration module 106. Upon receiving the information from the mitigation decision module 104, the mitigation orchestration module 106 can enact the mitigation strategy to protect the selected applications in the system 100. Referring now to FIG. 2, illustrated is an example system 200 for determining a mitigation strategy for protecting a specific application from the effects of bit-flips, in accordance with embodiments of the present disclosure. In some embodiments, the system 200 may be the same system or substantially the same system as the system 100, as described above in regard to FIG. 1. In some embodiments, the system 200 includes location data 201, environmental data 203, device profile data 205, application profile data 207, user profile data 209, and mitigation approach data 211. In some embodiments, the data 201, 203, 205, 207, 209, and 211 can be found from external sources (e.g., websites, municipalities, etc.) and/or from internal sources (e.g., profile databases stored in the system, etc.).
In some embodiments, the system 200 further includes a bit-flip rate and distribution calculation module 202, a mitigation decision module 204, a mitigation orchestration module 206, an impact analyzer 208, a cost analyzer 210, a calculation output 213, an impact output 215, a cost output 217, and a decision output 219. In some embodiments, the bit-flip rate and distribution calculation module 202 can receive (or access) the location data 201, the environmental data 203, and the device profile data 205. From the location data 201, the bit-flip rate and distribution calculation module 202 can identify the geographic/physical location (e.g., Boulder, Colo.) of the system 200. From the environmental data 203, the bit-flip rate and distribution calculation module 202 can identify information relating to the location data 201 (e.g., elevation, humidity of the region, incoming thunderstorms, solar eclipses, solar storms, time, etc.). It is noted that, in some embodiments, the environmental data 203 includes the location data 201. From the device profile data 205, the bit-flip rate and distribution calculation module 202 can identify what type of (and/or number of) device or devices the system 200 utilizes (e.g., two hundred mail servers, two connected supercomputers, etc.). From ingesting and analyzing the location data 201, the environmental data 203, and the device profile data 205, the bit-flip rate and distribution calculation module 202 can predict/determine the probability/likelihood of a bit-flip error occurring in the system 200 at a particular time. The prediction of the probability can be output by the bit-flip rate and distribution calculation module 202 as the calculation output 213, which for this example indicates that the system 200 located in Boulder, Colo. has a “1.2% chance of a bit-flip error in the next 30 days.” The calculation output 213 of the bit-flip rate and distribution calculation module 202 can then be sent to the mitigation decision module 204. In some embodiments, the impact analyzer 208 can receive the application profile data 207 and the user profile data 209. From the application profile data 207, the impact analyzer 208 can identify what type of applications are housed in the system 200 and which applications would be most affected by bit-flips. From the user profile data 209, the impact analyzer 208 can rank each of the applications based on a user's preference for each application. In some embodiments, the impact analyzer 208 can rank each of the applications in the system 200, based on the application profile data 207 in conjunction with the user profile data 209, and determine an impact level (e.g., threshold bit-flip level) to keep the system 200 at or below. The impact analyzer 208 can output the impact level as the impact output 215, which, for this example, indicates that the system 200 should have a predicted impact level kept “below 0.1%/month.” In some embodiments, the impact output 215 includes the ranked list of applications to be protected so that the impact level (e.g., 0.1%/month) is not reached. The impact output 215 can then be sent to the mitigation decision module 204. In some embodiments, the cost analyzer 210 can receive the application profile data 207 and the mitigation approach data 211. From the application profile data 207, the cost analyzer 210 can identify what type of applications are housed in the system 200 and which applications would cost (e.g., computing-wise, economically, processing-wise, etc.)
the system 200 the most to restore in the case of bit-flip errors. From the mitigation approach data 211, the cost analyzer 210 can identify protection actions that could be taken to mitigate or prevent the loss or destruction of the applications due to bit-flips and the costs associated with the protection actions (e.g., migrating the applications to a new location in the system 200, migrating the applications to a new system, shutting the system 200 down for the certain period of time, etc.). In some embodiments, the cost analyzer 210, based on the application profile data 207 and the mitigation approach data 211, can determine the cost effectiveness of mitigation strategies that could be utilized to prevent/mitigate bit-flips and can output the mitigation strategies as the cost output 217, which, on a scale of 1 to 10 (where 1 means a low cost associated with the strategy and 10 means a high cost associated with the strategy), indicates that “migration=5 [and] replicas=10.” The cost output 217 can then be sent to the mitigation decision module 204, where the mitigation decision module 204 can analyze the mitigation strategies and their associated costs and, in some embodiments, can determine the most cost-effective mitigation strategy to utilize. In some embodiments, after receiving the calculation output 213, the impact output 215, and the cost output 217, the mitigation decision module 204 can determine if there is a need to initiate a mitigation protocol (e.g., has a bit-flip probability threshold been met or exceeded?). If the mitigation decision module 204 determines that a mitigation protocol should not be initiated, the system 200 can periodically (e.g., in another 30 days, or continuously) restart the steps described here in FIG. 2. If the mitigation decision module 204 determines that a mitigation protocol should be initiated, the mitigation decision module 204 can generate the decision output 219, which can indicate that the system 200 located in Boulder, Colo. should have services migrated “from location 1 (e.g., Boulder, Colo.) to location 2 (e.g., New Orleans, La., where the elevation is lower, etc.).” The decision output 219 can then be sent, in some embodiments, with a ranked list of applications to be protected (e.g., based on the bit-flip tolerance, as discussed above) and the mitigation strategy (protection protocol) to the mitigation orchestration module 206. Upon receiving the decision output 219, the mitigation orchestration module 206 can enact the mitigation strategy to protect the selected applications in the system 200.
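Using the example outputs above (calculation output 213, impact output 215, and cost output 217), the decision step might be sketched as follows. The threshold comparison, the cost table, and the strategy names are illustrative assumptions taken from the example values, not the disclosed decision logic itself.

```python
# Minimal sketch of the mitigation decision described for FIG. 2, using the
# example outputs above. Numbers, units, and strategy costs are illustrative;
# the 30-day predicted rate is treated as roughly comparable to the
# per-month impact limit.

from typing import Optional

PREDICTED_RATE = 0.012      # calculation output 213: 1.2% chance per 30 days
IMPACT_LIMIT = 0.001        # impact output 215: keep below 0.1% per month
STRATEGY_COSTS = {"migration": 5, "replicas": 10}  # cost output 217 (1=low, 10=high)

def decide_mitigation() -> Optional[str]:
    if PREDICTED_RATE < IMPACT_LIMIT:
        return None  # no protocol needed; re-run the checks in another 30 days
    # Threshold met or exceeded: choose the lowest-cost mitigation strategy.
    return min(STRATEGY_COSTS, key=STRATEGY_COSTS.get)

strategy = decide_mitigation()
print(strategy)  # -> "migration", handed off to the orchestration module
```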
Referring now to FIG. 3, illustrated is a flowchart of an example method 300 for protecting one or more applications during a time period predicted to have elevated bit-flip error rates, in accordance with embodiments of the present disclosure. In some embodiments, the method 300 is performed by a processor (in a datacenter/system/etc.). In some embodiments, the method 300 begins at operation 302. At operation 302, a processor receives information about one or more environmental factors. The method 300 proceeds to operation 304. At operation 304, the processor predicts, based on the one or more environmental factors, that a particular data center in a distributed computing environment will experience elevated bit-flip error rates during a certain time period. The method 300 proceeds to operation 306. At operation 306, the processor selects, based on the predicted elevated bit-flip error rates, one or more specific applications in the particular data center to be protected. The method 300 proceeds to operation 308. At operation 308, the processor protects the selected one or more specific applications during the certain time period of the predicted elevated bit-flip error rates. In some embodiments, after operation 308 the method 300 ends. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present disclosure are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail).
The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now to FIG. 4, illustrative cloud computing environment 410 is depicted. As shown, cloud computing environment 410 includes one or more cloud computing nodes 400 with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone 400A, desktop computer 400B, laptop computer 400C, and/or automobile computer system 400N may communicate. Nodes 400 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment 410 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices 400A-N shown in FIG. 
4 are intended to be illustrative only and that computing nodes 400 and cloud computing environment 410 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now to FIG. 5, a set of functional abstraction layers provided by cloud computing environment 410 (FIG. 4) is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 5 are intended to be illustrative only and embodiments of the disclosure are not limited thereto. As depicted below, the following layers and corresponding functions are provided. Hardware and software layer 500 includes hardware and software components. Examples of hardware components include: mainframes 502; RISC (Reduced Instruction Set Computer) architecture based servers 504; servers 506; blade servers 508; storage devices 510; and networks and networking components 512. In some embodiments, software components include network application server software 514 and database software 516. Virtualization layer 520 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 522; virtual storage 524; virtual networks 526, including virtual private networks; virtual applications and operating systems 528; and virtual clients 530. In one example, management layer 540 may provide the functions described below. Resource provisioning 542 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing 544 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 546 provides access to the cloud computing environment for consumers and system administrators. Service level management 548 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 550 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer 560 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 562; software development and lifecycle management 564; virtual classroom education delivery 566; data analytics processing 568; transaction processing 570; and bit-flip protection processing 572. Referring now to FIG. 6, shown is a high-level block diagram of an example computer system 601 that may be used in implementing one or more of the methods, tools, and modules, and any related functions, described herein (e.g., using one or more processor circuits or computer processors of the computer), in accordance with embodiments of the present disclosure. 
In some embodiments, the major components of the computer system 601 may comprise one or more CPUs 602, a memory subsystem 604, a terminal interface 612, a storage interface 616, an I/O (Input/Output) device interface 614, and a network interface 618, all of which may be communicatively coupled, directly or indirectly, for inter-component communication via a memory bus 603, an I/O bus 608, and an I/O bus interface unit 610. The computer system 601 may contain one or more general-purpose programmable central processing units (CPUs) 602A, 602B, 602C, and 602D, herein generically referred to as the CPU 602. In some embodiments, the computer system 601 may contain multiple processors typical of a relatively large system; however, in other embodiments the computer system 601 may alternatively be a single CPU system. Each CPU 602 may execute instructions stored in the memory subsystem 604 and may include one or more levels of on-board cache. System memory 604 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 622 or cache memory 624. Computer system 601 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 626 can be provided for reading from and writing to non-removable, non-volatile magnetic media, such as a “hard drive.” Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), or an optical disk drive for reading from or writing to a removable, non-volatile optical disc such as a CD-ROM, DVD-ROM or other optical media can be provided. In addition, memory 604 can include flash memory, e.g., a flash memory stick drive or a flash drive. Memory devices can be connected to memory bus 603 by one or more data media interfaces. The memory 604 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of various embodiments. One or more programs/utilities 628, each having at least one set of program modules 630, may be stored in memory 604. The programs/utilities 628 may include a hypervisor (also referred to as a virtual machine monitor), one or more operating systems, one or more application programs, other program modules, and program data. Each of the operating systems, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Programs 628 and/or program modules 630 generally perform the functions or methodologies of various embodiments. Although the memory bus 603 is shown in FIG. 6 as a single bus structure providing a direct communication path among the CPUs 602, the memory subsystem 604, and the I/O bus interface 610, the memory bus 603 may, in some embodiments, include multiple different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration. Furthermore, while the I/O bus interface 610 and the I/O bus 608 are shown as single respective units, the computer system 601 may, in some embodiments, contain multiple I/O bus interface units 610, multiple I/O buses 608, or both.
Further, while multiple I/O interface units are shown, which separate the I/O bus 608 from various communications paths running to the various I/O devices, in other embodiments some or all of the I/O devices may be connected directly to one or more system I/O buses. In some embodiments, the computer system 601 may be a multi-user mainframe computer system, a single-user system, or a server computer or similar device that has little or no direct user interface, but receives requests from other computer systems (clients). Further, in some embodiments, the computer system 601 may be implemented as a desktop computer, portable computer, laptop or notebook computer, tablet computer, pocket computer, telephone, smartphone, network switches or routers, or any other appropriate type of electronic device. It is noted that FIG. 6 is intended to depict the representative major components of an exemplary computer system 601. In some embodiments, however, individual components may have greater or lesser complexity than as represented in FIG. 6, components other than or in addition to those shown in FIG. 6 may be present, and the number, type, and configuration of such components may vary. As discussed in more detail herein, it is contemplated that some or all of the operations of some of the embodiments of methods described herein may be performed in alternative orders or may not be performed at all; furthermore, multiple operations may occur at the same time or as an internal part of a larger process. The present disclosure may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. 
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. 
These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. Although the present disclosure has been described in terms of specific embodiments, it is anticipated that alterations and modifications thereof will become apparent to those skilled in the art. Therefore, it is intended that the following claims be interpreted as covering all such alterations and modifications as fall within the true spirit and scope of the disclosure. 16558545 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Dec 27th, 2019 12:00AM https://www.uspto.gov?id=US11313811-20220426 Dynamic determination of irrigation-related data using machine learning techniques Methods, systems, and computer program products for dynamic determination of irrigation-related data using machine learning techniques are provided herein. A computer-implemented method includes obtaining irrigation-related data pertaining to a region of interest; determining temporal values corresponding to irrigation activity at the region of interest by performing spatiotemporal analysis of the irrigation-related data; determining amounts of water utilized in connection with the irrigation activity corresponding to the temporal values by applying machine learning techniques to the irrigation-related data; determining types of irrigation activity attributed to the irrigation activity by applying machine learning techniques to the irrigation-related data and determined amounts of water; determining irrigation-related variables pertaining to the region of interest by executing a physical model using, as inputs, the determined temporal values, amounts of water, and types of irrigation activity, wherein the irrigation-related variables include an extent of irrigation activity; and outputting the determined irrigation-related variables to a user. 11313811 1. A computer-implemented method comprising: obtaining multiple items of irrigation-related data pertaining to at least one region of interest; determining one or more temporal values corresponding to irrigation activity at one or more portions of the at least one region of interest by performing spatiotemporal analysis of at least a portion of the obtained irrigation-related data; determining one or more amounts of water utilized in connection with the irrigation activity corresponding to the one or more determined temporal values by applying a first set of one or more machine learning techniques to at least a portion of the obtained irrigation-related data; determining one or more types of irrigation activity to be attributed to the irrigation activity corresponding to the one or more determined temporal values by applying a second set of one or more machine learning techniques to (i) at least a portion of the obtained irrigation-related data and (ii) the one or more determined amounts of water utilized in connection with the irrigation activity; determining one or more irrigation-related variables pertaining to the at least one region of interest by executing a physical model using, as inputs, (i) the one or more determined temporal values, (ii) the one or more determined amounts of water utilized in connection with the irrigation activity, and (iii) the one or more determined types of irrigation activity to be attributed to the irrigation activity, wherein the one or more irrigation-related variables comprises at least an extent of the irrigation activity; and outputting the one or more determined irrigation-related variables to at least one user; wherein the method is carried out by at least one computing device. 2. The computer-implemented method of claim 1, wherein the method is carried out without the use of one or more sensors. 3. The computer-implemented method of claim 1, wherein said performing spatiotemporal analysis comprises performing spatiotemporal analysis of one or more backscattering parameters derived from microwave satellite data associated with the at least a portion of the obtained irrigation-related data. 4. 
The computer-implemented method of claim 1, wherein said applying a first set of one or more machine learning techniques to at least a portion of the obtained irrigation-related data comprises using one or more backscattering parameters derived from microwave satellite data associated with the at least a portion of the obtained irrigation-related data. 5. The computer-implemented method of claim 1, wherein said determining one or more types of irrigation activity comprises determining at least one pattern pertaining to one or more aspects of the obtained irrigation-related data. 6. The computer-implemented method of claim 5, wherein the at least one pattern pertaining to one or more aspects of the obtained irrigation-related data comprises at least one surface-related pattern. 7. The computer-implemented method of claim 5, wherein the at least one pattern pertaining to one or more aspects of the obtained irrigation-related data comprises at least one of a sprinkler-related pattern, a pivot-related pattern, and a flood irrigation-related pattern. 8. The computer-implemented method of claim 5, wherein the at least one pattern pertaining to one or more aspects of the obtained irrigation-related data comprises at least one drip-related pattern. 9. The computer-implemented method of claim 1, wherein the multiple items of irrigation-related data comprise one or more items of weather data. 10. The computer-implemented method of claim 1, wherein the multiple items of irrigation-related data comprise one or more items of multispectral data ranging across the electromagnetic spectrum. 11. The computer-implemented method of claim 1, wherein the multiple items of irrigation-related data comprise one or more items of hyperspectral data. 12. The computer-implemented method of claim 1, wherein the multiple items of irrigation-related data comprise one or more items of elevation data. 13. The computer-implemented method of claim 1, wherein the multiple items of irrigation-related data comprise one or more items of data pertaining to soil moisture. 14. The computer-implemented method of claim 1, wherein the one or more irrigation-related variables comprises at least one of soil temperature and evapotranspiration. 15. The computer-implemented method of claim 1, wherein the one or more machine learning techniques comprises at least one random forest algorithm. 16. The computer-implemented method of claim 1, wherein the one or more machine learning techniques comprises a support vector regression. 17. The computer-implemented method of claim 1, wherein the one or more machine learning techniques comprises at least one neural network. 18. 
A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computing device to cause the computing device to: obtain multiple items of irrigation-related data pertaining to at least one region of interest; determine one or more temporal values corresponding to irrigation activity at one or more portions of the at least one region of interest by performing spatiotemporal analysis of at least a portion of the obtained irrigation-related data; determine one or more amounts of water utilized in connection with the irrigation activity corresponding to the one or more determined temporal values by applying a first set of one or more machine learning techniques to at least a portion of the obtained irrigation-related data; determine one or more types of irrigation activity to be attributed to the irrigation activity corresponding to the one or more determined temporal values by applying a second set of one or more machine learning techniques to (i) at least a portion of the obtained irrigation-related data and (ii) the one or more determined amounts of water utilized in connection with the irrigation activity; determine one or more irrigation-related variables pertaining to the at least one region of interest by executing a physical model using, as inputs, (i) the one or more determined temporal values, (ii) the one or more determined amounts of water utilized in connection with the irrigation activity, and (iii) the one or more determined types of irrigation activity to be attributed to the irrigation activity, wherein the one or more irrigation-related variables comprises at least an extent of the irrigation activity; and output the one or more determined irrigation-related variables to at least one user. 19. 
A system comprising: a memory; and at least one processor operably coupled to the memory and configured for: obtaining multiple items of irrigation-related data pertaining to at least one region of interest; determining one or more temporal values corresponding to irrigation activity at one or more portions of the at least one region of interest by performing spatiotemporal analysis of at least a portion of the obtained irrigation-related data; determining one or more amounts of water utilized in connection with the irrigation activity corresponding to the one or more determined temporal values by applying a first set of one or more machine learning techniques to at least a portion of the obtained irrigation-related data; determining one or more types of irrigation activity to be attributed to the irrigation activity corresponding to the one or more determined temporal values by applying a second set of one or more machine learning techniques to (i) at least a portion of the obtained irrigation-related data and (ii) the one or more determined amounts of water utilized in connection with the irrigation activity; determining one or more irrigation-related variables pertaining to the at least one region of interest by executing a physical model using, as inputs, (i) the one or more determined temporal values, (ii) the one or more determined amounts of water utilized in connection with the irrigation activity, and (iii) the one or more determined types of irrigation activity to be attributed to the irrigation activity, wherein the one or more irrigation-related variables comprises at least an extent of the irrigation activity; and outputting the one or more determined irrigation-related variables to at least one user. 20. A computer-implemented method comprising: determining one or more temporal values corresponding to irrigation activity at one or more portions of at least one region of interest by performing spatiotemporal analysis of one or more backscattering parameters derived from microwave satellite data associated with irrigation-related data; determining one or more amounts of water utilized in connection with the irrigation activity corresponding to the one or more determined temporal values by applying a first set of one or more machine learning techniques to at least a portion of the irrigation-related data and the one or more backscattering parameters; determining one or more types of irrigation activity to be attributed to the irrigation activity corresponding to the one or more determined temporal values by applying a second set of one or more machine learning techniques to (i) at least a portion of the irrigation-related data and (ii) the one or more determined amounts of water utilized in connection with the irrigation activity; determining an extent of the irrigation activity by executing a physical model using, as inputs, (i) the one or more determined temporal values, (ii) the one or more determined amounts of water utilized in connection with the irrigation activity, and (iii) the one or more determined types of irrigation activity to be attributed to the irrigation activity; and performing one or more automated actions in response to (i) the one or more determined temporal values, (ii) the one or more determined amounts of water utilized in connection with the irrigation activity, (iii) the one or more determined types of irrigation activity to be attributed to the irrigation activity, and (iv) the determined extent of the irrigation activity; wherein the method is 
carried out by at least one computing device. 20 FIELD The present application generally relates to information technology and, more particularly, to data management using machine learning techniques. BACKGROUND Data pertaining to irrigation dates, types, and amounts can be useful for many agriculture services, such as soil moisture estimation, which affects crop yields and quality, crop advisory and field management, transportation and administration of fertilizers, etc. Existing agricultural management approaches commonly include manually manipulating resources such as water, energy, etc. However, unstructured data, such as irrigation date information, for example, are typically not complete and/or readily available for agricultural practitioners. SUMMARY In one embodiment of the present invention, techniques for dynamic determination of irrigation-related data using machine learning techniques are provided. An exemplary computer-implemented method can include obtaining multiple items of irrigation-related data pertaining to at least one region of interest, and determining one or more temporal values corresponding to irrigation activity at one or more portions of the at least one region of interest by performing spatiotemporal analysis of at least a portion of the obtained irrigation-related data. Such a method also includes determining one or more amounts of water utilized in connection with the irrigation activity corresponding to the one or more determined temporal values by applying a first set of one or more machine learning techniques to at least a portion of the obtained irrigation-related data. Also, such a method includes determining one or more types of irrigation activity to be attributed to the irrigation activity corresponding to the one or more determined temporal values by applying a second set of one or more machine learning techniques to (i) at least a portion of the obtained irrigation-related data and (ii) the one or more determined amounts of water utilized in connection with the irrigation activity. Further, such a method also includes determining one or more irrigation-related variables pertaining to the at least one region of interest by executing a physical model using, as inputs, (i) the one or more determined temporal values, (ii) the one or more determined amounts of water utilized in connection with the irrigation activity, and (iii) the one or more determined types of irrigation activity to be attributed to the irrigation activity, wherein the one or more irrigation-related variables comprises at least an extent of the irrigation activity. Additionally, the method also includes outputting the one or more determined irrigation-related variables to at least one user. Another embodiment of the invention or elements thereof can be implemented in the form of a computer program product tangibly embodying computer readable instructions which, when implemented, cause a computer to carry out a plurality of method steps, as described herein. Furthermore, another embodiment of the invention or elements thereof can be implemented in the form of a system including a memory and at least one processor that is coupled to the memory and configured to perform noted method steps. 
Yet further, another embodiment of the invention or elements thereof can be implemented in the form of means for carrying out the method steps described herein, or elements thereof; the means can include hardware module(s) or a combination of hardware and software modules, wherein the software modules are stored in a tangible computer-readable storage medium (or multiple such media). These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram illustrating system architecture, according to an exemplary embodiment of the invention; FIG. 2 is a diagram illustrating system architecture, according to an embodiment of the invention; FIG. 3 is a diagram illustrating an example implementation of a land surface model, according to an exemplary embodiment of the invention; FIG. 4 is a flow diagram illustrating techniques according to an embodiment of the invention; FIG. 5 is a system diagram of an exemplary computer system on which at least one embodiment of the invention can be implemented; FIG. 6 depicts a cloud computing environment according to an embodiment of the present invention; and FIG. 7 depicts abstraction model layers according to an embodiment of the present invention. DETAILED DESCRIPTION As described herein, an embodiment of the present invention includes dynamic determination of irrigation-related data using machine learning techniques. At least one embodiment includes estimating irrigation-related information pertaining to at least one region of interest such as, for example, irrigation date(s), extent to which irrigation activity was carried out, as well as amount and type of irrigation activity based on spatiotemporal analysis of backscattering parameters and weather data. As detailed herein, such an embodiment can include obtaining and/or monitoring, for a target region, optical and microwave multispectral and/or hyperspectral data (e.g., from satellites that use active and/or passive sensors), weather data (e.g., rainfall data), soil moisture data (e.g., from high definition soil moisture (HDSM) sources and/or soil moisture active passive (SMAP) sources), drainage pattern data (e.g., from digital elevation models (DEMs)), etc. Additionally, such an embodiment can also include analyzing such obtained and/or monitored data to determine and/or estimate irrigation date(s), the extent to which irrigation activity was carried out, as well as the amount and type of irrigation activity. Also, such determination can be carried out, for example, at a farm- and/or field-level with respect to the region of interest. Additionally, recorded irrigation information, if available, can be used to improve the confidence in the determinations. Such an embodiment enables improvement of the accuracy of one or more models utilized to provide estimates of irrigation-related and/or agronomic factors, while reducing dependence on sensors and monitoring systems which are cost- and maintenance-intensive. In at least one embodiment, estimating the date(s) and extent(s) of irrigation activity includes performing spatiotemporal analysis of one or more backscattering parameters derived from microwave satellite data, rainfall data, soil moisture data, optical data, etc.
Also, in one or more embodiments, estimating the amount(s) of irrigated water utilized in particular irrigation activity includes using one or more backscattering parameters on irrigation activity-related timestamps in connection with one or more machine learning techniques. Such an embodiment can include, for example, modeling backscatter parameters versus rainfall data, taking into account temperature, run-off, and other factors that introduce latency between rainfall and/or irrigation events and the observed backscattering parameters (VV and VH). As noted above and further detailed herein, backscatter is the portion of an outgoing radar signal that the target (e.g., the Earth's surface) redirects directly back towards the radar antenna of a microwave satellite system. The magnitude of the backscattered signal can depend on a variety of factors, such as physical factors (e.g., the dielectric constant of the surface materials, which also depends on moisture content), geometric factors (e.g., surface roughness, slopes, orientation of objects relative to the radar beam direction, etc.), and the types of landcover (e.g., soil, vegetation, man-made objects, etc.). Additionally, in accordance with one or more embodiments, a backscattered signal is polarized, and the polarizations can be controlled between H and V as follows: (a) HH: horizontal transmit, horizontal receive; (b) HV: horizontal transmit, vertical receive; (c) VH: vertical transmit, horizontal receive; and (d) VV: vertical transmit, vertical receive. These backscattered signal polarizations are referred to herein as “backscattering parameters,” and can be expressed, for example, in decibel (dB) units. Further, in at least one embodiment, backscatter parameters can be derived, for example, from microwave satellite images via processing steps (available via tools such as, for example, the Sentinel Application Platform (SNAP)). Such processing steps can include, for example, applying an orbit file to correct backscattering parameters using satellite position and velocity information contained within the metadata of the image. Such processing steps can also include removing thermal noise. Microwave satellite image intensity is disturbed by additive thermal noise, particularly in the cross-polarization channel (HV/VH). Accordingly, thermal noise removal reduces noise effects in the entire image scene and results in reduced discontinuities. Additionally, such processing steps can also include removing border noise. Microwave satellite images have radiometric artifacts at the image borders, and as such, one or more embodiments include implementing at least one border noise removal algorithm to remove low-intensity noise and invalid data on scene edges. Such processing steps can additionally include calibration, which includes a procedure that converts digital pixel values to radiometrically calibrated backscattering parameters. Further, such processing steps can include speckle filtering. Speckle, appearing in microwave satellite images as granular noise, is typically due to the interference of waves reflected from elementary scattering elements. Speckle filtering is a procedure carried out to increase image quality by reducing speckle. Processing steps can also include Range Doppler terrain correction, which is a correction of geometric distortions caused by topography (such as foreshortening and shadows) using a digital elevation model to correct the location of each pixel.
Further, the processing steps can additionally include conversion to dB, whereby unitless backscattering parameters are converted to dB using a logarithmic transformation. Additionally, in at least one embodiment, estimating the type(s) of irrigation activity associated with particular irrigation activity includes using optical and/or microwave data to determine one or more patterns across various contexts (using, for example, machine learning techniques to determine one or more relevant threshold values). Such contexts can include, for example, contexts pertaining to soil moisture and frequency, such as surface-related information, sprinkler and/or pivot-related information, drip-related information, etc. Further, as detailed herein, one or more embodiments include using estimations and/or determinations pertaining to irrigation activity date(s) as well as the extent(s) and amount(s) of irrigation activity as inputs into a physical model to simulate and/or estimate land surface parameters such as field-scale soil moisture, soil temperature, and/or evapotranspiration. Additionally, incorporating soil moisture data into the physical (agricultural) model reduces the uncertainty of modelled crop yields and quality when weather-related input data to the model are subject to non-trivial levels of uncertainty. Further, in one or more embodiments, estimations and/or determinations pertaining to irrigation activity date(s) as well as the extent(s), amount(s), and type(s) of irrigation activity can serve as effective inputs for pest and disease estimation. FIG. 1 is a diagram illustrating system architecture, according to an embodiment of the invention. By way of illustration, FIG. 1 depicts input data in the form of optical/microwave data 102, weather data 104, and soil moisture data 106 (all of which encompass spatiotemporal datasets), as well as elevation map data 108, which encompasses a spatial dataset. As illustrated via component 110, a change in soil water content is detected at timestamp t using microwave data 102 at timestamps t and t−1. If the change is found to be decreasing (or if no change is found), as illustrated by component 112, an implication is made (as illustrated via component 114) that no rainfall or irrigation events have occurred. This can be further confirmed, for example, from the weather (rainfall) data 104 and the soil moisture data 106. If, as illustrated via component 116, a soil water content increase is found, it implies that there is either rainfall (as illustrated by component 118) or an irrigation event (as illustrated by component 120). Here, the weather (rainfall) data 104 can help to distinguish soil water increment due to irrigation activity (component 120), thereby enabling identification of an irrigation date (as illustrated via component 126). Also, pixel-based identification can be performed for the region of interest to enable determination of the extent of irrigation (as also illustrated via component 126). As also depicted in FIG. 1, a pre-trained machine learning model 130 (as further detailed, for example, in connection with component 226 in FIG. 2) estimates the amount of irrigated water (as illustrated via component 122) for a given timestamp t using input features such as backscatter parameters 102, weather data 104 (e.g., evapotranspiration, temperature, humidity, etc.), and run-off and/or elevation map data 108.
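For illustration only (this sketch does not appear in the patent), the dB conversion and the FIG. 1 event-detection logic just described can be expressed compactly in Python. The array shapes, variable names, and threshold values below are assumptions; in practice, the patent leaves threshold selection to machine learning techniques.

```python
import numpy as np

def to_db(sigma0_linear):
    """Convert unitless backscattering parameters to decibels (dB)."""
    return 10.0 * np.log10(sigma0_linear)

def detect_irrigation(vv_t, vv_t_minus_1, rainfall_mm,
                      increase_db=1.5, rain_mm=1.0):
    """Return a boolean mask of pixels whose soil water content increased
    at timestamp t with no concurrent rainfall (i.e., likely irrigation).
    increase_db and rain_mm are illustrative thresholds only."""
    delta = to_db(vv_t) - to_db(vv_t_minus_1)   # backscatter change, dB
    water_increase = delta > increase_db        # soil water content rose
    no_rain = rainfall_mm < rain_mm             # weather data rules out rain
    return water_increase & no_rain

# Toy 2x2 region of interest: linear VV backscatter at t-1 and t.
vv_prev = np.array([[0.020, 0.021], [0.019, 0.020]])
vv_now  = np.array([[0.030, 0.021], [0.032, 0.020]])
rain    = np.array([[0.0, 0.0], [0.0, 0.0]])    # mm over the same interval

mask = detect_irrigation(vv_now, vv_prev, rain)
print(mask)          # per-pixel irrigation flags
print(mask.mean())   # fraction of the region irrigated (extent)
```

The per-pixel mask corresponds to the pixel-based identification of component 126: its true cells mark irrigated locations, and their fraction gives the extent of irrigation.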
Another pre-trained machine learning model 132 is implemented to classify irrigation types (as illustrated via component 124) using input features such as backscatter parameters 102, the normalized difference water index (NDWI) from optical satellite image data (further detailed in connection with component 214 in FIG. 2), and elevation map data 108. Accordingly, component 126 includes gridded irrigation information (timestamp, extent and amount of water) in a geographic information system, which then can be used as an input (as illustrated via component 128) for accurate estimation of soil moisture at field scale, as well as other agricultural applications such as a yield model, etc. FIG. 2 is a diagram illustrating system architecture, according to an embodiment of the invention. By way of illustration, FIG. 2 depicts input parameters including synthetic aperture radar (SAR) backscatter parameter VH 202 and SAR backscatter parameter VV 204, whose positive difference between timestamps t and t−1 is used to find an increase of soil water content via step 216 for timestamp t. Additionally, an input parameter of rainfall data 206 can be used to discriminate soil water content increases due to irrigation activities in step 218. Supplementary inputs including soil moisture data 208, evapotranspiration data 210, and humidity data 212 (including, for example, data pertaining to humidity increases due to irrigation) can be used to further confirm the estimation of irrigation events in connection with step 220. In addition, such data can also be used as input features in machine learning model 222 for estimating the total irrigated water at timestamp t. Further, in connection with machine learning model 222, an estimated amount of rainfall is determined via step 224 and an estimated amount of irrigated water is determined via step 226. Also, normalized difference water index (NDWI) data 214 can be utilized in conjunction with the supplementary inputs, and can serve as portions of the input features for machine learning classification of different types of irrigation (such as detailed in connection with component 124 in FIG. 1). The NDWI data 214, in one or more embodiments, uses reflected near-infrared radiation and visible green light to enhance the presence of open water features while eliminating the presence of soil and vegetation features. NDWI returns a positive or close to zero value when water or wet soil features are encountered, and returns a negative value for dry soil and vegetation features. FIG. 3 is a diagram illustrating an example implementation of a land surface model, according to an exemplary embodiment of the invention. By way of illustration, FIG. 3 depicts estimation of soil moisture using a land surface model 316 and a set of input parameters. Such parameters can include, for example, dynamic atmospheric forcing data 302 (also known as weather data) required at the land surface to simulate soil moisture, soil temperature at different depths, evapotranspiration, etc., wherein such data 302 can include specific variables 308 such as wind speed and direction data, temperature data, humidity data, pressure data, downwards shortwave (SW) and longwave (LW) data, and precipitation data. Such parameters can also include parametric variables 304, which can include specific variables 310 such as a digital elevation map (DEM), land use and land cover (LULC) information, soil texture information, green vegetation fraction, surface albedo, etc.
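As a brief aside on the NDWI data 214 described above: the index is commonly computed from green and near-infrared reflectance (the McFeeters formulation), and the sketch below (band values and names are illustrative assumptions, not taken from the patent) reproduces the sign behavior the passage describes.

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """NDWI = (green - NIR) / (green + NIR); positive or near zero for
    water/wet soil, negative for dry soil and vegetation."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)

# Toy reflectances: open water, wet soil, dry vegetation.
green = np.array([0.08, 0.10, 0.09])
nir   = np.array([0.02, 0.09, 0.35])
print(ndwi(green, nir))  # -> approximately [0.60, 0.05, -0.59]
```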
Additionally, the land surface model input parameters can include initializing fields 306, which can include specific fields 312 such as soil moisture data and soil temperature data (at multiple layers) required to start a simulation at an initial model time. As also depicted by FIG. 3, such parameters are input to land surface model 316 along with gridded irrigation data 314 (timestamp of an irrigation event, extent and amount of irrigation associated with the irrigation event, etc.), wherein the land surface model 316 processes such inputs and generates an output 318 that includes an estimation of soil moisture, soil temperature, and evapotranspiration information. FIG. 4 is a flow diagram illustrating techniques according to an embodiment of the present invention. Step 402 includes obtaining multiple items of irrigation-related data pertaining to at least one region of interest. The multiple items of irrigation-related data can include one or more items of weather data, one or more items of multispectral data (e.g., ranging across the electromagnetic spectrum), one or more items of hyperspectral data, one or more items of data pertaining to soil moisture, and/or one or more items of elevation data. Step 404 includes determining one or more temporal values corresponding to irrigation activity at one or more portions of the at least one region of interest by performing spatiotemporal analysis of at least a portion of the obtained irrigation-related data. Performing spatiotemporal analysis can include performing spatiotemporal analysis of one or more backscattering parameters derived from the at least a portion of the obtained irrigation-related data. Step 406 includes determining one or more amounts of water utilized in connection with the irrigation activity corresponding to the one or more determined temporal values by applying a first set of one or more machine learning techniques to at least a portion of the obtained irrigation-related data. Applying a first set of one or more machine learning techniques to at least a portion of the obtained irrigation-related data can include using one or more backscattering parameters derived from satellite data associated with the at least a portion of the obtained irrigation-related data. Step 408 includes determining one or more types of irrigation activity to be attributed to the irrigation activity corresponding to the one or more determined temporal values by applying a second set of one or more machine learning techniques to (i) at least a portion of the obtained irrigation-related data and (ii) the one or more determined amounts of water utilized in connection with the irrigation activity. Determining one or more types of irrigation activity can include determining at least one pattern pertaining to one or more aspects of the obtained irrigation-related data. The at least one pattern pertaining to one or more aspects of the obtained irrigation-related data can include at least one surface-related pattern, at least one sprinkler-related pattern, at least one pivot-related pattern, at least one flood irrigation-related pattern, and/or at least one drip-related pattern. In one or more embodiments, the first set of one or more machine learning techniques can include the same one or more machine learning techniques as the second set, or can be distinct from the second set of one or more machine learning techniques.
Further, in at least one embodiment, the one or more machine learning techniques can include at least one random forest algorithm, a support vector regression, and/or at least one neural network. Step 410 includes determining one or more irrigation-related variables pertaining to the at least one region of interest by executing a physical model using, as inputs, (i) the one or more determined temporal values, (ii) the one or more determined amounts of water utilized in connection with the irrigation activity, and (iii) the one or more determined types of irrigation activity to be attributed to the irrigation activity, wherein the one or more irrigation-related variables comprises at least an extent of the irrigation activity. In one or more embodiments, determining the extent of irrigation activity enables the inclusion of information pertaining to how an entire region of interest (e.g., field, farm, etc.) was irrigated, which can then be utilized to identify one or more hot spots of low, optimal, and/or high irrigation areas in the region of interest using the determined irrigation date, irrigation type, and irrigation amount information. Also, in one or more embodiments, the one or more irrigation-related variables can also include soil moisture, soil temperature, and/or evapotranspiration. Step 412 includes outputting the one or more determined irrigation-related variables to at least one user. As detailed herein, the techniques depicted in FIG. 4 can be carried out without the use of one or more sensors. Additionally, at least one embodiment includes performing one or more automated actions (e.g., outputting related information to at least one user, modifying one or more irrigation parameters and/or configurations in connection with irrigation activity, etc.) in response to (i) the one or more determined temporal values, (ii) the one or more determined amounts of water utilized in connection with the irrigation activity, (iii) the one or more determined types of irrigation activity to be attributed to the irrigation activity, and (iv) the determined extent of the irrigation activity. The techniques depicted in FIG. 4 can also, as described herein, include providing a system, wherein the system includes distinct software modules, each of the distinct software modules being embodied on a tangible computer-readable recordable storage medium. All of the modules (or any subset thereof) can be on the same medium, or each can be on a different medium, for example. The modules can include any or all of the components shown in the figures and/or described herein. In an embodiment of the invention, the modules can run, for example, on a hardware processor. The method steps can then be carried out using the distinct software modules of the system, as described above, executing on a hardware processor. Further, a computer program product can include a tangible computer-readable recordable storage medium with code adapted to be executed to carry out at least one method step described herein, including the provision of the system with the distinct software modules. Additionally, the techniques depicted in FIG. 4 can be implemented via a computer program product that can include computer useable program code that is stored in a computer readable storage medium in a data processing system, and wherein the computer useable program code was downloaded over a network from a remote data processing system. 
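To make the machine-learning step concrete, the following sketch applies one of the techniques named above (a random forest, via scikit-learn, which is assumed to be available) to the step 406 task of regressing irrigated-water amounts on backscatter and weather features. The feature set, units, and training data below are synthetic placeholders, not the patent's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
# Illustrative features: [VV dB, VH dB, rainfall mm, evapotranspiration mm,
# elevation m], mirroring the input features named for models 130/222.
X = np.column_stack([
    rng.uniform(-20, -5, n),    # VV backscatter (dB)
    rng.uniform(-25, -10, n),   # VH backscatter (dB)
    rng.uniform(0, 20, n),      # rainfall (mm)
    rng.uniform(1, 8, n),       # evapotranspiration (mm)
    rng.uniform(100, 500, n),   # elevation (m)
])
# Synthetic target: irrigated water (mm), loosely tied to the features.
y = 2.0 * (X[:, 0] + 20) + 0.5 * X[:, 3] + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict the amount of irrigated water for a new timestamp's features.
print(model.predict([[-12.0, -18.0, 0.0, 4.0, 250.0]]))
```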
Also, in an embodiment of the invention, the computer program product can include computer useable program code that is stored in a computer readable storage medium in a server data processing system, and wherein the computer useable program code is downloaded over a network to a remote data processing system for use in a computer readable storage medium with the remote system. An embodiment of the invention or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and configured to perform exemplary method steps. Additionally, an embodiment of the present invention can make use of software running on a computer or workstation. With reference to FIG. 5, such an implementation might employ, for example, a processor 502, a memory 504, and an input/output interface formed, for example, by a display 506 and a keyboard 508. The term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a CPU (central processing unit) and/or other forms of processing circuitry. Further, the term “processor” may refer to more than one individual processor. The term “memory” is intended to include memory associated with a processor or CPU, such as, for example, RAM (random access memory), ROM (read only memory), a fixed memory device (for example, hard drive), a removable memory device (for example, diskette), a flash memory and the like. In addition, the phrase “input/output interface” as used herein, is intended to include, for example, a mechanism for inputting data to the processing unit (for example, mouse), and a mechanism for providing results associated with the processing unit (for example, printer). The processor 502, memory 504, and input/output interface such as display 506 and keyboard 508 can be interconnected, for example, via bus 510 as part of a data processing unit 512. Suitable interconnections, for example via bus 510, can also be provided to a network interface 514, such as a network card, which can be provided to interface with a computer network, and to a media interface 516, such as a diskette or CD-ROM drive, which can be provided to interface with media 518. Accordingly, computer software including instructions or code for performing the methodologies of the invention, as described herein, may be stored in associated memory devices (for example, ROM, fixed or removable memory) and, when ready to be utilized, loaded in part or in whole (for example, into RAM) and implemented by a CPU. Such software could include, but is not limited to, firmware, resident software, microcode, and the like. A data processing system suitable for storing and/or executing program code will include at least one processor 502 coupled directly or indirectly to memory elements 504 through a system bus 510. The memory elements can include local memory employed during actual implementation of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during implementation. Input/output or I/O devices (including, but not limited to, keyboards 508, displays 506, pointing devices, and the like) can be coupled to the system either directly (such as via bus 510) or through intervening I/O controllers (omitted for clarity). 
Network adapters such as network interface 514 may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters. As used herein, including the claims, a “server” includes a physical data processing system (for example, system 512 as shown in FIG. 5) running a server program. It will be understood that such a physical server may or may not include a display and keyboard. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. It should be noted that any of the methods described herein can include an additional step of providing a system comprising distinct software modules embodied on a computer readable storage medium; the modules can include, for example, any or all of the components detailed herein. The method steps can then be carried out using the distinct software modules and/or sub-modules of the system, as described above, executing on a hardware processor 502. Further, a computer program product can include a computer-readable storage medium with code adapted to be implemented to carry out at least one method step described herein, including the provision of the system with the distinct software modules. In any case, it should be understood that the components illustrated herein may be implemented in various forms of hardware, software, or combinations thereof, for example, application specific integrated circuit(s) (ASICS), functional circuitry, an appropriately programmed digital computer with associated memory, and the like. Given the teachings of the invention provided herein, one of ordinary skill in the related art will be able to contemplate other implementations of the components of the invention. Additionally, it is understood in advance that implementation of the teachings recited herein is not limited to a particular computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any type of computing environment now known or later developed. For example, cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (for example, country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (for example, web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (for example, host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. 
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (for example, mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (for example, cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now to FIG. 6, illustrative cloud computing environment 50 is depicted. As shown, cloud computing environment 50 includes one or more cloud computing nodes 10 with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone 54A, desktop computer 54B, laptop computer 54C, and/or automobile computer system 54N may communicate. Nodes 10 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment 50 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices 54A-N shown in FIG. 6 are intended to be illustrative only and that computing nodes 10 and cloud computing environment 50 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now to FIG. 7, a set of functional abstraction layers provided by cloud computing environment 50 (FIG. 6) is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 7 are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer 60 includes hardware and software components. Examples of hardware components include: mainframes 61; RISC (Reduced Instruction Set Computer) architecture based servers 62; servers 63; blade servers 64; storage devices 65; and networks and networking components 66. In some embodiments, software components include network application server software 67 and database software 68. Virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75. In one example, management layer 80 may provide the functions described below. Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. 
Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 85 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and irrigation-related data determination 96, in accordance with the one or more embodiments of the present invention. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of another feature, step, operation, element, component, and/or group thereof. At least one embodiment of the present invention may provide a beneficial effect such as, for example, improving the accuracy of models that provide estimates of agronomic factors, while also reducing dependence on cost- and maintenance-intensive sensors and monitoring systems. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. 16728266 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Jan 28th, 2020 12:00AM https://www.uspto.gov?id=US11314545-20220426 Predicting transaction outcome based on artifacts in a transaction processing environment Method and apparatus for predicting a transaction's outcome in a transaction processing environment are provided. A transaction request is received by a transaction processing monitor (TPM), where the transaction request comprises a plurality of tags. The TPM identifies historical prior transactions corresponding to the transaction request, and determines a plurality of historical tags associated with those historical transactions. The TPM then determines whether a predicted execution time exceeds the transaction request's timeout, and proceeds accordingly. If the predicted execution time exceeds the timeout value, the transaction is immediately returned as failed. The tags associated with a given transaction request are repeatedly updated as the request traverses the transaction processing system, and the transaction is repeatedly verified to ensure that it can still be completed successfully. 11314545 1. A method, comprising: receiving, by a dispatcher from a client device, a first transaction request at a first time; determining, by the dispatcher, a first timeout value corresponding to the first transaction request; associating, by the dispatcher, the first transaction request with a first plurality of current tags, wherein one of the first plurality of current tags specifies the first timeout value; determining, by the dispatcher, a plurality of historical tags associated with a set of historical transactions corresponding to the first transaction request, wherein each historical tag of the plurality of historical tags comprises (i) a plurality of historical execution times for a plurality of transaction processing monitors (TPMs) and (ii) a last execution time for each of the plurality of TPMs; selecting, by the dispatcher, a first TPM from the plurality of TPMs to execute the first transaction request based on the plurality of historical tags, wherein selecting the first TPM is based on determining that (i) the first TPM has a lowest last execution time among the plurality of TPMs and (ii) a historical minimum execution time associated with the first TPM is less than the first timeout value; sending, by the dispatcher, the first transaction request to the selected first TPM for execution, wherein the first TPM determines a predicted execution time of the first transaction request, based on the lowest last execution time, the historical minimum execution time, and the first plurality of current tags; receiving, by the dispatcher from the first TPM, one of a transaction response that the first TPM completed the execution or an indication that the first transaction request failed to execute, in response to determining that a predicted execution time for the first TPM was greater than the first timeout value, wherein the indication is received, prior to an expiration of the first timeout value, and the indication returns one or more updates to the first plurality of current tags associated with the first transaction request; and responsive to receiving the indication, updating, by the dispatcher, the plurality of historical tags. 2. The method of claim 1, wherein determining the first timeout value comprises determining a value specified by the first transaction request. 3. 
The method of claim 1, wherein: the first transaction request does not specify the first timeout value; and the first timeout value is determined by the dispatcher. 4. The method of claim 3, wherein the first timeout value is further determined based on a type of transaction associated with the first transaction request. 5. The method of claim 1, further comprising: receiving, by the dispatcher, a second transaction request; associating, by the dispatcher, the second transaction request with a second plurality of current tags, wherein one of the second plurality of current tags specifies a second timeout value; and upon determining, by the dispatcher, that a predicted execution time for the second transaction request exceeds the second timeout value, returning, by the dispatcher, an indication that the second transaction request failed to execute. 6. The method of claim 1, further comprising: receiving, by the dispatcher, a response from the first TPM, wherein the response corresponds to a second transaction request and wherein the response is associated with a plurality of response tags; updating, by the dispatcher, the plurality of historical tags based on the plurality of response tags; and returning, by the dispatcher, the response to an entity that sent the second transaction request. 7. The method of claim 6, wherein: the plurality of response tags comprise a new execution time; and updating the plurality of historical tags comprises updating the last execution time of the first TPM based on the new execution time. 8. The method of claim 7, wherein updating the plurality of historical tags further comprises: determining that the new execution time is faster than the historical minimum execution time of the first TPM; and updating the historical minimum execution time based on the new execution time. 9. The method of claim 1, further comprising: receiving, by the dispatcher, from a requesting entity, a second transaction request; associating, by the dispatcher, the second transaction request with a second plurality of current tags, wherein one of the second plurality of current tags specifies a system resource that is required to execute the second transaction request; and upon determining, by the dispatcher, that the system resource is unavailable, returning, by the dispatcher, an indication that the second transaction request failed to execute to the requesting entity. 10. The method of claim 1, further comprising: receiving, by the dispatcher, the first transaction request at a subsequent second time; selecting, by the dispatcher, a second TPM to execute the first transaction request, wherein the second TPM is different from the first TPM; and sending, by the dispatcher, the first transaction request to the second TPM. 11. The method of claim 1, further comprising: prior to selecting the first TPM, executing, by the dispatcher, the first transaction request, wherein executing the first transaction request comprises: beginning to execute the first transaction request; and determining that the first transaction request will need to be sent to another TPM to finish execution. 12. 
A computer program product comprising a computer-readable storage medium having computer-readable program code embodied therewith, the computer-readable program code executable by one or more computer processors to perform an operation, the operation comprising: receiving, by a dispatcher from a client device, a first transaction request at a first time; determining, by the dispatcher, a first timeout value corresponding to the first transaction request; associating, by the dispatcher, the first transaction request with a first plurality of current tags, wherein one of the first plurality of current tags specifies the first timeout value; determining, by the dispatcher, a plurality of historical tags associated with a set of historical transactions corresponding to the first transaction request, wherein each historical tag of the plurality of historical tags comprises (i) a plurality of historical execution times for a plurality of transaction processing monitors (TPMs) and (ii) a last execution time for each of the plurality of TPMs; selecting, by the dispatcher, a first TPM from the plurality of TPMs to execute the first transaction request based on the plurality of historical tags, wherein selecting the first TPM is based on determining that (i) the first TPM has a lowest last execution time among the plurality of TPMs and (ii) a historical minimum execution time associated with the first TPM is less than the first timeout value; sending, by the dispatcher, the first transaction request to the selected first TPM for execution, wherein the first TPM determines a predicted execution time of the first transaction request, based on the lowest last execution time, the historical minimum execution time, and the first plurality of current tags; receiving, by the dispatcher from the first TPM, one of a transaction response that the first TPM completed the execution or an indication that the first transaction request failed to execute, in response to determining that a predicted execution time for the first TPM was greater than the first timeout value, wherein the indication is received, prior to an expiration of the first timeout value, and the indication returns one or more updates to the first plurality of current tags associated with the first transaction request; and responsive to receiving the indication, updating, by the dispatcher, the plurality of historical tags. 13. The computer program product of claim 12, wherein determining the first timeout value comprises determining a value specified by the first transaction request. 14. The computer program product of claim 12, wherein: the first transaction request does not specify the first timeout value; and the first timeout value is determined by the dispatcher. 15. The computer program product of claim 14, wherein the first timeout value is further determined based on a type of transaction associated with the first transaction request. 16. The computer program product of claim 12, the operation further comprising: receiving, by the dispatcher, a second transaction request; associating, by the dispatcher, the second transaction request with a second plurality of current tags, wherein one of the second plurality of current tags specifies a second timeout value; and upon determining, by the dispatcher, that a predicted execution time for the second transaction request exceeds the second timeout value, returning, by the dispatcher, an indication that the second transaction request failed to execute. 17. 
A transaction processing monitor (TPM) dispatcher comprising: a computer processor; and a memory containing a program, which when executed by the computer processor, performs an operation, the operation comprising: receiving a first transaction request from a client device; determining a first timeout value corresponding to the first transaction request; associating the first transaction request with a first plurality of current tags, wherein one of the first plurality of current tags specifies the first timeout value; determining a plurality of historical tags associated with a set of historical transactions corresponding to the first transaction request, wherein each historical tag of the plurality of historical tags comprises (i) a plurality of historical execution times for a plurality of transaction processing monitors (TPMs) and (ii) a last execution time for each of the plurality of TPMs; selecting a first TPM from the plurality of TPMs to execute the first transaction request based on the plurality of historical tags, wherein selecting the first TPM is based on determining that (i) the first TPM has a lowest last execution time among the plurality of TPMs and (ii) a historical minimum execution time associated with the first TPM is less than the first timeout value; sending the first transaction request to the selected first TPM for execution, wherein the first TPM determines a predicted execution time of the first transaction request, based on the lowest last execution time, the historical minimum execution time, and the first plurality of current tags; receiving, from the first TPM, one of a transaction response that the first TPM completed the execution or an indication that the first transaction request failed to execute, in response to determining that a predicted execution time for the first TPM was greater than the first timeout value, wherein the indication is received, prior to an expiration of the first timeout value, and the indication returns one or more updates to the first plurality of current tags associated with the first transaction request; and responsive to receiving the indication, updating the plurality of historical tags. 18. The TPM dispatcher of claim 17, wherein determining the first timeout value comprises determining a value specified by the first transaction request. 19. The TPM dispatcher of claim 17, wherein: the first transaction request does not specify the first timeout value; and the first timeout value is determined by the TPM dispatcher. 20. The TPM dispatcher of claim 19, wherein the first timeout value is further determined based on a type of transaction associated with the first transaction request. 20 CROSS-REFERENCE TO RELATED APPLICATIONS This application is a divisional of co-pending U.S. patent application Ser. No. 15/345,831, filed on Nov. 8, 2016. The aforementioned related patent application is herein incorporated by reference in its entirety. BACKGROUND The present invention relates to transaction processing, and more specifically, to predicting transaction failure in a transaction processing environment. Transaction processing is a form of computer processing where work is divided into transactions. Typically, transactions are indivisible operations, where the entire transaction must either succeed or fail, and distributed transaction processing generally involves executing a transaction across multiple devices.
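For orientation before the detailed description, the artifacts recited in the claims above can be pictured as simple records. The following hypothetical Python sketch (class and field names are illustrative assumptions, not the patent's) models a transaction request carrying current tags and a per-TPM historical tag with minimum and last execution times.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class TransactionRequest:
    transaction_type: str
    # Current tags; one tag specifies the timeout value (milliseconds here).
    tags: Dict[str, float] = field(default_factory=dict)

@dataclass
class TpmHistory:
    """Historical tag for one TPM and one class of transaction."""
    min_execution_ms: float   # historical minimum execution time
    last_execution_ms: float  # most recent execution time

    def record(self, new_ms: float) -> None:
        """Update the historical tag after a completed execution."""
        self.last_execution_ms = new_ms
        if new_ms < self.min_execution_ms:
            self.min_execution_ms = new_ms

request = TransactionRequest("debit", tags={"timeout_ms": 500.0})
history = TpmHistory(min_execution_ms=120.0, last_execution_ms=180.0)
history.record(110.0)
print(history)  # min and last execution times both updated to 110.0
```

The record method mirrors the history-update behavior in the claims: the last execution time is always refreshed, and the historical minimum is lowered only when a new execution is faster.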
As the number of devices and resources required to process a transaction increases, the number of possible failure points likewise increases. Common failure points include network delays while the transaction is traversing the system, dispatcher delays before the transaction can be sent to a particular device for processing, scheduler delays where a transaction processor is overloaded and the transaction sits in an input queue for too long, execution delays where a transaction processor is simply executing slowly, and unavailability of a dependent resource. Generally, a transaction can be associated with a time limit within which it must be processed; if the transaction is not processed within that limit, it is typically aborted. For example, if the transaction suffers delays, e.g., waiting for a resource or TPM to become available, the elapsed time since the transaction was initiated may exceed this timeout. When that happens, the transaction fails, and any changes made are reverted to maintain consistency in the system. As a result, a substantial amount of computer resources and processing time can be wasted on these failed transactions before it is determined that the transaction has timed out or failed. SUMMARY According to one embodiment of the present invention, a transaction request is received at a transaction processing monitor (TPM) from a requesting entity. The transaction request is associated with a plurality of current tags, one of which specifies a timeout value. The TPM identifies historical transactions corresponding to the transaction request, and determines a plurality of historical tags associated with the historical transactions, wherein one of the historical tags specifies a historical minimum execution time. If the TPM determines that a predicted execution time for the transaction request exceeds the timeout value, the current tags are updated to reflect that determination, and an indication that the transaction request failed to execute is returned with the current tags. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS FIG. 1 is a block diagram illustrating a TPM capable of implementing one embodiment discussed herein. FIG. 2 illustrates an environment comprising multiple TPMs that is capable of implementing one embodiment discussed herein. FIG. 3 is a block diagram illustrating a transaction request and TPM, and the historical tags and artifacts associated with one embodiment discussed herein. FIG. 4 is a flow diagram illustrating a method of predicting transaction failure, according to one embodiment described herein. FIG. 5 is a flow diagram illustrating a method of predicting transaction failure based on a predicted execution time exceeding a timeout value for the transaction, according to one embodiment described herein. FIG. 6 is a flow diagram illustrating a method of managing transactions in a transaction processing environment using tags, according to one embodiment described herein. DETAILED DESCRIPTION Transaction processing is a form of computer processing where work is divided into transactions. Typically, transactions are indivisible operations, where the entire transaction must either succeed or fail. Distributed transaction processing involves executing a transaction across multiple devices, referred to herein as transaction processing monitors (TPMs). In a complex transaction processing system, a transaction can often go through multiple TPMs, accessing multiple resources (e.g., databases) along the way.
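The summarized check can be made concrete with a short sketch. The following Python fragment is illustrative only (the patent describes no code; the tag names, units, and function name are assumptions for the example): a TPM compares a predicted execution time, derived from historical tags, against the request's timeout tag, and returns an annotated failure without executing when the request is doomed.

```python
def predict_and_maybe_reject(current_tags: dict, historical_tags: dict):
    """Return (should_execute, tags); reject doomed transactions up front.

    current_tags    -- tags carried by the request, including "timeout" (seconds)
    historical_tags -- tags from comparable past transactions, including
                       "min_execution_time" (historical minimum, in seconds)
    """
    # Simplest predictor: no execution can beat the historical minimum.
    predicted = historical_tags["min_execution_time"]
    if predicted > current_tags["timeout"]:
        # Annotate the tags and fail fast instead of burning resources on an
        # execution that cannot finish before the timeout expires.
        current_tags["failed"] = True
        current_tags["failure_reason"] = "predicted_execution_exceeds_timeout"
        return False, current_tags
    return True, current_tags
```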
These TPMs could be spread across wide geographies and operate in a cloud environment. As the number of TPMs and resources required increases, the number of possible failure points likewise increases. Common failure points include network delays while the transaction is traversing the system, dispatcher delays before the transaction can be sent to a particular TPM to execute it, scheduler delays where a TPM is overloaded and the transaction sits in an input queue for too long, execution delays where a TPM is simply executing slowly, and unavailability of a dependent resource. Generally, transactions are associated with a timeout value. If the transaction suffers delays, e.g., waiting for a resource or TPM to become available, the elapsed time since the transaction was initiated may exceed this timeout. When that happens, the transaction fails, and any changes made are reverted to maintain consistency in the system. Large amounts of computer resources and processing time can be wasted on these failed transactions before it is determined that the transaction has timed out or failed. The present disclosure therefore presents techniques to predetermine a transaction's outcome based on its artifacts in a transaction processing environment, and thereby avoid wasting limited transaction processing system resources. With reference now to FIG. 1, a TPM 101 capable of implementing one embodiment of the present discussion is illustrated. As shown, TPM 101 includes one or more Processor(s) 102, Memory 103, and Network Interface 104. The Processor 102 may be any processor capable of performing the functions described herein. Memory 103 contains Application 105, Historical Artifacts 106, and TPM Records 107. Although in the pictured embodiment Memory 103 is shown as a single entity, Memory 103 may include one or more memory devices having blocks of memory associated with physical addresses, such as random access memory (RAM), read only memory (ROM), flash memory or other types of volatile and/or non-volatile memory. As is discussed in more detail below, TPM 101 is generally configured to receive transaction requests through Network Interface 104 and execute them using Application 105. Prior to executing a transaction request, TPM 101 may be configured to compare the tags associated with the transaction request to Historical Artifacts 106 in order to predict whether the transaction will fail. If so, TPM 101 will immediately return failure rather than waste time or computing resources executing a doomed request. Additionally, Memory 103 may contain one or more TPM Records 107 which each contain data about a respective neighboring TPM. For example, each TPM Record 107 may contain information about a minimum execution time and recent execution times at a neighboring TPM. Each TPM Record 107 may further contain information about whether resources attached to the neighboring TPM, such as a database, are currently available. Additionally, TPM Records 107 can include data about workload at neighboring TPMs. In some embodiments, TPM 101 may only be able to execute a portion of a given transaction request (e.g., due to lacking specific resources needed to execute the entirety of the transaction request). In that case, TPM 101 may be configured to refer to TPM Records 107 to select a neighboring TPM that is capable of continuing the execution. FIG. 2 is a block diagram of an environment capable of implementing one embodiment of the present disclosure.
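To make the contents of TPM Records 107 concrete, here is a minimal sketch of one record, assuming the fields named in the preceding description (minimum and recent execution times, resource availability, and workload); the class and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TPMRecord:
    """One entry of TPM Records 107: data about a neighboring TPM."""
    tpm_id: str
    min_execution_time: float                 # historical minimum, in seconds
    recent_execution_times: list = field(default_factory=list)
    resources_available: dict = field(default_factory=dict)  # e.g. {"db": True}
    queued_transactions: int = 0              # rough workload indicator

    def record_execution(self, elapsed: float) -> None:
        # Keep a rolling history and tighten the historical minimum whenever
        # a comparable transaction completes faster.
        self.recent_execution_times.append(elapsed)
        self.min_execution_time = min(self.min_execution_time, elapsed)
```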
In the depicted embodiment, a Client Device 201 initiates a transaction by sending a transaction request along communication link 201a to a TPM Dispatcher 202. In some embodiments, TPM Dispatcher 202 is a discrete module that is designed to receive transaction requests and select an appropriate TPM to dispatch them to, such as TPM A 203. In such an embodiment, TPM Dispatcher 202 is generally configured to receive a transaction request from Client Device 201, determine the best available TPM to execute the transaction request, and dispatch the transaction request to that TPM. TPM Dispatcher 202 may select the best TPM in a variety of ways, including based on the current workload of each TPM, with a hash-based load balancer, or by any other method of dispatching requests. In one embodiment, TPM Dispatcher 202 maintains a record for each TPM in the system, where each record contains information about that TPM. For example, each record might contain a minimum historical execution time for comparable requests sent to that TPM, as well as a last execution time for the most recent comparable transaction that was dispatched there. In some embodiments, TPM Dispatcher 202 is configured to select a best TPM based on these records, e.g., TPM Dispatcher 202 may select the TPM that has the lowest minimum execution time or the fastest last execution time. In some embodiments, one of the TPMs within the transaction processing environment is configured to also act as TPM Dispatcher 202. In these embodiments, TPM Dispatcher 202 itself is capable of executing at least a portion of a transaction request, and may then select a TPM to continue execution rather than begin execution. In some embodiments, transaction requests are associated with a timeout value that indicates how much time remains until the transaction request times out and fails. For instance, such a timeout value could be set by the Client Device 201 or an application therein. For example, an application running on the Client Device 201 that generates the transaction request may require a response within five seconds, and could specify a timeout value of “5” in a timeout field of the transaction request before sending the transaction request to the TPM Dispatcher 202. In other embodiments, transaction requests arrive at TPM Dispatcher 202 without a specified timeout value, and TPM Dispatcher 202 determines a timeout value to associate with the transaction request. For example, TPM Dispatcher 202 may be configured to associate a particular timeout value with all requests that originate from a particular Client Device 201, a defined group of clients, a type of client, or a location of the Client Device 201, or to use any other method of grouping client devices. Additionally or alternatively, TPM Dispatcher 202 may be configured to determine an appropriate timeout value based on the type of transaction contemplated by the transaction request, the current workload of the system or particular TPMs, or any other method of deciding an appropriate timeout value for a particular transaction request. In some embodiments, TPM Dispatcher 202 is configured to verify that sufficient time remains to assure successful execution of the transaction request before forwarding it to TPM(s) A 203. In embodiments where TPM Dispatcher 202 is itself a TPM capable of executing the request, TPM Dispatcher 202 may likewise be configured to verify that sufficient time remains to execute the transaction before it begins execution.
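The dispatcher's timeout determination just described might look like the following sketch; the policy table, field names, and default value are all assumptions for illustration:

```python
DEFAULT_TIMEOUTS = {      # illustrative policy table, keyed by transaction type
    "payment": 5.0,       # seconds
    "query": 2.0,
}

def determine_timeout(request: dict) -> float:
    """Prefer a client-specified timeout field; otherwise fall back to a
    policy keyed on the transaction type (it could equally be keyed on the
    client, a client group, or the client's location, as described above)."""
    if request.get("timeout") is not None:
        return request["timeout"]
    return DEFAULT_TIMEOUTS.get(request.get("type"), 10.0)  # assumed default
```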
For example, after determining the associated timeout value, TPM Dispatcher 202 can verify that the minimum execution time of the system is faster than the timeout value. If the timeout value is less than the minimum execution time, TPM Dispatcher 202 can immediately return the failure to the Client Device 201. In another embodiment, TPM Dispatcher 202 may be configured to select a best available TPM based on, e.g., the last execution time of each TPM, and to verify that the minimum execution time of the best available TPM is less than the timeout value. In some embodiments, TPM Dispatcher 202 verifies that sufficient time remains by using a predicted execution time. The predicted execution time may be based on any number of factors, including the current workload of the system, the minimum execution time at one or more TPMs, the most recent execution time at one or more TPMs, or any other factors. For example, the TPM Dispatcher 202 may determine that the most recent execution time for a TPM was five seconds, while the timeout value is 4.5 seconds. Rather than simply fail the transaction, the TPM Dispatcher 202 may instead estimate that execution will take approximately, rather than exactly, five seconds. The range of acceptable variation could be defined by a predefined amount of time, e.g., one second, or a predefined percentage, e.g., 10%. Additionally or alternatively, the range of acceptable times could be based on the standard deviation of execution times at the particular TPM. In this example, the TPM Dispatcher 202 may determine that the last execution time was five seconds, and decide that the predicted execution time is between four and six seconds. Thus, the transaction request may be executed or forwarded to a TPM even though the last execution time is greater than the timeout value. Using historical tags to predetermine transaction failure avoids wasting scarce time and computing resources of the transaction processing system and the client. For example, suppose a transaction request has a timeout value of five seconds and TPM Dispatcher 202 determines that it will take at least six seconds to execute it. The transaction request can be immediately returned, rather than attempting to execute it. In prior systems without historical transaction tags and artifacts, the transaction processing system would begin executing the request and would not return failure until five seconds had elapsed, even though the request was doomed to fail. In addition to the benefits to the transaction processing system, this embodiment is beneficial for Client Device 201 because it receives the failed request sooner, and can generate a new request sooner in order to attempt execution again. In some embodiments, if sufficient time remains to execute the request, TPM Dispatcher 202 is configured to update the tags associated with the request before dispatching it. For example, TPM Dispatcher 202 can calculate how much time has elapsed since the transaction request was sent by Client Device 201. TPM Dispatcher 202 can then update the timeout value associated with the transaction request by decreasing the timeout value by the elapsed time. In some embodiments, TPM Dispatcher 202 may further verify that sufficient time remains for execution before sending the transaction request to a TPM. Thus, TPM Dispatcher 202 may update and verify the timeout twice, once upon receiving the request and once just before dispatching it.
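The tolerance-based prediction and the elapsed-time update just described can be sketched as follows (function names and the 10% default are illustrative; with enough history the standard deviation replaces the fixed percentage, per the alternatives above):

```python
import statistics
import time

def predicted_window(last_time: float, history: list,
                     tolerance: float = 0.10) -> tuple:
    """Range around the last execution time: a fixed percentage by default,
    or the standard deviation of past times when enough history exists."""
    spread = last_time * tolerance
    if len(history) >= 2:
        spread = statistics.stdev(history)
    return last_time - spread, last_time + spread

def still_viable(timeout: float, last_time: float, history: list) -> bool:
    low, _ = predicted_window(last_time, history)
    return low <= timeout   # viable if the optimistic end of the range fits

def decrement_timeout(tags: dict, sent_at: float) -> None:
    # Charge elapsed transit/queueing time against the remaining budget
    # before dispatching, as described above.
    tags["timeout"] -= time.time() - sent_at
```

With the text's numbers, a last execution time of five seconds and a one-second band gives a window of four to six seconds, so a 4.5-second timeout is still viable.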
As discussed below, each TPM in the system may perform a similar verification and update before passing the transaction request along, upon receiving the request, or both. Although FIG. 2 illustrates a plurality of TPMs A 203, in some embodiments there is only a single TPM A 203. In one embodiment with a plurality of TPMs A 203, each TPM A 203 is a clone capable of executing the transaction request, and TPM Dispatcher 202 may select a TPM A 203 to execute the request based on workload, last execution time, minimum execution time, or any other method. Each TPM A 203 may have identical hardware to the other TPM A 203, or one or more TPM A 203 may have faster hardware, more memory, or may vary in some other way. Additionally, each TPM A 203 may be a discrete device, or each may operate as an independent discrete module on a single device or distributed across multiple devices. TPM A 203 and TPM Dispatcher 202 may be communicatively linked in any manner. In many embodiments, there are unavoidable and unpredictable transmission delays while a transaction request traverses the various links between TPMs and the TPM Dispatcher 202. Additionally, each TPM may have an input queue where incoming transaction requests are queued to be executed, and further delays could occur while the transaction is waiting to be scheduled and executed. In a preferred embodiment, before beginning execution, TPM A 203 (and all other TPMs that receive a request) verifies that it is possible for the transaction request to be completed successfully. To do so, TPM A 203 may compare the tags associated with the transaction request to its own set of historical artifacts or tags. For example, in one embodiment, TPM A 203 compares the timeout value associated with the transaction with the historical minimum execution time for TPM A 203. If the historical minimum execution time is greater than the current timeout value, the transaction will not begin execution. After each transaction request is completed, TPM A 203 compares the updated tags comprising the execution time with its own historical minimum execution time, and updates its own historical tags or artifacts if the transaction was completed faster than the historical minimum time. In some embodiments, before returning a failure notice, TPM A 203 updates the tags associated with the transaction request to indicate why it failed; for example, the tags may be updated to reflect that the transaction would take too long to complete at the particular TPM A 203. TPM Dispatcher 202 may use this data to adjust its routing patterns, and thereby prevent repeated failures because of timeouts. For example, TPM Dispatcher 202 may send a future transaction request to a different TPM A 203 in the plurality of TPM(s) A 203. In one embodiment, the current transaction tags contain information about which resources will be required for successful completion, for example a database that will be accessed during execution. In this embodiment, TPM A 203 may reject the transaction because it knows that the indicated resource is unavailable. In order to maintain an updated status of dependent resources, TPM A 203 uses updated tags associated with completed (or failed) transactions that are being returned to the Client Device 201, as is discussed in more detail below. In this way, TPM A 203 may recognize that a required resource is unavailable before the execution actually requires it to be accessed. This enables more efficient use of the transaction processing system.
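The TPM-side admission check and the post-completion update of historical artifacts might be sketched as below, reusing the hypothetical TPMRecord from the earlier fragment:

```python
def admit(tags: dict, my_min_execution_time: float) -> bool:
    """Pre-execution check at a TPM: refuse work that cannot finish in time."""
    if my_min_execution_time > tags["timeout"]:
        tags["failed"] = True
        tags["failure_reason"] = "min_execution_time_exceeds_timeout"
        return False
    return True

def on_completed(tags: dict, record: "TPMRecord") -> None:
    # Fold the observed execution time back into this TPM's own artifacts;
    # record_execution tightens the historical minimum when it is beaten.
    record.record_execution(tags["execution_time"])
```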
If a dependent resource is unavailable, TPM A 203 will update the tags associated with the transaction and return it to TPM Dispatcher 202. In this embodiment, TPM Dispatcher 202 will update its own historical records based on these updated tags before returning the failed request to the Client Device 201. For example, TPM Dispatcher 202 may store an indication that the particular resource is currently unavailable. If a transaction request arrives that requires that resource, TPM Dispatcher 202 can immediately return failure based on this data. In this embodiment, it may be necessary to periodically send a dummy request to see if the unavailable resource has become available again. The majority of status messages, however, are sent through the tags associated with each transaction request and response, which greatly reduces the amount of dummy traffic traversing the transaction processing system. In some embodiments, and particularly in complex transaction processing environments, a particular transaction may be sent across multiple TPMs during execution. For example, as illustrated in FIG. 2, TPM A 203 and TPM B 204 may run different applications, and each may be incapable of executing an entire transaction request alone. In such an embodiment, TPM A 203 may begin execution of a transaction, and during execution determine that it has reached a point that it cannot continue to execute the transaction. TPM A 203 may then forward the transaction to one of the plurality of TPM(s) B 204 to be completed. TPM A 203 may select one TPM B 204 in much the same way that TPM Dispatcher 202 selects one of the plurality of TPM(s) A 203, e.g., by comparing historical minimum execution times or last execution times for each TPM B 204. Additionally or alternatively, TPM A 203 may be capable of the entire execution, but may not have access to a required resource such as Database 205, for example because link 203c is unavailable. This would require that the transaction be sent to TPM B 204. Of course, in some embodiments TPM A 203 may be fully capable of executing and completing the request, and may not need to forward it to another TPM at all. In a preferred embodiment, before continuing or beginning execution of a transaction request, TPM B 204 updates and verifies the timeout value as discussed above. For example, TPM B 204 will determine how much time remains until the request times out, and compare this value to TPM B 204's minimum execution time. TPM B 204 may also compare other data in the transaction's tags, such as required resources, to historical tags and artifacts stored by TPM B 204. Additionally, in some embodiments TPM B 204 may be required to send the transaction to yet another TPM to continue execution, and that subsequent TPM would perform the same updating and verification of the transaction's tags before continuing execution. In this way, the timeout value and other tags associated with a given transaction are dynamically and repeatedly updated at every stage of execution, and each TPM independently determines whether the transaction can be completed successfully. If at any point it is determined that the transaction is doomed to fail, it will be returned immediately, thus saving time and system resources. As will be discussed in more detail below in reference to FIG. 3, in one embodiment transaction tags are updated and are useful even after a transaction is completed successfully (or is returned because of a predicted failure). 
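Tracking dependent-resource availability from returning tags, and pre-checking it before execution, could look like the following sketch (the tag keys are assumed; the periodic dummy probe described above is noted but not shown):

```python
def absorb_response_tags(tags: dict, resource_status: dict) -> None:
    # Learn resource availability from the tags on returning responses,
    # rather than from dedicated status traffic.
    for resource, available in tags.get("resources", {}).items():
        resource_status[resource] = available

def precheck_resources(tags: dict, resource_status: dict) -> bool:
    # Reject early if any required resource is known to be down; a periodic
    # dummy request (not shown) would re-probe resources marked unavailable.
    return all(resource_status.get(r, True)
               for r in tags.get("required_resources", []))
```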
For example, if TPM B 204 successfully completes execution, it may update the tags of the transaction to indicate how long it took to execute it, which TPM executed it, a reason for failure, and the like. TPM B 204 will also update its own records indicating how long this most recent execution took, and will update its previous minimum execution time if it was faster. When the transaction is returned to TPM A 203, TPM A 203 can use these tags to update its own records similarly, as will TPM Dispatcher 202. If the transaction failed, each TPM A 203 and TPM Dispatcher 202 may update its artifacts to indicate that a required resource is down, that a particular TPM could not finish in time, or any other relevant information. Finally, the tags can be stripped by TPM Dispatcher 202, and the response can be sent to the Client Device 201. Turning now to FIG. 3, a more detailed illustration of one embodiment of the information stored by each TPM and carried by each transaction request is shown. In the illustrated embodiment, TPM Dispatcher 202 maintains a plurality of Records 304. These Records 304 may also be referred to as historical tags or transaction artifacts. Each Record 304 contains information about a particular TPM 203 in the transaction processing system. Each Record 304 has various Fields 305, including a name or identifier of the respective TPM 203, a minimum historical execution time for the TPM 203, the last execution time, and other workload information. Records 304 may also contain information about the availability or workload of dependent resources for various transactions. In some embodiments, each TPM 203 maintains a similar plurality of Records 307 in order to facilitate routing decisions. In this way, TPM 203 can intelligently route transactions based on the workload, execution times, and availability of neighboring TPMs. Each TPM 203 also maintains a single Record 306, which contains the same information as its corresponding Record 304 in TPM Dispatcher 202. For example, whenever TPM 203 updates its own Record 306, e.g., by updating the last execution time, TPM Dispatcher 202 will update its corresponding Record 304 when the Transaction Response 301b reaches it. In some embodiments, the transaction processing system is capable of handling multiple types of transactions. In such an embodiment, the transaction Records 304, 306, and 307 may further contain data about the type(s) of transaction they refer to, the type(s) of transaction the respective TPM is capable of executing, or similar information. In this way, the TPM Dispatcher 202 and subsequent TPMs can be sure that the data being used to predict failure is accurate based on the type of transaction. For example, if transactions of type A generally take five seconds to execute, and transactions of type B require ten seconds to execute, it is vital that the records are kept distinct for each type of transaction. Otherwise, transactions of type B would almost certainly be allowed to continue execution regardless of how much time remains because transactions of type A have lowered the minimum execution time. Similarly, the last execution time would be rendered useless, as it might apply to an entirely different type of transaction. In the illustrated embodiment, each Transaction Request 301a is associated with a series of tags 302a. As discussed above, these tags include information like the name of the transaction, the associated Client Device 201, a timeout value, required resources, and may include a type of the transaction.
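Keeping the records distinct per transaction type, as the type-A/type-B example above requires, is naturally expressed by keying the record store on both the TPM and the transaction type; this fragment is illustrative and reuses the hypothetical TPMRecord:

```python
# Historical records keyed by (tpm_id, transaction_type), so a fast type-A
# history can never mask a slow type-B prediction, per the example above.
records: dict = {}

def record_for(tpm_id: str, txn_type: str) -> "TPMRecord":
    key = (tpm_id, txn_type)
    if key not in records:
        # min_execution_time starts at infinity until the first observation.
        records[key] = TPMRecord(tpm_id=tpm_id, min_execution_time=float("inf"))
    return records[key]
```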
These tags 302a are dynamically and repeatedly updated at every stage of execution in order to predict whether the transaction will fail or can be completed successfully. After successful execution, TPM 203 updates the transaction tags 302a and attaches them to the corresponding Transaction Response 301b as tags 302b. Similarly, after determining that a transaction will fail, TPM 203 updates the tags 302a to indicate why the transaction is being returned, and attaches them to the response indicating failure. These updated tags 302b reflect the time it took to execute the transaction and other workload and system resource related information. For example, tags 302b may contain an indication of whether a particular resource is available. In a particular embodiment, TPM 203 sends Transaction Response 301b to the entity that sent the corresponding Transaction Request 301a to it. For example, if TPM 203 received Transaction Request 301a from another TPM, the Transaction Response 301b will be sent to that TPM. Likewise, if TPM 203 received the Transaction Request 301a directly from TPM Dispatcher 202, it will send the Transaction Response 301b to TPM Dispatcher 202. In this way, Transaction Response 301b is forwarded along the chain of TPMs that executed it, so that each participating TPM 203 can update its personal Record 306, as well as its plurality of Records 307. When the Transaction Response 301b reaches TPM Dispatcher 202, it similarly updates its records 304 based on the updated tags 302b. Finally, Transaction Response 301b is returned to Client Device 201. In a preferred embodiment, TPM Dispatcher 202 strips the updated tags 302b before returning the Transaction Response 301b, but TPM Dispatcher 202 may also strip only some of the tags 302b, or may leave them all attached to the Transaction Response 301b. FIG. 4 is a flow diagram illustrating an exemplary sequence of events and the entities that complete each operation in one embodiment. At block 410, Client Device 401 or an application thereon generates a transaction request. Client Device 401 sends this request to TPM Dispatcher 402, and at block 411 TPM Dispatcher 402 associates the request with tags, as discussed above. At block 412, TPM Dispatcher 402 determines a set of historical tags corresponding to the transaction. For example, if the transaction processing system is capable of executing multiple types of transactions, TPM Dispatcher 402 will only use tags relevant to this particular type of transaction in order to route the request. At block 413, TPM Dispatcher 402 verifies that sufficient time remains to execute the transaction. In the same block, TPM Dispatcher 402 may also verify that any required resources are available. At block 414, TPM Dispatcher 402 selects a TPM to execute the transaction, based on the current tags and the determined historical tags. TPM Dispatcher 402 then updates the timeout value based on elapsed time at block 415, and sends the request to the selected TPM. At block 416, TPM 403 begins operation on the request and determines that insufficient time remains to execute the transaction before timeout. Additionally or alternatively, TPM 403 may determine that a required resource is unavailable. TPM 403 then updates the transaction tags to indicate the failure at block 417. These updated tags preferably not only indicate that the request will fail, but also include data about why the request would fail. 
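Pulling the pieces together, the FIG. 4 flow (blocks 410-419) might be sketched as below, reusing the helpers from the earlier fragments; attach_tags mirrors tags 302a, strip_tags mirrors the dispatcher's stripping of tags 302b, and send_to_tpm is a hypothetical transport stub, since the patent does not specify a wire protocol:

```python
import math

def attach_tags(request: dict, client_id: str, timeout: float) -> dict:
    # Wrap a raw request with its current tags (302a); fields are illustrative.
    return {"body": request, "tags": {
        "name": request.get("name"), "client": client_id, "timeout": timeout,
        "required_resources": request.get("resources", []),
        "type": request.get("type", ""), "failed": False,
    }}

def strip_tags(response: dict) -> dict:
    # The dispatcher may strip tags 302b before the client sees the response.
    response.pop("tags", None)
    return response

def send_to_tpm(tpm_id: str, wrapped: dict) -> dict:
    # Hypothetical transport stub; a real TPM would execute (or reject) the
    # request here and update tags 302b on the way back.
    return wrapped

def dispatch(request: dict, client_id: str) -> dict:
    wrapped = attach_tags(request, client_id, determine_timeout(request))  # 410-411
    history = record_for("TPM-A", wrapped["tags"]["type"])                 # 412
    known = math.isfinite(history.min_execution_time)
    if known and not admit(wrapped["tags"], history.min_execution_time):   # 413, 416-418
        return strip_tags(wrapped)                                         # 419
    return strip_tags(send_to_tpm("TPM-A", wrapped))                       # 414-415
```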
At block 418, TPM Dispatcher 402 updates its own historical tags with the data provided in the tags by TPM 403. Finally, at block 419, Client Device 401 receives the response indicating that the transaction has failed to execute, and can begin preparing another transaction request to attempt again. Although not illustrated, TPM 403 could of course determine that the transaction request can be completed successfully, and proceed to execute the request. Similarly, TPM 403 may forward the request to another TPM in the process of execution, and that TPM would complete similar steps in execution. FIG. 5 is a flow diagram illustrating a method 500 of implementing one embodiment of the present disclosure. The method begins at block 501 where a TPM receives, from a requesting entity, a transaction request associated with a plurality of current tags, wherein a first one of the plurality of current tags specifies a timeout value. At block 502 the TPM identifies, based on predefined criteria, a set of historical transactions corresponding to the transaction request. At block 503, the TPM determines a plurality of historical tags associated with the set of historical transactions, wherein a first one of the plurality of historical tags specifies a historical minimum execution time. Upon determining, based on the plurality of historical tags and the plurality of current tags, that a predicted execution time for executing the transaction request at the TPM exceeds the timeout value for the transaction request, the TPM updates the plurality of current tags at block 504. Finally, at block 505, the TPM returns an indication that the transaction request failed to execute to the requesting entity. FIG. 6 is a flow diagram illustrating a method 600 of implementing one embodiment of the present disclosure. The method begins at block 601 when a dispatcher receives a transaction request. At block 602, the dispatcher determines a timeout value corresponding to the transaction request. In some embodiments, determining the timeout value comprises identifying a timeout value that the client device itself associated with the transaction request. In other embodiments, the client device did not associate the request with a timeout value, and the dispatcher determines a timeout value on its own. At block 603, the dispatcher associates the transaction request with a plurality of current tags, wherein a first one of the plurality of current tags specifies the timeout value. At block 604, the dispatcher determines a plurality of historical tags associated with a set of historical transactions corresponding to the transaction request, wherein the plurality of historical tags comprises a plurality of historical execution times for a plurality of TPMs. The dispatcher selects a TPM from the plurality of TPMs to execute the transaction request based on the plurality of historical execution times at block 605. At block 606, the dispatcher sends the transaction request to the selected TPM. At block 607, the dispatcher receives, from the selected TPM, an indication that the transaction request failed to execute because a predicted execution time for the selected TPM was greater than the timeout value. Finally, the method 600 ends when the dispatcher updates the plurality of historical tags at block 608. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed.
Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. In the following, reference is made to embodiments presented in this disclosure. However, the scope of the present disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice contemplated embodiments. Furthermore, although embodiments disclosed herein may achieve advantages over other possible solutions or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the scope of the present disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). Aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. 
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. 
These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Embodiments of the invention may be provided to end users through a cloud computing infrastructure. Cloud computing generally refers to the provision of scalable computing resources as a service over a network. More formally, cloud computing may be defined as a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Thus, cloud computing allows a user to access virtual computing resources (e.g., storage, data, applications, and even complete virtualized computing systems) in “the cloud,” without regard for the underlying physical systems (or locations of those systems) used to provide the computing resources. Typically, cloud computing resources are provided to a user on a pay-per-use basis, where users are charged only for the computing resources actually used (e.g. an amount of storage space consumed by a user or a number of virtualized systems instantiated by the user). A user can access any of the resources that reside in the cloud at any time, and from anywhere across the Internet. 
In the context of the present invention, a user may access applications (e.g., executing a transaction request in a distributed transaction processing system) or related data available in the cloud. For example, the transaction processing system could execute on a computing system in the cloud and each TPM could execute in the cloud. In such a case, the TPMs could execute transaction requests in a cloud computing system, and store transaction tags, historical artifacts, and related data at a storage location in the cloud. Doing so allows a user to access this information from any computing system attached to a network connected to the cloud (e.g., the Internet). While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. 16774134 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
nyse:ibm IBM Apr 26th, 2022 12:00AM Jan 8th, 2020 12:00AM https://www.uspto.gov?id=US11314483-20220426 Bit-serial computation with dynamic frequency modulation for error resiliency in neural network A system is provided for error resiliency in a bit serial computation. A delay monitor enforces an overall processing duration threshold for bit-serial processing all iterations for the bit serial computation, while determining a threshold for processing each iteration. At least some iterations correspond to a respective bit in an input bit sequence. A clock generator generates a clock signal for controlling a performance of the iterations. Each of a plurality of iteration units performs a particular iteration, starting with a Most Significant Bit (MSB) of the input bit sequence and continuing in descending bit significant order, and by selectively increasing the threshold for at least one iteration while skipping from processing at least one subsequent iteration whose iteration-level processing duration exceeds a remaining amount of an overall processing duration for all iterations, responsive to the at least one iteration requiring more time to complete than a current value of the threshold. 11314483 1. A computation system for providing error resiliency in a bit serial computation, comprising: a delay monitor configured to enforce an overall processing duration threshold for bit-serial processing all of a plurality of iterations for the bit serial computation, while dynamically determining a dynamically variable iteration-level processing duration threshold for processing each of the plurality of iterations, at least some of the plurality of iterations corresponding to a respective bit in an input bit sequence; a clock generator, operatively coupled to the delay monitor, configured to generate a clock signal for controlling a performance of the plurality of iterations; and a plurality of iteration units, each operatively coupled to the clock generator and configured to perform a particular one of the plurality of iterations, starting with a Most Significant Bit (MSB) of the input bit sequence and continuing in descending bit significant order, and by selectively increasing the dynamically variable iteration-level processing duration threshold for at least one of the plurality of iterations while skipping from processing at least one subsequent one of the plurality of iterations whose iteration-level processing duration exceeds a remaining amount of an overall processing duration for all of the plurality of iterations, responsive to the at least one of the plurality of iterations requiring more time to complete than a current value of the dynamically variable iteration-level processing duration threshold. 2. The computation system of claim 1, wherein the bit-serial computation is a dot product computation. 3. The computation system of claim 1, wherein the at least one subsequent one of the plurality of iterations is skipped responsive to processing under noisy conditions. 4. The computation system of claim 1, wherein each of the plurality of iteration units comprises a register. 5. The computation system of claim 1, wherein each of the plurality of iteration units comprises an adder tree. 6. The computation system of claim 1, wherein each of the plurality of iteration units comprises a set of input multipliers, each for receiving a respective bit of the input bit sequence. 7. 
The computation system of claim 6, wherein the set of input multipliers multiply the respective bit of the input bit sequence by a respective kernel weight involved in the dot product computation. 8. The computation system of claim 1, wherein the overall processing duration threshold is kept fixed under all operating conditions. 9. The computation system of claim 1, wherein the plurality of iteration units comprises more units than elements of an input vector, such that a subset of the plurality of iteration units are used, the subset having a number of members equal to a number of elements of the input vector. 10. The computation system of claim 1, wherein the dot product computation is performed for a neural network. 11. The computation system of claim 1, wherein as a default setup prior to any adjustment of the dynamically variable iteration-level processing duration threshold, the overall processing duration threshold is equal to the product of B and the dynamically variable iteration-level processing duration threshold, wherein B is an integer representing a number of input bits of the input bit sequence. 12. A method for providing error resiliency in a bit-serial computation, comprising: enforcing, by a delay monitor, an overall processing duration threshold for bit-serial processing all of a plurality of iterations for the bit serial computation while dynamically determining a dynamically variable iteration-level processing duration threshold for processing each of the plurality of iterations, at least some of the plurality of iterations corresponding to a respective bit in an input bit sequence; generating, by a clock generator operatively coupled to the delay monitor, a clock signal for controlling a performance of the plurality of iterations; and performing, by each of a plurality of iteration units operatively coupled to the clock generator, a particular one of the plurality of iterations, starting with a Most Significant Bit (MSB) of the input bit sequence and continuing in descending bit significant order, and by selectively increasing the dynamically variable iteration-level processing duration threshold for at least one of the plurality of iterations while skipping from processing at least one subsequent one of the plurality of iterations whose iteration-level processing duration exceeds a remaining amount of an overall processing duration for all of the plurality of iterations, responsive to the at least one of the plurality of iterations requiring more time to complete than a current value of the dynamically variable iteration-level processing duration threshold. 13. The method of claim 12, wherein the bit-serial computation is a dot product computation. 14. The method of claim 12, wherein the at least one subsequent one of the plurality of iterations is skipped responsive to processing under noisy conditions. 15. The method of claim 12, wherein each of the plurality of iteration units comprises a register. 16. The method of claim 12, wherein each of the plurality of iteration units comprises an adder tree. 17. The method of claim 12, wherein each of the plurality of iteration units comprises a set of input multipliers, each for receiving a respective bit of the input bit sequence. 18. The method of claim 17, wherein the set of input multipliers multiply the respective bit of the input bit sequence by a respective kernel weight involved in the dot product computation. 19. 
The method of claim 12, wherein the overall processing duration threshold is kept fixed under all operating conditions. 20. The method of claim 12, wherein the plurality of iteration units comprises more units than elements of an input vector, such that a subset of the plurality of iteration units are used, the subset having a number of members equal to a number of elements of the input vector. 20 BACKGROUND The present invention generally relates to artificial intelligence, and more particularly to a bit-serial computation with dynamic frequency modulation for error resiliency in a neural network. Conventional Dynamic Voltage Frequency Modulation (DVFM) techniques can be employed to guarantee the correctness of computation under supply noise by providing sufficient supply voltage or lowering the clock frequency, but this causes significant penalties in energy and delay efficiency. Thus, there is a need for an improved dynamic frequency modulation technique. SUMMARY According to an aspect of the present invention, a computation system is provided that, in turn, provides error resiliency in a bit serial computation. The computation system includes a delay monitor configured to enforce an overall processing duration threshold for bit-serial processing all of a plurality of iterations for the bit serial computation, while dynamically determining a dynamically variable iteration-level processing duration threshold for processing each of the plurality of iterations. At least some of the plurality of iterations correspond to a respective bit in an input bit sequence. The computation system further includes a clock generator, operatively coupled to the delay monitor, configured to generate a clock signal for controlling a performance of the plurality of iterations. The computation system also includes a plurality of iteration units. Each of the plurality of iteration units is operatively coupled to the clock generator and configured to perform a particular one of the plurality of iterations, starting with a Most Significant Bit (MSB) of the input bit sequence and continuing in descending bit significant order, and by selectively increasing the dynamically variable iteration-level processing duration threshold for at least one of the plurality of iterations while skipping from processing at least one subsequent one of the plurality of iterations whose iteration-level processing duration exceeds a remaining amount of an overall processing duration for all of the plurality of iterations, responsive to the at least one of the plurality of iterations requiring more time to complete than a current value of the dynamically variable iteration-level processing duration threshold. According to another aspect of the present invention, a method is provided for providing error resiliency in a bit-serial computation. The method includes enforcing, by a delay monitor, an overall processing duration threshold for bit-serial processing all of a plurality of iterations for the bit serial computation while dynamically determining a dynamically variable iteration-level processing duration threshold for processing each of the plurality of iterations. At least some of the plurality of iterations correspond to a respective bit in an input bit sequence. The method further includes generating, by a clock generator operatively coupled to the delay monitor, a clock signal for controlling a performance of the plurality of iterations. 
The method also includes performing, by each of a plurality of iteration units operatively coupled to the clock generator, a particular one of the plurality of iterations, starting with a Most Significant Bit (MSB) of the input bit sequence and continuing in descending bit significant order, and by selectively increasing the dynamically variable iteration-level processing duration threshold for at least one of the plurality of iterations while skipping from processing at least one subsequent one of the plurality of iterations whose iteration-level processing duration exceeds a remaining amount of an overall processing duration for all of the plurality of iterations, responsive to the at least one of the plurality of iterations requiring more time to complete than a current value of the dynamically variable iteration-level processing duration threshold. These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS The following description will provide details of preferred embodiments with reference to the following figures wherein: FIG. 1 is a block diagram showing an exemplary processing system, in accordance with an embodiment of the present invention; FIG. 2 is a block diagram showing an exemplary bit-serial computation computer with dynamic frequency modulation for error resiliency in a neural network, in accordance with an embodiment of the present invention; FIG. 3 shows an exemplary iteration unit included in the bit serial computer of FIG. 2, in accordance with an embodiment of the present invention; FIG. 4 is a timing diagram showing a bit-serial modulation technique and a dynamic frequency modulation technique, in accordance with an embodiment of the present invention; FIG. 5 is a flow diagram showing a method for providing error resiliency in a bit-serial computation, in accordance with an embodiment of the present invention; FIG. 6 is a block diagram showing an exemplary bit-serial processing under normal conditions, in accordance with an embodiment of the present invention; and FIG. 7 is a block diagram showing an exemplary bit-serial processing under noisy conditions, in accordance with an embodiment of the present invention. DETAILED DESCRIPTION Embodiments of the present invention are directed to bit-serial computation with dynamic frequency modulation for providing error resiliency in a neural network. In an embodiment, the bit-serial computation is a dot product computation. However, the present invention can be applied to other computations, given the teachings of the present invention provided herein. Types of neural networks to which embodiments of the present invention can be applied include, but are not limited to, Resnet, Alexnet, LSTM, Googlenet, Mobilenet, etc. Basically, any neural network that performs a dot product computation can utilize aspects of the present invention. Embodiments of the present invention provide power and speed benefits over conventional approaches when noise sources are present (e.g., temperature variation, chip-to-chip variation, power fluctuation, etc.). For example, embodiments of the present invention can drastically reduce the margin of supply voltage (e.g., 1V→0.8 V) or frequency by avoiding the voltage guard band (0.2V), thus providing energy and delay savings. 
Moreover, embodiments of the present invention can better deal with errors that can result from noise. For example, even under very aggressive power or temperature noise sources, embodiments of the present invention compute the important information (Most Significant Bit) first, thus ensuring that this information is preserved while offering potential energy savings over the prior art. FIG. 1 is a block diagram showing an exemplary processing system 100, in accordance with an embodiment of the present invention. The processing system 100 includes a set of processing units (e.g., CPUs) 101, a set of GPUs 102, a set of memory devices 103, a set of communication devices 104, and a set of peripherals 105. The CPUs 101 can be single or multi-core CPUs. The GPUs 102 can be single or multi-core GPUs. The one or more memory devices 103 can include caches, RAMs, ROMs, and other memories (flash, optical, magnetic, etc.). The communication devices 104 can include wireless and/or wired communication devices (e.g., network (e.g., WIFI, etc.) adapters, etc.). The peripherals 105 can include a display device, a user input device, a printer, an imaging device, and so forth. Elements of processing system 100 are connected by one or more buses or networks (collectively denoted by the figure reference numeral 110). In an embodiment, memory devices 103 can store specially programmed software modules to transform the computer processing system into a special purpose computer configured to implement various aspects of the present invention. In an embodiment, special purpose hardware (e.g., Application Specific Integrated Circuits, Field Programmable Gate Arrays (FPGAs), and so forth) can be used to implement various aspects of the present invention. The memory device 103 includes trained neural network kernels 103B. Processing system 100 further includes a bit-serial computer 166 with dynamic frequency modulation for providing error resiliency in the inference computation of a neural network. Neural network inference (classification) computation is composed of many dot products $\vec{X} \cdot \vec{W}$, where $\vec{X}$ is an activation (input to the system) and $\vec{W}$ is a kernel, which is obtained from training. Thus, $\vec{X}$ is a variable input to the hardware system, but $\vec{W}$ can be stored in 103B before it is used for inference (classification) computation. In addition, the inference computation itself is performed in block 166 whereas 103B simply stores the kernels needed for the computation. Thus, the error resiliency is achieved in block 166. In an embodiment, the bit-serial computer 166 includes a logic circuit 166A. Of course, the processing system 100 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in processing system 100, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations can also be utilized. These and other variations of the processing system 100 are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein.
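As a toy illustration of inference as many $\vec{X} \cdot \vec{W}$ products, with $\vec{W}$ fixed ahead of time as 103B stores it (the numbers below are made up):

```python
import numpy as np

W = np.array([0.5, -1.0, 2.0])   # trained kernel, stored ahead of time (like 103B)
X = np.array([3.0, 1.0, 0.5])    # activation arriving at inference time
print(float(X @ W))              # 0.5*3 - 1.0*1 + 2.0*0.5 = 1.5
```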
Of course, the processing system 100 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in processing system 100, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations, can also be utilized. These and other variations of the processing system 100 are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein.

Moreover, it is to be appreciated that the various elements and steps described below with respect to the figures may be implemented, in whole or in part, by one or more of the elements of system 100.

As employed herein, the term “hardware processor subsystem” or “hardware processor” can refer to a processor, memory, software, or combinations thereof that cooperate to perform one or more specific tasks. In useful embodiments, the hardware processor subsystem can include one or more data processing elements (e.g., logic circuits, processing circuits, instruction execution devices, etc.). The one or more data processing elements can be included in a central processing unit, a graphics processing unit, and/or a separate processor- or computing element-based controller (e.g., logic gates, etc.). The hardware processor subsystem can include one or more on-board memories (e.g., caches, dedicated memory arrays, read only memory, etc.). In some embodiments, the hardware processor subsystem can include one or more memories that can be on or off board or that can be dedicated for use by the hardware processor subsystem (e.g., ROM, RAM, basic input/output system (BIOS), etc.). In some embodiments, the hardware processor subsystem can include and execute one or more software elements. The one or more software elements can include an operating system and/or one or more applications and/or specific code to achieve a specified result. In other embodiments, the hardware processor subsystem can include dedicated, specialized circuitry that performs one or more electronic processing functions to achieve a specified result. Such circuitry can include one or more application-specific integrated circuits (ASICs), FPGAs, and/or PLAs. These and other variations of a hardware processor subsystem are also contemplated in accordance with embodiments of the present invention.

FIG. 2 is a block diagram showing an exemplary bit-serial computation computer 200 with dynamic frequency modulation for error resiliency in a neural network, in accordance with an embodiment of the present invention. FIG. 3 shows an exemplary iteration unit 230 included in the bit-serial computer 200 of FIG. 2, in accordance with an embodiment of the present invention.

Referring to FIGS. 2 and 3, the computer 200 includes a delay monitor 210, a clock generator 220, and a set of iteration units 230 (formed of iteration units 230A-230L). The set of iteration units performs 8 (B=8) iterations, each corresponding to a respective one of eight bit positions. The clock generator 220 is operatively coupled to the delay monitor 210 and the set of iteration units 230. Each of the iteration units 230 receives an input $\vec{X}$ and outputs $\vec{X} \cdot \vec{W}$, where $\vec{X} = [X_1, X_2, \ldots, X_N]$, $\vec{W} = [W_1, W_2, \ldots, W_N]$, and $\vec{X} \cdot \vec{W} = X_1 W_1 + X_2 W_2 + \cdots + X_N W_N$. The computation $\vec{X} \cdot \vec{W}$ is processed in a bit-serial fashion, e.g., the 0th bit position of $\vec{X}$'s elements ($X_{n0}$, where $n = 1, 2, \ldots, N$) is processed at the first cycle (cycle 1), the 1st bit position ($X_{n1}$) is processed at the second cycle (cycle 2), and so on until the $(B-1)$-th bit position ($X_{n(B-1)}$) is processed at the $B$-th cycle (cycle B). However, for the reasons mentioned herein, processing starts by taking the MSB first and proceeding in descending order, not the LSB first and proceeding in ascending order. Hence, in an embodiment, the present invention is performed in a task lacking redundant computations.
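Functionally, one pass of an iteration unit over the B bit positions computes the following, shown here as a minimal Python sketch. The routine name, the use of NumPy, and the assumption of unsigned B-bit integer activations are illustrative choices, not taken from the patent.

```python
import numpy as np

def bit_serial_dot_product(X, W, B=8):
    """Compute the dot product X·W by processing one bit position of X per
    cycle, starting from the MSB (bit B-1) and descending to the LSB (bit 0).

    X: unsigned B-bit integer activations (illustrative assumption).
    W: kernel weights, known in advance (cf. trained kernels 103B).
    """
    X = np.asarray(X, dtype=np.int64)
    W = np.asarray(W, dtype=np.int64)
    S = 0  # running result, as held in register 236
    for b in range(B - 1, -1, -1):   # MSB first, descending bit significance
        bits = (X >> b) & 1          # b-th bit of every element of X
        s_b = int(np.dot(bits, W))   # multipliers 231A-N plus adder tree 234
        S = 2 * S + s_b              # multiplier 237 (x2) and adder 235
    return S

# Sanity check: matches the ordinary dot product.
assert bit_serial_dot_product([5, 3, 7], [2, 4, 1]) == 5*2 + 3*4 + 7*1
```

The nested form S = 2·S + s_b is what makes MSB-first evaluation possible: each already-accumulated partial sum is simply doubled as the next lower bit position arrives.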
Each iteration unit 230 includes multipliers 231A-N, an adder tree 234, an adder 235, a register 236, and a multiplier 237. In an embodiment, each iteration unit includes and/or is otherwise implemented by a logic circuit. The multiplier 231A multiplies a first input element's b-th bit ($X_{1b}$) by $W_1$ at the $(b+1)$-th cycle. The multiplier 231B multiplies a second input element's b-th bit ($X_{2b}$) by $W_2$ at the $(b+1)$-th cycle. The multiplier 231N multiplies an N-th input element's b-th bit ($X_{Nb}$) by $W_N$ at the $(b+1)$-th cycle. The adder tree 234 adds the outputs of the multipliers 231A-N to output $s_b$. The multiplier 237 multiplies a fixed input of 2 by an output of the register 236 to account for bit position, e.g., $S_3 = 2^3 s_3 + 2^2 s_2 + 2 s_1 + s_0 = s_0 + 2(s_1 + 2(s_2 + 2 s_3))$. The adder 235 adds $s_b$ to the output of the multiplier 237. The register 236 is responsive to a clock signal CLK provided by the clock generator 220 and outputs $S_b$ (i.e., the result of $\vec{X} \cdot \vec{W}$ at the end of B such cycles).

FIG. 4 is a timing diagram showing a bit-serial modulation technique 410 and a dynamic frequency modulation technique 420, in accordance with an embodiment of the present invention. With respect to the timing, the following applies (a software sketch of this policy follows the list):

1. Set up $T_L$ and $T_G$ without excessive margin (e.g., $T_L$ is 5% larger than the worst-case delay of block 230, to barely avoid a timing error), where $T_L$ is the delay to compute a single bit and $T_G$ is the delay for the entire B-bit processing, i.e., $T_G = B \cdot T_L$ at the default setup, where $B$ is the number of bits of the input bit sequence $\vec{X}$.

2. Equip on-chip delay monitor circuitry to decide the proper $T_L$ dynamically, obtaining $T'_L$. For example, $T'_L$ is decided to ensure that at least the MSB is processed, and preferably more bits than just the MSB, while ideally dropping no more than one lower-significance bit.

3. On the other hand, $T_G$ is set to a fixed amount of time to guarantee the real-time application requirements (e.g., the total processing time of the bit-serial computation).

4. If $T_L$ becomes large due to many noise sources, compute as many bits as possible within the delay of $T_G$, starting from the MSB, but give up the LSB bits if needed.

In this way, the system does not break down drastically when the delay is increased due to the noise; rather, the accuracy is gracefully degraded by quantization noise instead of timing error. For example, consider the case where B=4 as shown in FIG. 4. $T'_L$ can be modified to ensure the MSB can be processed, then modified again to ensure the 2nd MSB can be processed, and so forth, and these modifications may leave not enough time remaining to complete, e.g., the LSB and possibly the second LSB. However, it is to be appreciated that the most important information (the MSB(s)) is assuredly processed while the least important information (the LSB(s)) is dropped.
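The following is a minimal Python sketch of items 1-4 above and of the contrast between normal and noisy operation: an MSB-first loop consults a measured per-iteration delay (standing in for $T'_L$ as reported by the delay monitor 210) and drops the remaining LSB iterations once the fixed budget $T_G$ would be exceeded. The delay model, function name, and numeric values are illustrative assumptions; actual embodiments realize this in hardware (blocks 210, 220, and 230), not software. It builds on the bit-serial routine sketched above.

```python
import numpy as np

def bit_serial_with_budget(X, W, B, cycle_delays, T_G):
    """MSB-first bit-serial dot product that drops remaining LSB iterations
    once the next iteration would overrun the fixed overall budget T_G.

    cycle_delays: measured per-iteration delays (stand-in for the T'_L values
                  reported by delay monitor 210); illustrative model only.
    """
    X = np.asarray(X, dtype=np.int64)
    W = np.asarray(W, dtype=np.int64)
    S, elapsed, dropped = 0, 0.0, 0
    for i, b in enumerate(range(B - 1, -1, -1)):  # MSB first
        T_L_prime = cycle_delays[i]               # noise can raise T'_L
        if elapsed + T_L_prime > T_G:             # budget would be exceeded:
            dropped = b + 1                       # give up bit b and below
            S <<= b + 1                           # keep processed bits' weight
            break
        bits = (X >> b) & 1
        S = 2 * S + int(np.dot(bits, W))
        elapsed += T_L_prime
    return S, dropped

X, W, B = [5, 3, 7], [2, 4, 1], 4
# Normal conditions (cf. FIG. 6): every cycle fits, all B bits processed.
print(bit_serial_with_budget(X, W, B, [1, 1, 1, 1], T_G=4))  # (29, 0)
# Noisy conditions (cf. FIG. 7): slow cycles exhaust T_G, the two LSBs are
# dropped, and the result is quantized rather than corrupted by timing error.
print(bit_serial_with_budget(X, W, B, [1, 2, 2, 2], T_G=4))  # (12, 2)
```

Under the noisy delays, the sketch returns a result quantized to the surviving high-order bits rather than a timing-corrupted value, mirroring the graceful degradation described above.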
FIG. 5 is a flow diagram showing a method 500 for providing error resiliency in a bit-serial (e.g., a dot product) computation, in accordance with an embodiment of the present invention.

At block 510, enforce, by a delay monitor, an overall processing duration threshold for bit-serially processing all of a plurality of iterations for the dot product computation, while dynamically determining a dynamically variable iteration-level processing duration threshold for processing each of the plurality of iterations. At least some of the plurality of iterations correspond to a respective bit in an input bit sequence. It is “at least some” because some iterations may be dropped from being performed, as described below, thus saving the calculations that would otherwise be performed for them.

At block 520, generate, by a clock generator operatively coupled to the delay monitor, a clock signal for controlling a performance of the plurality of iterations.

At block 530, perform, by each of a plurality of iteration units operatively coupled to the clock generator, a particular one of the plurality of iterations, starting with a Most Significant Bit (MSB) of the input bit sequence and continuing in descending bit-significance order, ideally to a Least Significant Bit (LSB) of the input bit sequence, but selectively increasing the dynamically variable iteration-level processing duration threshold for at least one of the plurality of iterations while skipping the processing of at least one subsequent one of the plurality of iterations whose iteration-level processing duration exceeds a remaining amount of an overall processing duration for all of the plurality of iterations, responsive to the at least one of the plurality of iterations requiring more time to complete than a current value of the dynamically variable iteration-level processing duration threshold.

FIG. 6 is a block diagram showing an exemplary bit-serial processing 600 under normal conditions, in accordance with an embodiment of the present invention. FIG. 7 is a block diagram showing an exemplary bit-serial processing 700 under noisy conditions, in accordance with an embodiment of the present invention. As can be seen when comparing FIG. 6 to FIG. 7, under normal conditions (FIG. 6) all bits are serially processed (see also FIG. 4), while under noisy conditions (FIG. 7) some of the least significant bits are not processed (i.e., are dropped) while the most important information (i.e., the MSB(s)) is preserved (see also FIG. 5).

The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. 
These computer readable program instructions may be provided to a processor of a computer or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

Reference in the specification to “one embodiment” or “an embodiment” of the present invention, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment. However, it is to be appreciated that features of one or more embodiments can be combined given the teachings of the present invention provided herein.

It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended for as many items as are listed.

The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Having described preferred embodiments of a system and method (which are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons skilled in the art in light of the above teachings. It is therefore to be understood that changes may be made in the particular embodiments disclosed which are within the scope of the invention as outlined by the appended claims. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims. 16737440 international business machines corporation USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 08:56AM Apr 27th, 2022 08:56AM IBM Technology Software & Computer Services
