Canon

- NYSE:CAJ
Last Updated 2022-06-27

Patent Grants Data

Patents granted to organizations.
Ticker Symbol | Entity Name | Publication Date | Filing Date | Patent ID | Invention Title | Abstract | Patent Number | Claims | Number of Claims | Description | Application Number | Assignee | Country | Kind Code | Kind Code Description | URL | Classification Code | Length of Grant | Date Added | Date Updated | Sector | Industry
nyse:caj Canon Apr 26th, 2022 12:00AM Nov 27th, 2019 12:00AM https://www.uspto.gov?id=US11312592-20220426 Sheet stacking apparatus and image forming apparatus Provided is a sheet stacking apparatus including an aligning member lifting-lowering device configured to lift and lower aligning members between an aligning position where the aligning members abut to and align the sheets and a retreating position where the aligning members are lifted from an upper surface of the sheets, an aligning member moving device configured to move the aligning members in a sheet width direction, and a stack tray lifting-lowering device configured to lift and lower a stack tray based on a detection result of a sheet surface detecting device. A controller lifts the aligning members to the retreating position with the aligning member lifting-lowering device after the sheets stacked on the stack tray are aligned at the aligning position, and stops lifting-lowering operation of the stack tray lifting-lowering device while the aligning members are moved in the sheet width direction with the aligning member moving device. 11312592 1. 
A sheet stacking apparatus comprising: a conveyance device configured to convey sheets in a predetermined sheet conveyance direction; a stack tray on which sheets conveyed by the conveyance device are to be stacked; a pair of aligning members configured to abut to the sheets stacked on the stack tray in a sheet width direction intersecting with the sheet conveyance direction and to align the sheets; an aligning member lifting-lowering device configured to lift and lower the aligning members between an aligning position where the aligning members abut to and align the sheets and a retreating position where the aligning members are lifted from an upper surface of the sheets; an aligning member moving device configured to move the aligning members in the sheet width direction; a sheet surface detecting device configured to detect an uppermost surface position of the sheets stacked on the stack tray; a stack tray lifting-lowering device configured to lift and lower the stack tray based on a detection result of the sheet surface detecting device; and a controller configured to control the aligning member lifting-lowering device, the aligning member moving device, and the stack tray lifting-lowering device, wherein in a case that a subsequent sheet subsequent to the sheets is aligned at another aligning position different from the aligning position of the sheets in the sheet width direction, the controller lifts the aligning members to the retreating position with the aligning member lifting-lowering device after the sheets stacked on the stack tray are aligned at the aligning position, lowers the stack tray with the stack tray lifting-lowering device, and lifts the stack tray with the stack tray lifting-lowering device after movement of the aligning members to the another aligning position of the subsequent sheet with the aligning member moving device is completed. 2. 
The sheet stacking apparatus according to claim 1, wherein the controller stops lifting-lowering operation of the stack tray lifting-lowering device in a case that a number of sheets to be aligned at a predetermined aligning position is equal to or larger than a predetermined number. 3. The sheet stacking apparatus according to claim 1, wherein the controller starts lifting-lowering operation of the stack tray lifting-lowering device after the aligning members are lifted from the aligning position to the retreating position and are moved to the another aligning position of the subsequent sheet with the aligning member moving device. 4. The sheet stacking apparatus according to claim 1, wherein the controller stops lifting-lowering operation with the stack tray lifting-lowering device while the aligning members are moved to the retreating position with the aligning member lifting-lowering device, and starts the lifting-lowering operation with stack tray lifting-lowering device again after the aligning members are moved to the another aligning position of the subsequent sheet with the aligning member moving device. 5. The sheet stacking apparatus according to claim 1, wherein the controller varies a retreating amount of the pair of aligning members in a height direction in accordance with a lifting-lowering amount of the stack tray. 6. The sheet stacking apparatus according to claim 1, further comprising: a shift device configured to move the sheets in the sheet width direction intersecting with the conveyance direction and arranged at a sheet conveyance path leading to the stack tray, wherein the shift device stops detection of the uppermost surface of the sheets with the sheet surface detecting device after a predetermined time elapses from completion of sheet movement with the shift device. 7. 
An image forming apparatus including the sheet stacking apparatus according to claim 1 arranged in an image forming apparatus main body in which an image is formed on the sheets. 8. The sheet stacking apparatus according to claim 1, wherein the sheet surface detecting device is a sensor having an optical axis configured to detect the uppermost surface position of the sheets stacked on the stack tray by blocking the optical axis, and in the case that the subsequent sheet subsequent to the sheets is aligned at the another aligning position different from the aligning position of the sheets in the sheet width direction, the controller lifts the aligning members to the retreating position with the aligning member lifting-lowering device after the sheets stacked on the stack tray are aligned at the aligning position, lowers the stack tray with the stack tray lifting-lowering device until the optical axis appears, and lifts the stack tray with the stack tray lifting-lowering device after movement of the aligning members to the another aligning position of the subsequent sheet with the aligning member moving device is completed.

Number of Claims: 8

CROSS-REFERENCE TO RELATED APPLICATION

The present application is based on and claims priority of Japanese Patent Application No. 2018-247123 filed on Dec. 28, 2018, the disclosure of which is incorporated herein.

TECHNICAL FIELD

The present invention relates to a sheet stacking apparatus which aligns sheets in a sheet width direction and stacks the sheets on a stack tray, and to an image forming apparatus.

BACKGROUND ART

Conventionally, there has been known a sheet stacking apparatus capable of aligning sheets discharged onto a stack tray in a sheet width direction intersecting with a sheet discharging direction after an image is formed by an image forming apparatus or the like (e.g., Japanese Unexamined Patent Application Publication No. 2013-230891). As illustrated in FIGS. 
20A to 20D, the sheet stacking apparatus disclosed in Japanese Unexamined Patent Application Publication No. 2013-230891 includes a pair of aligning members 1519 movable in the width direction above a single or a plurality of stack trays 1515. The sheet stacking apparatus moves aligning members 1519a, 1519b in the sheet width direction when sheets are discharged onto the stack tray 1515, and aligns the sheets by abutting the aligning members 1519a, 1519b to both ends of the sheets in the width direction. In an apparatus performing sorting processing on sheets at the time of aligning the sheets in the width direction as above, alignment in the width direction is performed by fixing one aligning member 1519a as a reference side for alignment and moving the other aligning member 1519b in the width direction so that the sheets abut to the aligning member 1519a. When a position of the aligning members 1519 for the sorting processing is changed after performing the sorting processing, to switch the reference side for alignment, the aligning members 1519 are retreated upward from an aligning position of the sheets and moved in that state to a reception position for subsequent sheets in the sheet width direction. Such an apparatus prevents contact of the aligning members 1519 with stacked sheets during movement by moving the aligning members 1519 only after they are once retreated upward. The stack tray 1515 is arranged so as to be capable of being lifted and lowered, and generally, the stack tray 1515 is lowered as the number of stacked sheets increases. A sheet detecting device for detecting the height of the uppermost sheet surface is arranged in the vicinity of a sheet discharge port. By continuously monitoring the height of the stacked sheet surface with the sheet detecting device and repeatedly lifting and lowering the stack tray 1515 in accordance therewith, subsequent sheets can be received at an optimum height position.

SUMMARY OF THE INVENTION

However, as illustrated in FIG. 
20A, for example, when an end of a sheet to be stacked on the stack tray 1515 is curled upward, the curled part is detected by a sheet surface detecting sensor S9 arranged at the stack tray 1515, the sheet surface is determined to be high, and the stack tray 1515 is lowered from the position of FIG. 20A to the position of FIG. 20B. When the stack tray 1515 is lowered in this way, the optical axis of the sheet surface detecting sensor S9 appears, and the stack tray 1515 is then lifted. When the stack tray 1515 is lifted, as illustrated in FIG. 20C, the curl at the rear end side of the sheet is resolved and the stack tray 1515 is lifted to a position higher than the position before being lowered. Here, there is a fear that the aligning members 1519 contact the uppermost sheet surface of the already stacked sheets when the aligning members 1519 are moved in the sheet width direction, even though they have been retreated from the aligning position in the height direction. When the aligning members 1519 are moved in the sheet width direction at the timing of lifting the stack tray 1515, as illustrated in FIG. 20D, the stack tray 1515 is lifted in accordance with the height of the curled part. Due to the lifting of the stack tray 1515, there is a fear that the aligning members 1519 contact the sheets, so that misalignment occurs on sheets stacked on the stack tray 1515 and already aligned, in association with the moving operation of the aligning members 1519, and the alignment of the stacked sheets is spoiled. The object of the present invention is to provide a sheet stacking apparatus and an image forming apparatus capable of performing sorting processing on sheets with fine alignment even when a part of the sheets stacked on a stack tray is curled. 
To solve the abovementioned problems, a sheet stacking apparatus of the present invention includes a conveyance device configured to convey sheets in a predetermined sheet conveyance direction, a stack tray on which sheets conveyed by the conveyance device are to be stacked, a pair of aligning members configured to abut to the sheet stacked on the stack tray in a sheet width direction intersecting with the sheet conveyance direction and to align the sheet, an aligning member lifting-lowering device configured to lift and lower the aligning members between an aligning position where the aligning members abut to and align the sheet and a retreating position where the aligning members are lifted from an upper surface of the sheet, an aligning member moving device configured to move the aligning members in the width direction, a sheet surface detecting device configured to detect an uppermost surface position of the sheets stacked on the stack tray, a stack tray lifting-lowering device configured to lift and lower the stack tray based on a detection result of the sheet surface detecting device, and a controller configured to control the aligning member lifting-lowering device, the aligning member moving device, and the stack tray lifting-lowering device. Here, the controller lifts the aligning members to the retreating position with the aligning member lifting-lowering device after the sheets stacked on the stack tray are aligned at the aligning position, and stops lifting-lowering operation of the stack tray lifting-lowering device while the aligning members are moved in the sheet width direction with the aligning member moving device. According to the sheet stacking apparatus of the present invention, a sheet conveyed to the stack tray later can be aligned without contacting the sorting-processed sheets already aligned on the stack tray, so that continuous sorting processing can be performed smoothly and reliably.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 
1 is a sectional view of a sheet stacking apparatus and an image forming apparatus of the present invention.
FIG. 2 is a block diagram illustrating the configuration of the image forming apparatus.
FIG. 3 is a sectional view of the sheet stacking apparatus.
FIG. 4 is a block diagram illustrating the configuration of the sheet stacking apparatus.
FIG. 5 is a diagram viewing a lateral registration detecting unit from a downstream side of a sheet conveyance direction.
FIG. 6 is a diagram viewing a shift unit from the downstream side of the sheet conveyance direction.
FIGS. 7A and 7B are perspective views of an aligning member.
FIGS. 8A to 8C are perspective views illustrating lifting-lowering operation of the aligning member.
FIGS. 9A to 9C are perspective views illustrating a driving portion which drives the lifting-lowering operation of the aligning member.
FIGS. 10A and 10B are explanatory views illustrating a lifting-lowering mechanism of a stack tray.
FIGS. 11A to 11F are explanatory views of switching operation of the aligning members.
FIGS. 12A to 12F are explanatory views of the switching operation of the aligning members.
FIG. 13 is a flowchart of the switching operation of the aligning members.
FIGS. 14A to 14F are explanatory views of switching operation of aligning members in a first embodiment.
FIG. 15 is a flowchart of the switching operation of the aligning members in the first embodiment.
FIGS. 16A to 16G are explanatory views of the switching operation of the aligning members in the first embodiment.
FIG. 17 is a flowchart of the switching operation of the aligning members in the first embodiment.
FIGS. 18A to 18C are explanatory views of switching operation of aligning members in a second embodiment.
FIGS. 19A to 19C are explanatory views of switching operation of aligning members in a third embodiment.
FIGS. 20A to 20D are explanatory views illustrating a series of aligning operation of sheets in a conventional sheet stacking apparatus.
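The switching sequence described in the summary (retreat the aligning members, hold the stack tray, move the members in the sheet width direction, then resume tray control) can be sketched as a minimal controller model. This is an illustrative sketch only, not the patented implementation: the class and method names are hypothetical, and the hardware actions are stubbed out as log entries so the ordering can be inspected.

```python
# Hypothetical sketch of the claimed controller sequence; all names are
# illustrative and hardware calls are replaced by log entries.

class AligningController:
    def __init__(self):
        self.ops = []            # recorded operation order
        self.tray_motion = True  # tray lifting-lowering enabled?

    def retreat_members(self):
        # Lift the aligning members clear of the uppermost sheet surface.
        self.ops.append("members_retreat")

    def move_members(self, width_position):
        # The tray must be frozen before the members traverse the stack;
        # otherwise a resolved curl can lift the sheets into the members.
        assert not self.tray_motion
        self.ops.append(f"members_move_{width_position}")

    def switch_position(self, next_position):
        self.retreat_members()            # 1. retreat in the height direction
        self.tray_motion = False          # 2. stop tray lifting-lowering
        self.ops.append("tray_hold")
        self.move_members(next_position)  # 3. move in the sheet width direction
        self.tray_motion = True           # 4. resume tray control
        self.ops.append("tray_resume")
        return self.ops
```

The point of the model is the ordering invariant: the width move is only legal while tray motion is suspended, which is exactly the condition the summary imposes.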
EMBODIMENTS OF THE INVENTION

In the following, a sheet stacking apparatus and an image forming apparatus including the sheet stacking apparatus of the present invention will be described based on FIGS. 1 to 19. Here, structural elements described in the following embodiments are only illustrative and the scope of claims of the present invention is not limited to the structural elements. As illustrated in FIG. 1, an image forming apparatus 110 is configured of an apparatus main body 100 and a sheet stacking apparatus 500 connected to the apparatus main body 100. In the apparatus main body 100, toner images of four colors (yellow, magenta, cyan, and black) are transferred by photosensitive drums 102a to 102d, which serve as an image forming device, to a sheet fed from cassette 101a or 101b; the sheet is conveyed to a fixing device 103 so that the toner images are fixed, and is then discharged from the apparatus main body 100 to the sheet stacking apparatus 500 by a sheet discharge roller 104. FIG. 2 is a block diagram of an apparatus controller which controls the image forming apparatus 110. A CPU circuit unit 630 includes a CPU 629, a ROM 631, and a RAM 650. The CPU circuit unit 630 controls an image signal controller 634, a printer controller 635, a sheet stacking apparatus controller 636, and an external interface 637. The CPU circuit unit 630 performs control in accordance with a program stored in the ROM 631 and settings from an operating unit 601. The printer controller 635 controls the apparatus main body 100 and the sheet stacking apparatus controller 636 controls the sheet stacking apparatus 500. The RAM 650 is used as an area to temporarily hold control data and as a working area for calculation associated with the control. The external interface 637 is an interface for an external computer (PC) 620. Signals are exchanged bidirectionally between the PC 620 and the CPU circuit unit 630 via the external interface 637. 
Print data is transmitted from the PC 620 to the image signal controller 634 via the external interface 637. The image signal controller 634 develops the transmitted print data into an image and outputs an image signal to the printer controller 635. Then, the image signal output from the image signal controller 634 to the printer controller 635 is input to the image forming device illustrated in FIG. 1. Next, the sheet stacking apparatus 500 will be described in detail. As illustrated in FIG. 1, a sheet discharged from the apparatus main body 100 is fed to the sheet stacking apparatus 500. As illustrated in FIG. 3, the sheet stacking apparatus 500 includes a sheet conveyance path 520 extending from an upstream side to a downstream side in a sheet conveyance direction, a sheet detecting device (inlet sensor) S0, arranged at the upstream side of the sheet conveyance path 520, which detects that a sheet is conveyed from the apparatus main body 100, and an inlet roller 501 which guides a sheet having passed through the inlet sensor S0 to the downstream side. A sheet received at the inlet roller 501 is sequentially conveyed to an inlet conveyance roller pair 502, conveyance devices (shift conveyance roller pairs) 503, 504 arranged in a shift unit 400, and discharge conveyance roller pairs 506 to 508, and is then stacked on one of a first stack tray 515 and a second stack tray 516. The sheet stacking apparatus 500 has a sorting processing function in which a sheet can be stacked while being shifted by a predetermined width in a direction intersecting with the sheet conveyance direction when sheets are discharged to the first stack tray 515 and the second stack tray 516, so that sheets can be easily sorted. 
The sorting processing function is executed by the shift device (shift unit) 400 arranged at the sheet conveyance path 520 extending from the sheet discharge roller 104 side (upstream side) of the apparatus main body 100 toward the first stack tray 515 and the second stack tray 516 (downstream side). In the present embodiment, the sheet stacking apparatus 500 is configured to be attachable to the image forming apparatus 110 as an option. However, the sheet stacking apparatus 500 may be configured to be incorporated in the image forming apparatus 110. Further, the stack trays are arranged in a two-stage structure (first and second). However, the number of stages is not limited and the structure may be one stage or three or more stages. The lateral registration detecting unit 300 is arranged at the upstream side of the shift unit 400. The lateral registration detecting unit 300 is activated when a user selects sorting processing with the operating unit 601, so that a position of a sheet on which sorting processing is to be performed by the shift unit 400 in a direction intersecting with the conveyance direction (hereinafter referred to as a sheet width direction) is detected. When the position of a sheet in the sheet width direction is detected by the lateral registration detecting unit 300, the shift unit 400 moves in a direction intersecting with the sheet conveyance direction based on the detection result. Then, with a switching flapper 509 arranged at the downstream side, a sheet fed to the discharge conveyance roller pair 508 is stacked onto the first stack tray 515 from a discharge roller pair 510, or stacked onto the second stack tray 516 via a discharge conveyance roller pair 514 after being conveyed through conveyance roller pairs 511 to 513. Switching of the switching flapper 509 is performed by turning on or off an unillustrated solenoid. A sheet stacked on the first stack tray 515 is aligned in the sheet width direction by a first aligning portion 517. 
A sheet stacked on the second stack tray 516 is aligned in the sheet width direction by a second aligning portion 518. The sheet stacking apparatus 500 includes a first sheet surface detecting sensor S1 and a second sheet surface detecting sensor S2 as detecting devices to detect an uppermost surface of sheets stacked on the first stack tray 515 and the second stack tray 516, respectively. The first stack tray 515 and the second stack tray 516 are lifted and lowered in the arrowed Z direction based on the detection results of the first sheet surface detecting sensor S1 and the second sheet surface detecting sensor S2, respectively. Thus, each of the uppermost surfaces of the sheets stacked on the first stack tray 515 and the second stack tray 516 can be kept constant. Operation of detecting the sheet surface is as follows. The first stack tray 515 or the second stack tray 516 is lifted from below, and a state in which the optical axis of the first sheet surface detecting sensor S1 or the second sheet surface detecting sensor S2 is blocked by sheets stacked on the first or second stack tray 515, 516 or by an upper surface of the first or second stack tray 515, 516 is set as the home position (HP). The first or second stack tray 515, 516 is lowered until the optical axis of the first sheet surface detecting sensor S1 or the second sheet surface detecting sensor S2 appears, and then is lifted until the optical axis is blocked again. The above operation is repeated. For example, in a case that sheets are curled upward, the first or second sheet surface detecting sensor S1 or S2 detects the curled part of the sheets and determines that the uppermost surface of the sheets is high. Therefore, the first or second stack tray 515, 516 is lowered until the optical axis of the first or second sheet surface detecting sensor S1, S2 appears, and then is lifted until the optical axis is blocked again. 
Since the upward curling of the stacked sheets is resolved by the lifting operation, the first or second stack tray 515, 516 can be lifted to provide an appropriate sheet surface height, and alignment of the sheets can be performed without engagement failure of a later-described aligning member 519. Next, the sheet stacking apparatus controller 636 which controls the sheet stacking apparatus 500 will be described based on FIG. 4. FIG. 4 illustrates an example of the controller configuration, but the configuration is not limited thereto. The sheet stacking apparatus controller 636 may be integrally arranged in the apparatus main body 100 together with the CPU circuit unit 630, and the sheet stacking apparatus 500 may be controlled from the apparatus main body 100 side. The sheet stacking apparatus controller 636 is configured of a CPU 101, a RAM 702, a ROM 703, an I/O 705, a network interface 704, a communication interface 706, and the like. The I/O 705 controls a conveyance unit controller 707 and a stacking unit controller 708. The conveyance unit controller 707 includes a lateral registration detecting drive motor M1, a shift motor M2 for moving the shift unit 400, a shift conveyance motor M3 for conveying a sheet in the shift unit 400, a lateral registration detecting sensor S3, a lateral registration detecting HP sensor S4, and a shift unit HP sensor S5. The stacking unit controller 708 includes front-back aligning member slide motors M4, M5 which are aligning member moving devices, an aligning member lifting-lowering motor M6 which is an aligning member lifting-lowering unit, first and second stack tray lifting-lowering motors M7, M8 which are stack tray lifting-lowering devices, first and second sheet surface detecting sensors S1, S2, front-back aligning member HP sensors S6, S7, and an aligning member lifting-lowering HP sensor S8. Each of the sensors S1 to S8 detects a reference position, and the motors M1 to M8 are controlled based on the detection results. 
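The surface-detection cycle described above (lower the tray until the optical axis appears, then lift it until the axis is blocked again) can be modeled as a simple settling loop. This is a minimal sketch under stated assumptions, integer heights and a one-step motor; the function and parameter names are hypothetical, not from the patent.

```python
def settle_tray(tray_height, stack_thickness, beam_height, step=1):
    """Hypothetical model of the sheet-surface detection cycle: the optical
    axis at beam_height is 'blocked' while the stack top (tray height plus
    stack thickness) is at or above it."""
    # Lower the tray until the optical axis appears (stack top drops below it).
    while tray_height + stack_thickness >= beam_height:
        tray_height -= step
    # Lift the tray until the optical axis is blocked again.
    while tray_height + stack_thickness < beam_height:
        tray_height += step
    return tray_height
```

With an upward curl, the momentary surface reading is too high, so the tray over-lowers; when the curl flattens during the lift, the loop still converges to the height at which the stack top just blocks the beam, which is the constant-surface behavior the text describes.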
Next, the lateral registration detecting unit 300 will be described in detail based on FIG. 5. FIG. 5 is a diagram viewing the lateral registration detecting unit 300 from the downstream side of the sheet conveyance direction. At the lateral registration detecting unit 300, an end part of a sheet in the sheet width direction is detected by the lateral registration detecting sensor S3 when the sheet passes through a conveyance path 309 configured of a pair of conveyance guides 307, 308, so that a position of the sheet in the sheet width direction is determined. The lateral registration detecting sensor S3 includes bearings 303, 304. Each of the bearings 303, 304 is configured to be movable in an arrowed X direction along guides 305, 306 fixed to the sheet stacking apparatus 500. The lateral registration detecting sensor S3 is moved in advance to a position corresponding to the sheet size, which is input from the operating unit 601 of the apparatus main body 100. The lateral registration detecting sensor S3 has a recess and detects an end part, in the sheet width direction, of a sheet entering the recess. The driving source for moving the lateral registration detecting sensor S3 is the lateral registration detecting drive motor M1. A timing belt 311 is operated with a pulley 313 arranged at the lateral registration detecting drive motor M1 and a pulley 312 fixed to the sheet stacking apparatus 500. The lateral registration detecting sensor S3 and the timing belt 311 are connected to each other via a fixed plate 310, and the lateral registration detecting sensor S3 can be moved in association with the operation of the timing belt 311. 
At this time, the home position (HP) of the lateral registration detecting sensor S3 is determined by detecting a fixed plate flag portion 310a arranged at the fixed plate 310 by a lateral registration detecting HP sensor S4 attached to the sheet stacking apparatus 500, the lateral registration detecting drive motor M1 is driven from the home position (HP) by predetermined pulses, and the lateral registration detecting sensor S3 is moved from the home position (HP) to a position corresponding to the sheet size. FIG. 6 is a diagram viewing the shift unit 400 from the downstream side of the sheet conveyance direction. In the shift unit 400, a conveyance path 423 is configured of conveying guides 403a, 403b. The conveyance path 423 is configured capable of sandwiching and conveying a sheet with shift conveyance rollers 503a, 503b, 504a, 504b (see FIG. 3). The shift conveyance roller pairs 503, 504 are connected to the shift conveyance motor M3 via gears 415, 416 and are configured capable of rotating forward and backward in accordance with rotation of the shift conveyance motor M3. The shift conveyance roller pairs 503, 504 and conveyance guides 403a, 403b are supported by frames 405, 406, 407, 408. Bearings 409, 410, 411, 412 fixed to the frames 405, 406, 407, 408 are configured movable along guides 413, 414. The frames 405, 406, 407, 408 are connected to a timing belt 418 via a fixed plate 419. The fixed plate 419 is configured movable with the shift motor M2 and pulleys 420, 421 via the timing belt 418. Thus, a sheet can be moved in the direction intersecting with the sheet conveyance direction while conveying the sheet in the sheet conveyance direction with the shift conveyance roller pairs 503, 504. Sheets stacked onto the first and second stack trays 515, 516 can be sorted by changing the moving direction of the shift unit 400. 
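The home-position-referenced positioning used here (detect the HP with a flag and sensor, then drive the motor by a predetermined number of pulses to reach the position for the set sheet size) amounts to a distance-to-pulse conversion. A hedged sketch follows; the drive ratio and names are assumptions for illustration, not values from the patent.

```python
PULSES_PER_MM = 10  # hypothetical drive ratio: motor pulses per mm of travel

def pulses_from_home(target_mm, home_mm=0.0):
    """Hypothetical sketch: convert a target position (derived from the sheet
    size set on the operating unit) into a pulse count for the drive motor,
    measured from the home position fixed by the HP sensor."""
    return round((target_mm - home_mm) * PULSES_PER_MM)
```

For example, moving the sensor to 105.0 mm from a home position at 0 mm would command 1050 pulses under this assumed ratio; the same conversion applies to the shift unit's HP sensor S5 and shift motor M2.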
The home position of the shift unit 400 is determined by detecting a flag portion 406a in the frame 406 by the shift unit HP sensor S5 attached to the sheet stacking apparatus 500. Next, operation of the first aligning portion 517 and the second aligning portion 518 for aligning sheets stacked on the first and second stack trays 515, 516 will be described. Here, since the first aligning portion 517 and the second aligning portion 518 have the same configuration, the first aligning portion 517 will be described and description of the second aligning portion 518 will be omitted. First, sliding operation in the front-back direction which is basic operation of the aligning member 519 and configuration members of a slide portion will be described based on FIGS. 7A and 7B. In the following, viewing the sheet stacking apparatus 500 from the direction illustrated in FIG. 3, the near side in the depth direction is referred to as front and the far side is referred to as back. As illustrated in FIG. 7A, the aligning member 519 is supported with a first aligning supporting shaft 520. The outer side of the aligning member 519 is guided by a slide member 521 and follows front-back movement of the slide member 521. The slide member 521 is supported by the first aligning supporting shaft 520 as a rotation center, similarly to the aligning member 519, and a second aligning supporting shaft 522 as a rotation stopper. The slide member 521 and a slide position detecting member 523 sandwich a second slide drive transmission belt 525 therebetween, and these three components are combined with screws. Both ends of the second slide drive transmission belt 525 are supported by a slide drive transmission pulley 526. The slide drive transmission pulley 526 is a stepped pulley and is engaged with a first slide drive transmission belt 524. The first slide drive transmission belt 524 is engaged with a pulley portion of the aligning member slide motor M4. 
That is, driving of the aligning member slide motor M4 is transmitted to the aligning member 519 via the first slide drive transmission belt 524, the slide drive transmission pulley 526, the second slide drive transmission belt 525, and the slide member 521, so that the aligning member 519 is moved front and back while being guided by the first aligning supporting shaft 520. The slide drive transmission pulley 526 is supported by a pulley supporting shaft 527 and the pulley supporting shaft 527 is swaged and fixed to a pulley supporting plate 528. Both ends of the first aligning supporting shaft 520 and the second aligning supporting shaft 522 are connected with the pulley supporting plate 528 with E rings. The aligning member 519, the pulley supporting plate 528, and the like are unitized and attached to an upper stay 529. The aligning member slide motor M4 is attached to the upper stay 529 together with a slide motor supporting plate 530. Further, the aligning member 519, the pulley supporting plate 528, and the like which are unitized, the aligning member slide motor M5, and the like are arranged at the back side as well and attached to the upper stay 529 similarly to the front side. The front aligning member HP sensor S6 which detects the position of the aligning member 519 at the front side is attached to the upper stay 529 together with an aligning position detecting supporting plate 531. Similarly, the back aligning member HP sensor S7 is attached to the upper stay 529 together with the aligning position detecting supporting plate 531. The aligning members 519 of the front side and the back side are arranged as a pair, and slid in the direction intersecting with the sheet discharge direction to align a sheet. Subsequently, lifting-lowering operation of the aligning member 519 and members of the lifting-lowering unit will be described based on FIGS. 8A to 8C and FIGS. 9A to 9C. 
As described above, the aligning member 519 is supported by the first aligning supporting shaft 520, and further as illustrated in FIGS. 8A to 8C, the aligning member 519 is engaged with a third aligning supporting shaft 532 as a rotation stopper. The third aligning supporting shaft 532 is supported with both ends thereof fitted to a hole portion 533h of an aligning member lifting-lowering pulley 533. The aligning member lifting-lowering pulley 533 is supported with the first aligning supporting shaft 520 similarly to the aligning member 519. Since the first aligning supporting shaft 520, an aligning member lifting-lowering pulley 533-1, and an aligning member lifting-lowering pulley 533-2 are engaged with a parallel pin, rotation of the aligning member lifting-lowering pulley 533-1 and rotation of the aligning member lifting-lowering pulley 533-2 are synchronized. When the aligning member lifting-lowering pulleys 533-1, 533-2 rotate, the third aligning supporting shaft 532 rotates about the first aligning supporting shaft 520, and the aligning member 519 engaged thereto rotates as well to be lifted or lowered (FIG. 8C). As illustrated in FIGS. 9A to 9C, rotary drive of a second lifting-lowering pulley 534-1 is transmitted to the aligning member lifting-lowering pulley 533-1 via a drive transmission belt 535-1. Since the second lifting-lowering pulleys 534-1, 534-2 are attached to a lifting-lowering transmission shaft 536 with D cut at both front and back, rotation of the lifting-lowering transmission shaft 536 and rotation of the second lifting-lowering pulleys 534-1, 534-2 are synchronized. Further, since a third lifting-lowering pulley 537 attached to a center part of the lifting-lowering transmission shaft 536 is engaged with a parallel pin as well, rotation of the third lifting-lowering pulley 537 and rotation of the lifting-lowering transmission shaft 536 are synchronized as well. 
That is, rotation of the second lifting-lowering pulley 534, rotation of the lifting-lowering transmission shaft 536, and rotation of the third lifting-lowering pulley 537 are synchronized. Driving of the aligning member lifting-lowering motor M6 is transmitted to the third lifting-lowering pulley 537 via a drive transmission belt 538, and further transmitted to the aligning member 519 via the lifting-lowering transmission shaft 536, the second lifting-lowering pulley 534, a drive transmission belt 535, the aligning member lifting-lowering pulley 533, and the third aligning supporting shaft 532. Thus, driving of the aligning member lifting-lowering motor M6 is transmitted to the aligning member 519 and the lifting-lowering operation of the aligning member 519 is performed. The second lifting-lowering pulley 534-1 transmits driving to the aligning member lifting-lowering pulley 533-1 at the back side and the second lifting-lowering pulley 534-2 transmits driving to the aligning member lifting-lowering pulley 533-2 at the front side. Thus, driving is transmitted to the aligning members 519 at both of the front side and back side for lifting and lowering. When the aligning member lifting-lowering pulley 533-1 at the back side is rotated, an aligning member lifting-lowering pulley 533-4 at the backmost is rotated as well. At this time, a flag portion 533-4f of the aligning member lifting-lowering pulley 533-4 turns on and off the aligning member lifting-lowering HP sensor S8 which detects a lifting-lowering position of the aligning member 519, so that the lifting-lowering position of the aligning member 519 is detected and controlled. Thus, driving of the aligning member lifting-lowering motor M6 is transmitted to the aligning members 519 at both of the front side and back side for lifting and lowering, and rotation and positions of the aligning members 519 of the front side and back side are controlled while lifting-lowering (rotation) thereof are synchronized. 
According to the above operation, for sheets larger than a predetermined size in the width direction intersecting with the sheet discharging direction of the discharge roller pair 510, sheets are stacked onto the first and second stack trays 515, 516 while being aligned in the direction intersecting with the sheet discharging direction by the aligning member 519, and after a predetermined number, which is specified by a user, of sheets are stacked, the aligning member 519 is lifted or lowered to retreat from the aligning position. At this time, in a case that sorting is to be performed on the first and second stack trays 515, 516, stacking onto the first and second stack trays 515, 516 is performed in a state that sheets are moved by the shift unit 400 in a direction intersecting with the conveyance direction, and then, aligning operation for a first set is performed. Then, after discharging of the first set is completed, the moving direction of the shift unit 400 is changed. Then, to move the aligning member 519 to the aligning position in the width direction for a second set in which a predetermined amount has been sorted, the aligning member 519 is once retreated from the aligning position in the height direction and lowered to the aligning position in the height direction again after being moved to the aligning position in the width direction. Details of the operation will be described later. By repeating the above operation, a set number of sheets set by a user are stacked onto the first and second stack trays 515, 516. Subsequently, lifting-lowering operation of the first stack tray 515 and the second stack tray 516 will be described based on FIGS. 10A and 10B. 
The first and second stack trays 515, 516 are selectively used in accordance with situations, and are selectable by a user in accordance with copy output, printer output, sample output, interrupt output, output at the time of stack tray overflow, function sorting output, output at the time of job mixing, and the like. The first and second stack trays 515, 516 have a first stack tray lifting-lowering motor M7 and a second stack tray lifting-lowering motor M8, respectively, enabling lifting and lowering in the vertical direction independently, and are attached to a rack 571 attached to a frame 570 of the sheet stacking apparatus 500 in the vertical direction. Here, since the first and second stack trays 515, 516 have the same configuration, the first stack tray 515 will be described in the present embodiment and description of the second stack tray 516 will be omitted. As illustrated in FIGS. 10A and 10B, the first stack tray 515 is configured to have a first stack tray lifting-lowering motor M7 being a stepping motor attached to a tray base plate 572 and a pulley press fit onto the first stack tray lifting-lowering motor M7 transmits driving to a pulley 574 via a timing belt 573. A shaft 575 connected to the pulley 574 with a parallel pin transmits driving to a ratchet 576 connected thereto with the parallel pin as well, the ratchet 576 being urged to an idler gear 577 with an unillustrated spring. The idler gear 577 transmits driving to a gear 578 connected thereto, and the gear 578 transmits driving to a gear 579 connected thereto. Another gear 579 is attached thereto via a shaft 580 for driving the first stack tray 515 from both the front side and the back side, and these two gears 579 are connected to the rack 571 via a gear 581. The first stack tray 515 is fixed owing to that two rollers 582 arranged at one side are settled in the rack 571 also functioning as a roller receiver. 
At the first stack tray 515, the first stack tray lifting-lowering motor M7, the idler gear 577, the base plate 572 supporting the above, an unillustrated sheet supporting plate attached onto the base plate 572, and the like integrally configure a tray unit. Thus, owing to that driving of the first and second stack tray lifting-lowering motors M7, M8 is transmitted, the first and second stack trays 515, 516 are configured to be capable of being lifted and lowered in the arrowed Z direction in FIG. 3. Next, operation (800) of the aligning members 519 at the time of performing sorting processing will be described based on FIGS. 11 to 13. Here, since the aligning members 519 included in the first stack tray 515 and the second stack tray 516 have the same configuration and control, in the following embodiments, the first stack tray 515 and the second stack tray 516 are simply referred to as a stack tray 515 and the first sheet surface detecting sensor S1 is referred to as a sheet surface detecting sensor S1. As illustrated in FIGS. 11A and 11B, when control of the sheet sorting processing is started, a pair of the aligning members 519 wait at a height position above a predetermined aligning position. A second aligning member 519b waits at a position away by a predetermined distance in the width direction from a position where a sheet is discharged and a first aligning member 519a being a reference side waits at an aligning position of stacked sheets. As illustrated in FIG. 11C, when a sheet is stacked onto the stack tray 515, the second aligning member 519b moves to an abutting side in the width direction and aligns the sheet toward the aligning position (FIG. 13 (801, 802)). Thus, when alignment of preceding sheets in a single sorting unit is completed, as illustrated in FIG. 11D, the second aligning member 519b moves in the width direction from the abutting side and retreats. 
When alignment of a final sheet in the single sorting unit is completed, as illustrated in FIGS. 11E and 11F, the pair of aligning members 519 are lifted and retreat to a height position above the aligning position (FIG. 13 (803 to 805)). When preceding sorting processing is thus completed and a sheet of a subsequent sorting unit is conveyed, as illustrated in FIG. 12A, the pair of aligning members 519 move parallel in the width direction by a predetermined amount and wait at a height position above the aligning position where a subsequent sheet is to be aligned. Then, as illustrated in FIGS. 12C and 12D, the pair of aligning members 519 are lowered to the aligning position (FIG. 12D) from the retreating position (FIG. 12B) in the height direction, and a subsequent sheet is discharged (FIG. 13 (806 to 808)). Then, as illustrated in FIG. 12E, the first aligning member 519a moves in the width direction so that a sheet is abutted to the second aligning member 519b and aligned, and when alignment is completed, the first aligning member 519a retreats in the width direction and waits for reception of a subsequent sheet. As described above, a sheet on which sort processing is to be performed is to abut to one of the first aligning member 519a and the second aligning member 519b, with the other moving toward the one. Switching of this operation is performed by once retreating the aligning members 519 to a retreating position in the height direction, moving the aligning members 519 to the aligning position of a subsequent sheet in the width direction, and lowering the aligning members 519 to the aligning position in the height direction. According to the above, when the aligning members 519 move to the aligning position in the width direction, contact with stacked sheets does not occur, so that displacement of sheets already sorted and aligned does not occur.
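The retreat-slide-lower switching sequence described above can be summarized as a small control sketch. All names, units, and position values here are illustrative assumptions, not taken from the patent:

```python
# Control-sequence sketch of the reference-side switching: retreat in
# height, slide in width, lower again, so the members never slide while
# at sheet height and the stacked sheets are not disturbed.

ALIGN_Z = 0.0     # aligning position in the height direction (assumed units)
RETREAT_Z = 20.0  # retreating position above the stacked sheets (assumed)

class AligningMember:
    """Width (x) and height (z) position of one aligning member."""
    def __init__(self, x, z=ALIGN_Z):
        self.x = x
        self.z = z

def switch_reference_side(members, new_x_positions, trace=None):
    """Move the pair of aligning members to the next set's aligning
    position without touching the stacked sheets."""
    log = trace.append if trace is not None else (lambda s: None)
    for m in members:                     # 1) lift both members clear of the stack
        m.z = RETREAT_Z
    log('lifted')
    for m, x in zip(members, new_x_positions):
        m.x = x                           # 2) slide in the width direction
    log('slid')
    for m in members:                     # 3) lower back to the aligning height
        m.z = ALIGN_Z
    log('lowered')
```

The point of the ordering is visible in the trace: the width-direction slide only ever happens while both members are at the retreating height.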
Although there is no problem for the sheet aligning operation of the sorting processing described above if sheets stacked on the stack tray 515 are flat, there may be a case that a part of the sheets, having an image formed thereon at the image forming apparatus 110, curls depending on the type of sheets used and the usage environment. In the present invention, assuming such a case, a plurality of controls for smoothly performing the sorting processing and discharging sheets are included. FIGS. 14A to 14F and FIG. 15 illustrate the sorting and aligning operation of a first embodiment and control flow thereof in a case that a part of sheets stacked on the stack tray 515 is curled. As illustrated in FIG. 14A, in a case that a part of sheets stacked on the stack tray 515 is curled upward, the sheet surface detecting sensor S1 detects the curled sheet surface at the upstream side and determines that the sheet surface is high, and the stack tray 515 is lowered as illustrated in FIG. 14B (FIG. 15 (821 to 823)). Then, the stack tray 515 is stopped when it has been lowered until the optical axis of the sheet surface detecting sensor S1 appears (FIG. 15 (824, 825)). Conventionally, the stack tray 515 is lifted in a case that a signal for switching an aligning reference side of the aligning members 519 is input. However, in the present embodiment, the stack tray 515 is not lifted and, as illustrated in FIG. 14C, the aligning members 519 are lifted to a predetermined retreating position in the height direction (FIG. 15 (826, 827)). Then, as illustrated in FIG. 14D, the aligning members 519 are moved to the aligning position of a subsequent sheet in the width direction, and after the movement of the aligning members 519 in the width direction is completed, as illustrated in FIG. 14E, the aligning members 519 are lowered to the aligning position in the height direction (FIG. 15 (828, 829)). Then, as illustrated in FIG.
14F, when the optical axis of the sheet surface detecting sensor S1 is blocked after starting lifting of the stack tray 515, the lifting of the stack tray 515 is stopped (FIG. 15 (830 to 832)). In a case that the switching signal of the aligning members 519 is not input when the optical axis of the sheet surface detecting sensor S1 appears, the stack tray 515 is lifted as in the conventional control. In the sorting processing described above, since the switching signal for the aligning members 519 can be received at any time, there may be a case that timing for lifting the stack tray 515 cannot be ensured when a plurality of sets are sorted with one sheet as a unit. Accordingly, in a case that a plurality of sets are sorted with one sheet as a unit, the stack tray 515 is controlled to be lifted. That is, in the present invention, as illustrated in FIGS. 14A to 14F, the lifting-lowering operation of the stack tray 515 with the stack tray lifting-lowering device is controlled to be stopped while the aligning members 519 are moved in a case that the number of sheets to be aligned at the predetermined aligning position in a single sorting unit is equal to or larger than a predetermined number, for example equal to or larger than two. On the other hand, in a case that the number of sheets to be aligned in a single sorting unit is smaller than the predetermined number, such as one, the lifting-lowering operation of the stack tray 515 with the stack tray lifting-lowering device is not stopped, so that speed-up of the processing is achieved. The number of sheets for control of the lifting-lowering operation of the stack tray 515 can be appropriately set in accordance with a type, a thickness, and the like of the sheets. Here, in the flow of the present embodiment, the aligning members 519 are moved in the width direction (FIG. 15 (828)), the aligning members 519 are lowered to the aligning position in the height direction, and then, the stack tray 515 is lifted.
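The tray-lift gating rule in this paragraph reduces to a simple predicate. The two-sheet threshold comes from the text ("equal to or larger than two"); the function and constant names are assumptions for illustration:

```python
# Gating rule: pause the stack-tray lift while the aligning members slide
# in the width direction, but only for multi-sheet sorting units; one-sheet
# units keep the tray moving so throughput is not lost.

MIN_SET_SIZE_FOR_PAUSE = 2  # "equal to or larger than two" in the text

def tray_lift_allowed(aligning_members_moving, sheets_per_sorting_unit):
    """Return False (pause the lift) only while the aligning members are
    moving and the sorting unit holds at least the predetermined number."""
    return not (aligning_members_moving
                and sheets_per_sorting_unit >= MIN_SET_SIZE_FOR_PAUSE)
```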
However, similar effects can be obtained by moving the aligning members 519 in the width direction, lifting the stack tray 515, and then lowering the aligning members 519 to the aligning position, or by lifting the stack tray 515 and lowering the aligning members 519 to the aligning position at the same time. FIGS. 16A to 16G and FIG. 17 illustrate operation in a case that the switching signal of the aligning members 519 is received during lifting of the stack tray 515 (FIG. 17 (840)). Similarly to the case described above, when sheets curled upward are stacked, the stack tray 515 is lowered until the optical axis of the sheet surface detecting sensor S1 appears (FIG. 17 (841 to 845)), as in the first embodiment illustrated in FIGS. 16A and 16B. When the optical axis of the sheet surface detecting sensor S1 appears, the stack tray 515 is lifted as illustrated in FIG. 16C. When the switching signal of the reference side of the aligning members 519 is received during the lifting, the lifting of the stack tray 515 is stopped even when the optical axis of the sheet surface detecting sensor S1 is not blocked, as illustrated in FIG. 16D, and the aligning members 519 are lifted to the retreating position in the height direction (FIG. 17 (846 to 849)). Then, as illustrated in FIG. 16E, the aligning members 519 are moved to the aligning position of a subsequent sheet in the width direction and the aligning members 519 are controlled to be lowered to the aligning position in the height direction (FIG. 17 (850, 851)). When the optical axis of the sheet surface detecting sensor S1 is blocked while lifting the stack tray 515 again, the lifting of the stack tray 515 is stopped. FIGS. 18A to 18C and FIGS. 19A to 19C illustrate the sorting and aligning operation of a second embodiment and control flow thereof. As illustrated in FIGS.
18A to 18C, when sheets are stacked onto the stack tray 515 and the switching signal of the reference side of the aligning members 519 for sorting of the sheets is received, the aligning members 519 are lifted from the aligning position in the height direction illustrated in FIG. 18A to the retreating position in the height direction illustrated in FIG. 18B. At this time, the aligning members 519 are retreated to a position reliably clear of a curled sheet surface. In this state, as illustrated in FIG. 18C, the aligning members 519 are controlled to be moved to the aligning position of subsequent sheets in the width direction. Since the time required for lifting and lowering the aligning members 519 increases, a signal to delay sheet discharging is transmitted from the sheet stacking apparatus controller 636 to the printer controller 635, as illustrated in FIG. 2. Alternatively, pulses output from the stack tray lifting-lowering motor M7 illustrated in FIG. 10 may be stored in the sheet stacking apparatus controller 636, the output pulse number of the stack tray lifting-lowering motor M7 during lowering of the stack tray 515 may be counted, and control may be performed to change the retreating position in the height direction of the aligning members 519 in accordance with the pulse number. For example, when the output pulse number during lowering of the stack tray 515 is smaller than a predetermined pulse number, the retreating amount of the aligning members 519 in the height direction is decreased, and when the output pulse number is larger than the predetermined pulse number, it is determined that an upward curl exists because the lowering amount of the stack tray 515 is large, and the retreating amount of the aligning members 519 is increased. According to such variable control, the number of times the apparatus main body 100 is caused to delay sheet discharging can be minimized.
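The pulse-count control described above can be sketched as follows. The threshold and the two retreat amounts are illustrative assumptions, since the text only fixes a comparison against a predetermined pulse number:

```python
# Pulse-count-based retreat control: the stepping motor's output pulses
# counted during tray lowering approximate the lowering amount, and a
# large lowering amount implies an upward curl on the stack.

CURL_PULSE_THRESHOLD = 400  # predetermined pulse number (assumed value)
RETREAT_SMALL = 10.0        # retreat amount for a flat stack (assumed units)
RETREAT_LARGE = 25.0        # retreat amount when an upward curl is inferred

def retreat_amount(lowering_pulses):
    """Choose the height-direction retreat of the aligning members from the
    stack tray lifting-lowering motor pulses counted while the tray lowered."""
    if lowering_pulses < CURL_PULSE_THRESHOLD:
        return RETREAT_SMALL  # small lowering -> flat stack -> short retreat
    return RETREAT_LARGE      # large lowering -> curled sheets -> long retreat
```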
Here, as illustrated in FIGS. 19A to 19C, similar effects can be obtained by lowering the stack tray 515 by a predetermined amount from the position of FIG. 19A to the position of FIG. 19B when the switching signal of the reference side of the aligning members 519 is received, and then moving the aligning members 519 to the aligning position of a subsequent sheet in the width direction as illustrated in FIG. 19C. Further, as a control of a third embodiment, lifting and lowering of the stack tray 515 may be repeated by continuously monitoring the sheet surface height of sheets on the stack tray 515 with the sheet surface detecting sensor S1 while sheets are discharged to the stack tray 515. In the present embodiment, in a case that the sorting processing of sheets is performed on the stack tray 515, detection of the sheet surface by the sheet surface detecting sensor S1 is controlled to be stopped after a predetermined time passes from completion of moving the sheets by the shift unit 400. Change of the moving direction by the shift unit 400 means change of the sorting direction and occurrence of switching of the reference side of the aligning members 519. Since monitoring of the sheet surface by the sheet surface detecting sensor S1 is not performed for the predetermined time, the lifting-lowering operation of the stack tray 515 is not performed at that time as well. Accordingly, effects similar to controlling the stack tray lifting-lowering motors M7, M8 not to lift and lower the stack tray 515 can be obtained. Here, as illustrated in FIG. 3, with a configuration in which the first stack tray 515 is arranged above and the second stack tray 516 is arranged below, since the conveyance path length differs, a time not to monitor with the second sheet surface detecting sensor S2 may be set longer than a time not to monitor with the first sheet surface detecting sensor S1 in consideration of the path length difference. 16698034 canon finetech nisca inc.
USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 27th, 2022 09:14AM Apr 27th, 2022 09:14AM Technology Technology Hardware & Equipment
nyse:caj Canon Apr 19th, 2022 12:00AM May 1st, 2019 12:00AM https://www.uspto.gov?id=US11308589-20220419 Devices, systems, and methods for enhancing images Devices, systems, and methods obtain an image; perform at least one nonlinear locally-adaptive mapping on the image, thereby generating a revised image; generate an edge-enhanced image based on the revised image; and perform dynamic contrast stretching on the edge-enhanced image, thereby generating an enhanced image. 11308589 1. A method comprising: obtaining an initial image that is defined in an image space; performing, in the image space, at least one nonlinear locally-adaptive mapping on the initial image, thereby generating a revised image in the image space, wherein the at least one nonlinear locally-adaptive mapping on the initial image includes a first nonlinear locally-adaptive mapping on the initial image in the image space that generates a first revised value for pixel p in the initial image based on the following: a value of pixel p in the initial image, respective values of pixels in a neighborhood of pixel p in the initial image, a maximum value of respective values of all pixels in the initial image, and a central tendency of the respective values of all the pixels in the initial image; performing, in the image space, edge enhancement on the initial image, thereby generating an edge-enhanced image in the image space; generating a synthesized image based on the edge-enhanced image and on the revised image; and performing, in the image space, dynamic contrast stretching on the revised image in the image space, thereby generating an enhanced image in the image space. 2. 
The method of claim 1, wherein performing at least one nonlinear locally-adaptive mapping on the initial image includes: performing a first nonlinear locally-adaptive mapping on the initial image, thereby generating an interim image in the image space; and performing a second nonlinear locally-adaptive mapping on the interim image, thereby generating the revised image in the image space. 3. The method of claim 2, wherein an output pixel value of the first nonlinear locally-adaptive mapping on the initial image can be described by the following: ytemp = (max(Iin) + f + a1) · x / (x + f + a1), where f = (LP(x) + mean(Iin)) / 2, where ytemp is the output pixel value of the first nonlinear locally-adaptive mapping, where Iin is the initial image, where x is a pixel value in the initial image Iin, where max(Iin) is a maximum pixel value in the initial image Iin, where a1 is a strength-control parameter, where mean(Iin) is a central tendency of all pixel values in the initial image Iin, and where LP(x) is a value at the pixel after the initial image Iin has been convolved with a low-pass filter. 4. The method of claim 3, wherein an output pixel value of the second nonlinear locally-adaptive mapping on the interim image can be described by the following: yout = (max(Ytemp) + h + a2) · ytemp / (ytemp + h + a2), where h = (LP(ytemp) + mean(Ytemp)) / 2, where yout is the output pixel value of the second nonlinear locally-adaptive mapping, where Ytemp is the interim image, where max(Ytemp) is the maximum pixel value in the interim image, where mean(Ytemp) is a central tendency of all pixel values in the interim image, where a2 is a strength-control parameter, and where LP(ytemp) is a value at the pixel after the interim image Ytemp has been convolved with a low-pass filter. 5.
The method of claim 1, further comprising: obtaining an indication of a region of interest in the initial image, wherein the at least one nonlinear locally-adaptive mapping on the initial image includes a region-of-interest-based nonlinear locally-adaptive mapping. 6. The method of claim 5, wherein an output pixel value of the region-of-interest-based nonlinear locally-adaptive mapping on the initial image can be described by the following: ytemp = (max(Iin) + f + a1) · x / (x + f + a1), where f = LP(x) + mean(IROI), where ytemp is the output pixel value of the region-of-interest-based nonlinear locally-adaptive mapping, where Iin is the initial image, where x is an input pixel value, where max(Iin) is a maximum pixel value in the initial image, where a1 is a strength-control parameter, where mean(IROI) is a central tendency of all pixel values in the region of interest in the initial image, and where LP(x) is a value at the pixel after the initial image has been convolved with a low-pass filter. 7. The method of claim 1, wherein the at least one nonlinear locally-adaptive mapping maps pixel values in the initial image to respective pixel values in the revised image, wherein each pixel value in the initial image maps to a respective pixel value in the revised image; and maps two pixel values in the initial image that are identical and that have different neighborhood pixel values to two different respective pixel values in the revised image. 8. The method of claim 7, wherein a highest pixel value in the initial image is equal to a highest pixel value in the revised image, and wherein a lowest pixel value in the initial image is equal to a lowest pixel value in the revised image. 9.
The method of claim 1, wherein performing the at least one nonlinear locally-adaptive mapping on the initial image includes: generating a respective scaling factor for each pixel in the initial image, wherein each pixel's respective scaling factor is based, at least in part, on the maximum value of the respective values of all pixels in the initial image and on respective pixel values of pixels in a neighborhood of the pixel, and multiplying each pixel's respective value by the pixel's respective scaling factor. 10. The method of claim 9, wherein each pixel's respective scaling factor is further based, at least in part, on the pixel's value in the initial image. 11. A device comprising: one or more computer-readable media storing instructions; and one or more processors that are configured to communicate with the one or more computer-readable media to execute the instructions to cause the device to perform operations comprising: obtaining an image that is defined in an image space; performing, in the image space, one or more nonlinear locally-adaptive mappings on the image, thereby generating a revised image in the image space, wherein performing the one or more nonlinear locally-adaptive mappings on the image includes generating a respective scaling factor for each pixel in the image and multiplying each pixel's respective value by the pixel's respective scaling factor; performing an inverse tone mapping on the image, thereby generating an inverse-tone-mapped image; generating an adjusted-dynamic-range image in the image space based on the revised image in the image space and on the inverse-tone-mapped image; synthesizing the revised image and the adjusted-dynamic-range image, thereby generating a synthesized image; and performing dynamic contrast stretching on the synthesized image. 12.
The device of claim 11, wherein the operations further comprise: enhancing one or more edges in the image, thereby generating an edge-enhanced image, and wherein generating the adjusted-dynamic-range image is further based on the edge-enhanced image. 13. The device of claim 11, wherein the synthesized image includes pixels that have respective values, and wherein performing the dynamic contrast stretching on the synthesized image includes: obtaining a first value and a second value, wherein the first value is less than the second value, and modifying the respective values of the pixels in the synthesized image such that a first percentage of the pixels in the synthesized image have respective values that are lower than the first value and such that a second percentage of the pixels in the synthesized image have respective values that are lower than the second value. 14. The device of claim 11, wherein the operations further comprise: obtaining an indication of a region of interest in the image, and wherein the one or more nonlinear locally-adaptive mappings include at least one region-of-interest-based nonlinear locally-adaptive mapping. 15.
One or more non-transitory computer-readable media storing instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform operations comprising: obtaining an image that is defined in an image space; performing, in the image space, a first nonlinear locally-adaptive mapping on the image, thereby generating an interim image in the image space, wherein performing the first nonlinear locally-adaptive mapping on the image includes generating a respective scaling factor for each pixel in the image and multiplying each pixel's respective value by the pixel's respective scaling factor; performing, in the image space, a second nonlinear locally-adaptive mapping on the interim image in the image space, thereby generating a revised image; enhancing one or more edges in the image, thereby generating an edge-enhanced image; and generating a synthesized image based on the edge-enhanced image and on the revised image, wherein an output pixel value of the first nonlinear locally-adaptive mapping on the image can be described by the following: ytemp = (max(Iin) + f + a1) · x / (x + f + a1), where f = (LP(x) + mean(Iin)) / 2, where ytemp is the output pixel value of the first nonlinear locally-adaptive mapping, where Iin is the image, where x is a pixel value in the image Iin, where max(Iin) is a maximum pixel value in the image, where a1 is a strength-control parameter, where mean(Iin) is a central tendency of all pixel values in the image, and where LP(x) is a value at the pixel after the image has been convolved with a low-pass filter. 16.
The one or more non-transitory computer-readable media of claim 15, wherein an output pixel value of the second nonlinear locally-adaptive mapping on the interim image can be described by the following: yout = (max(Ytemp) + h + a2) · ytemp / (ytemp + h + a2), where h = (LP(ytemp) + mean(Ytemp)) / 2, where yout is the output pixel value of the second nonlinear locally-adaptive mapping, where Ytemp is the interim image, where max(Ytemp) is the maximum pixel value in the interim image, where mean(Ytemp) is a central tendency of all pixel values in the interim image, where a2 is a strength-control parameter, and where LP(ytemp) is a value at the pixel after the interim image Ytemp has been convolved with a low-pass filter. 17. The one or more non-transitory computer-readable media of claim 15, wherein the operations further comprise: performing dynamic contrast stretching on the synthesized image, thereby generating an adjusted-dynamic-range image. 18. The one or more non-transitory computer-readable media of claim 15, wherein the operations further comprise: performing an inverse tone mapping on the image, thereby generating an inverse-tone-mapped image, and generating an adjusted-dynamic-range image based on the inverse-tone-mapped image and on the revised image. 18 CROSS-REFERENCE TO RELATED APPLICATIONS This application claims the benefit of U.S. Application No. 62/666,600, which was filed on May 3, 2018; of U.S. Application No. 62/788,661, which was filed on Jan. 4, 2019; and of U.S. Application No. 62/812,571, which was filed on Mar. 1, 2019. BACKGROUND Technical Field This application generally concerns image enhancement. Background One approach to enhance an image's details (e.g., contrast, edges) is to convolve the image with an edge-enhancement filter, such as a Laplacian filter or a Laplacian of Gaussian filter, or to adjust the intensities of the pixels.
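The claimed operations can be sketched numerically. The following NumPy illustration implements the mapping of claims 3 and 4, a percentile form of the claim-13 contrast stretching, and a claim-15-style pipeline; the box filter standing in for the unspecified low-pass filter LP, the unsharp-mask edge enhancement, the equal-weight synthesis, and all parameter values are assumptions made for illustration, not the patent's prescribed choices:

```python
import numpy as np

def box_lowpass(img, radius=1):
    """Box filter standing in for LP(x); the claims leave the low-pass
    filter unspecified, so this kernel is an assumption."""
    p = np.pad(img, radius, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * radius + 1) ** 2

def nl_adaptive_map(img, a):
    """One nonlinear locally-adaptive mapping (claims 3, 4, 16):
    y = (max(I) + f + a) * x / (x + f + a), with f = (LP(x) + mean(I)) / 2."""
    img = img.astype(float)
    f = (box_lowpass(img) + img.mean()) / 2.0
    return (img.max() + f + a) * img / (img + f + a)

def dynamic_contrast_stretch(img, first_value, second_value, p1=2.0, p2=98.0):
    """Monotone linear remap so roughly p1% of pixels fall below first_value
    and p2% fall below second_value (claim 13); the percentages are assumed."""
    lo, hi = np.percentile(img, p1), np.percentile(img, p2)
    return first_value + (img - lo) * (second_value - first_value) / (hi - lo)

def enhance(img, a1=8.0, a2=8.0):
    """End-to-end sketch: first and second mappings, unsharp-mask edge
    enhancement, equal-weight synthesis, then contrast stretching."""
    img = img.astype(float)
    revised = nl_adaptive_map(nl_adaptive_map(img, a1), a2)
    edged = img + (img - box_lowpass(img))
    synthesized = 0.5 * (revised + edged)
    return dynamic_contrast_stretch(synthesized, 0.0, 255.0)
```

Note that the mapping leaves the image's maximum unchanged (the pixel holding max(I) maps to exactly max(I), matching claim 8), while two identical pixel values with different neighborhoods receive different outputs through the LP(x) term (claim 7).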
Also, tone mapping can be used to map an input image that has limited dynamic range from one set of intensities to another set of intensities to approximate the appearance of a high-dynamic-range image. SUMMARY Some embodiments of a method comprise obtaining an image; performing at least one nonlinear locally-adaptive mapping on the image, thereby generating a revised image; generating an edge-enhanced image based on the revised image; and performing dynamic contrast stretching on the edge-enhanced image, thereby generating an enhanced image. Some embodiments of a device comprise one or more computer-readable media storing instructions and one or more processors. And the one or more processors are configured to communicate with the one or more computer-readable media to execute the instructions to cause the device to perform operations that comprise obtaining an image; performing two or more nonlinear locally-adaptive mappings on the image, thereby generating a revised image; and generating an adjusted-dynamic-range image based on the revised image. Some embodiments of one or more computer-readable storage media store instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform operations that comprise obtaining an image; performing a first nonlinear locally-adaptive mapping on the image, thereby generating an interim image; and performing a second nonlinear locally-adaptive mapping on the image, thereby generating a revised image. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 illustrates an example embodiment of an image-enhancement system. FIG. 2 illustrates an example embodiment of an operational flow for enhancing an image. FIG. 3 illustrates example embodiments of mapping curves with different strength-control parameters. FIG. 4 illustrates example embodiments of mapping curves. FIG. 5 illustrates an example embodiment of a user interface that displays an original image. FIG. 
6 illustrates an example embodiment of a user interface that displays an enhanced image. FIG. 7 illustrates an example embodiment of an operational flow for enhancing an image. FIG. 8 illustrates an example embodiment of a user interface. FIG. 9 illustrates an example embodiment of a user interface. FIG. 10 illustrates an example embodiment of an operational flow for enhancing an image. FIG. 11 illustrates example embodiments of mapping curves that show the effects of the mean value of a Region of Interest. FIG. 12 illustrates an example embodiment of a mapping curve that shows the effects of a strength-control parameter. FIG. 13 illustrates an example embodiment of a mapping curve. FIGS. 14A and 14B illustrate example embodiments of an input image and an enhanced image. FIG. 15 illustrates an example embodiment of an operational flow for enhancing an image. FIG. 16 illustrates an example embodiment of an operational flow for enhancing an image. FIGS. 17A-D illustrate examples of the effects of an embodiment of the inverse-tone-mapping operations. FIGS. 18A-B illustrate an example of the effects of an embodiment of locally-adaptive-mapping operations. FIGS. 19A-B illustrate an example of a comparison between an original image and a post-processed adjusted-dynamic-range image. FIG. 20 illustrates an example embodiment of an image-enhancement system. DESCRIPTION The following paragraphs describe certain explanatory embodiments. Other embodiments may include alternatives, equivalents, and modifications. Additionally, the explanatory embodiments may include several novel features, and a particular feature may not be essential to some embodiments of the devices, systems, and methods that are described herein. FIG. 1 illustrates an example embodiment of an image-enhancement system 10. 
The image-enhancement system 10 includes one or more image-enhancement devices 100, each of which is a specially-configured computing device (e.g., a specially-configured desktop computer, a specially-configured laptop computer, a specially-configured server); one or more image-capturing devices, such as an x-ray detector 110A or a camera 110B (collectively, the “image-capturing devices 110”); and at least one display device 120. The image-capturing devices 110 capture images, and the one or more image-enhancement devices 100 obtain the images. The one or more image-enhancement devices 100 then enhance the images, and the one or more image-enhancement devices 100 may send the enhanced images to the display device 120, which displays the enhanced images. To enhance the images, the one or more image-enhancement devices 100 may convolve the images with an edge-enhancement filter. But, because of the limitations of the image-capturing devices 110, the dynamic range of the display device 120, or the structure of the human vision system, some details in the images might barely be captured or might not be identified by the edge-enhancement filter, for example the details in an underexposed region or the details that lack sufficient local contrast. Re-distributing the pixel intensities with a global-mapping function, such as gamma correction or histogram equalization, may make some of these details more visible. In global mapping, for a pixel with a certain input value, the output value will be a known fixed value that is based on the mapping curve, regardless of the other pixels that surround the pixel. But global mapping may be less sensitive to local contrast than human eyes are. So even the edge-enhancement filter may not be able to detect and enhance the details in these regions.
Some embodiments of the one or more image-enhancement devices 100 perform one or more nonlinear locally-adaptive mappings on the images and use the output of the one or more nonlinear locally-adaptive mappings as inputs to an edge-enhancement filter, which may increase the visible details in underexposed regions and low-contrast regions in the images. And some embodiments of the one or more image-enhancement devices 100 perform dynamic contrast stretching on the images, which may further enhance the details of the images. FIG. 2 illustrates an example embodiment of an operational flow for enhancing an image. Although this operational flow and the other operational flows that are described herein are presented in a certain order, some embodiments of these operational flows perform at least some of the operations in different orders than the presented orders. Examples of different orders include concurrent, parallel, overlapping, reordered, simultaneous, incremental, and interleaved orders. Also, some embodiments of these operational flows include operations from more than one of the embodiments that are described herein. Thus, some embodiments of the operational flows may omit blocks, add blocks, change the order of the blocks, combine blocks, or divide blocks into more blocks relative to the example embodiments of the operational flows that are described herein. Furthermore, although this operational flow and the other operational flows that are described herein are performed by an image-enhancement device, some embodiments of these operational flows are performed by two or more image-enhancement devices or by one or more other specially-configured computing devices. The operational flow in FIG. 2 starts in block B200, wherein the image-enhancement device obtains an image (e.g., from an image-capturing device, from an image-repository device). 
Next, in block B210, the image-enhancement device normalizes the values of the pixels in the image, for example by normalizing them to [0,1]. Also, some embodiments do not include block B210. The flow then moves to block B220, where the image-enhancement device performs a first locally-adaptive mapping (e.g., a first nonlinear locally-adaptive mapping) on the input image (the image that is input to block B220), for example the normalized image in embodiments that include block B210 or the image obtained in block B200 in embodiments that omit block B210. In some embodiments, the first locally-adaptive mapping of an input pixel x can be described by the following: ytemp = ((max(Iin) + f + α1) * x) / (x + f + α1), (1) where f = (LP(x) + mean(Iin)) / 2, where Iin is the input image, where max(Iin) is the maximum pixel value in the input image Iin, where ytemp is the output pixel value of the first locally-adaptive mapping, where α1 is a factor to control the mapping strength (a strength-control parameter), and where LP(x) is the value at pixel x after the input image Iin has been convolved with a low-pass filter. The smaller the strength-control parameter α1 is, the stronger the boosting will be on the low-intensity pixels and the stronger the compression will be on the high-intensity pixels. Examples of low-pass filters in LP(x) include a two-dimensional (2D) median filter, a two-dimensional (2D) mean filter, and a 2D Gaussian filter. Equation (1) may be understood by comparing it to equation (2): Y = X / (X + X0). (2) Instead of having a fixed factor X0, equation (1) uses a factor (f+α1) that incorporates information from the entire input image (mean(Iin)), information from the current pixel's surrounding region (LP(x)) (e.g., neighborhood), and a strength-control parameter (α1). FIG. 3 illustrates example embodiments of mapping curves with different strength-control parameters. FIG.
3 shows mapping curves of equation (1) with different strength-control parameter (α1) values when mean(X)=0.5 and LP(x)=x. Also, FIG. 4 illustrates example embodiments of mapping curves. FIG. 4 shows mapping curves that have different surrounding regions when mean(X)=0.5 and α1=0.1. FIG. 4 indicates the following: (1) If the current pixel's value is higher than the values of the pixels in the surrounding neighborhoods, then more boosting will be performed; and (2) if the current pixel's value is lower than the values of the pixels in the surrounding neighborhoods, then less boosting will be performed. This operation may enhance the local contrast. The operations in block B220 generate an interim image Ytemp that is composed of the output pixel values of the first locally-adaptive mapping. Next, in block B230, the image-enhancement device performs a second locally-adaptive mapping (e.g., a second nonlinear locally-adaptive mapping) on the interim image Ytemp. In some embodiments, the second locally-adaptive mapping of an input pixel ytemp (a pixel in the image that is input to block B230) can be described by the following: yout = ((max(Ytemp) + h + α2) * ytemp) / (ytemp + h + α2), (3) where h = (LP(ytemp) + mean(Ytemp)) / 2, where yout is the output value of the pixel, where Ytemp is the interim image (Ytemp includes pixels {ytemp1, ytemp2, . . . }), where max(Ytemp) is the maximum pixel value in the interim image Ytemp, where α2 is a strength-control parameter, and where LP(ytemp) is the value at the pixel after the interim image Ytemp has been convolved with a low-pass filter. After the second locally-adaptive mapping, at least some of the pixels in the original image (the image obtained in block B200) have been mapped to new values based on the mean of the values of all the pixels in the image, on the values of the pixels in the region near the pixel, and on the respective pixel's original value.
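The two mappings in blocks B220 and B230 share the same functional form, so they can be sketched as one reusable function applied twice. The following is a minimal NumPy sketch, not the patent's implementation: the 5x5 mean filter for LP, the default strength-control values, and the function names are illustrative assumptions.

```python
import numpy as np


def adaptive_map(img, alpha, size=5):
    # Shared form of equations (1) and (3):
    #   y = (max(I) + f + alpha) * x / (x + f + alpha),
    #   f = (LP(x) + mean(I)) / 2
    # LP is taken here to be a simple 2-D mean filter (one of the low-pass
    # filters the text allows); `size` is an assumed window size.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    lp = windows.mean(axis=(-2, -1))
    f = (lp + img.mean()) / 2.0
    return (img.max() + f + alpha) * img / (img + f + alpha)


def two_stage_mapping(img, alpha1=0.1, alpha2=0.1):
    # Block B220: the first mapping produces the interim image Y_temp.
    y_temp = adaptive_map(img, alpha1)
    # Block B230: the same curve, re-derived from Y_temp itself (equation (3)).
    return adaptive_map(y_temp, alpha2)
```

Because max(I) is at least as large as every pixel value, the curve never maps a pixel above the image maximum, and low-intensity pixels are boosted, consistent with the behavior described above.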
Underexposed pixels are boosted, overexposed pixels are compressed, and local contrast increases. The flow then moves to block B240, where the image-enhancement device performs edge enhancement on the image that was output by block B230. For example, some embodiments of the image-enhancement device convolve the image with a Laplacian of Gaussian (LOG) 2D filter. In this embodiment, the Gaussian operator smoothes the noise while the Laplacian operator detects the edges in the image. In some embodiments, after block B240 is performed, the values of pixels that show the edges are zero or close to zero, while the values of the pixels in other areas in the image are below zero. Thus, the image-enhancement device may then normalize the image (e.g., to [0,1]) to give all the pixels respective non-negative values. Because the operations in blocks B220-230 may improve the local contrast in the image, they may also help the LOG filter better detect and amplify the details in the image. Next, in block B250, the image-enhancement device performs dynamic contrast stretching on the image that is output by block B240, thereby generating an adjusted-dynamic-range image. When performing dynamic contrast stretching, some embodiments of the image-enhancement device calculate value L1 and value L2 such that t1 percent of the pixels in the whole image have values lower than L1 and such that t2 percent of the pixels in the whole image have values lower than L2. In some embodiments, a user can define t1 and t2, but t1 has to be smaller than t2. For example, in some embodiments, t1 is set to 1% while t2 is set to 95%. Then the dynamic-contrast-stretched image Xresult is generated, for example as described by equations (4), (5), and (6): Xfiltered(Xfiltered < L1) = L1, (4) Xfiltered(Xfiltered > L2) = L2, (5) and Xresult = (Xfiltered − L1) / (L2 − L1), (6) where Xfiltered is the output of block B240.
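The dynamic contrast stretching of block B250 amounts to a percentile clip followed by a linear rescale. A minimal sketch, assuming NumPy; the defaults follow the example thresholds t1 = 1% and t2 = 95% given in the text:

```python
import numpy as np


def dynamic_contrast_stretch(x_filtered, t1=1.0, t2=95.0):
    # L1 and L2 are the intensities below which t1% and t2% of the pixels
    # fall, so they adapt to each image (t1 must be smaller than t2).
    l1, l2 = np.percentile(x_filtered, [t1, t2])
    clipped = np.clip(x_filtered, l1, l2)   # equations (4) and (5)
    return (clipped - l1) / (l2 - l1)       # equation (6)
```

The result always spans [0, 1], because at least t1% of the pixels are clipped down to L1 and at least (100 − t2)% are clipped up to L2 before rescaling.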
Because L1 and L2 are calculated based on t1 and t2, the values may be dynamically changed for each image. Also, in this embodiment, the dynamic-contrast-stretched image Xresult is the enhanced image. Finally, in block B260, the image-enhancement device outputs the enhanced image. The enhanced image may reveal more details, both in underexposed regions and at the edges. Also, the contrast may be improved. Before output or storage, some embodiments scale the values to a different range for display or storage purposes, for example to [0,255]. Additionally, some embodiments of the operational flow use the output of block B220, the output of block B230, or the output of block B240 as the enhanced image. FIG. 5 illustrates an example embodiment of a user interface that displays an original image, and FIG. 6 illustrates an example embodiment of a user interface that displays an enhanced image. The enhanced image may have a contrast that has been increased, relative to the original image, both globally and locally, to make more details visible to a human viewer. Also, the LOG filter may be able to detect more details and edges after the local contrast has been modified (e.g., enhanced) by one or more nonlinear locally-adaptive mappings. And an image may be enhanced pixel by pixel, and the enhancement's strength may be calculated to fit each pixel's respective location in the image. Accordingly, some embodiments of the operational flow combine locally-adaptive mapping, a LOG filter, and dynamic contrast stretching to enhance contrast in an image. And some embodiments of the operational flow calculate a mapping factor for each pixel based on the image, on the region surrounding the pixel, and on the pixel's value. Furthermore, some embodiments of the operational flow convolve the locally-adaptive-mapped image with a LOG filter for further detail enhancement.
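The LOG filtering step described above can also be sketched directly. The kernel size and sigma below are illustrative assumptions (the text does not specify them), the kernel is only proportional to a true Laplacian of Gaussian and is normalized to zero sum so that flat regions map to zero, and the convolution is a naive same-size implementation:

```python
import numpy as np


def log_kernel(size=9, sigma=1.4):
    # Discrete kernel proportional to a Laplacian of Gaussian:
    # (r^2 - 2*sigma^2) * exp(-r^2 / (2*sigma^2)), zero-mean so that a
    # constant (edge-free) region produces a zero response.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2.0 * sigma**2) / sigma**4 * np.exp(-r2 / (2.0 * sigma**2))
    return k - k.mean()


def convolve2d(img, kernel):
    # Naive "same"-size 2-D convolution with edge padding.
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)), mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (kh, kw))
    return np.einsum("ijkl,kl->ij", windows, kernel[::-1, ::-1])
```

Convolving a flat image yields (numerically) zero everywhere, while a step edge produces a strong response near the discontinuity, matching the edge-detection behavior described above.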
Moreover, the strength of the contrast stretching may be calculated as a percentage and may vary from image to image. FIG. 7 illustrates an example embodiment of an operational flow for enhancing an image. The operational flow starts in block B700, where an image-enhancement device obtains an image (e.g., from an image-capturing device, from an image-repository device). The image-enhancement device may also normalize the image. Next, in block B705, the image-enhancement device sets a pixel counter p to one and sets a pixel total P to the total number of pixels in the image. The flow then moves to block B710. In block B710, the image-enhancement device determines if the pixel counter p is less than or equal to the pixel total P. If yes (block B710=Yes), then the flow proceeds to block B715. In block B715, the image-enhancement device generates (e.g., calculates) a first revised value for pixel p based on some or all of the following: the value of pixel p, a strength-control parameter, the values of the pixels in the neighborhood of pixel p, the maximum pixel value in the image, and the central tendency (e.g., mean, median) of all the pixel values in the image. Some embodiments of block B715 can be described by equation (1). The first revised value for pixel p is added to an interim image. The flow then moves to block B720, where the image-enhancement device increases the pixel counter p by one, and then the flow returns to block B710. If in block B710 the image-enhancement device determines that the pixel counter p is not less than or equal to the pixel total P (block B710=No), then the flow proceeds to block B725. In block B725, the image-enhancement device resets the pixel counter p to one. The flow then moves to block B730, where the image-enhancement device determines if the pixel counter p is less than or equal to the pixel total P. If yes (block B730=Yes), then the flow proceeds to block B735. 
In block B735, the image-enhancement device generates (e.g., calculates) a second revised value for pixel p based on some or all of the following: the first revised value of pixel p (in the interim image), a strength-control parameter (which may or may not be the same as the strength-control parameter in block B715), the values of the pixels in the neighborhood of pixel p in the interim image, the maximum pixel value in the interim image, and the central tendency (e.g., mean, median) of all the first revised pixel values in the interim image. Some embodiments of block B735 can be described by equation (3). The flow then moves to block B740, where the image-enhancement device increases the pixel counter p by one, and then the flow returns to block B730. If in block B730 the image-enhancement device determines that the pixel counter p is not less than or equal to the pixel total P (block B730=No), then the flow proceeds to block B745. In block B745, the image-enhancement device performs edge enhancement on the image that is composed of the second revised pixel values (the output of block B740). Next, in block B750, the image-enhancement device performs dynamic contrast stretching on the edge-enhanced image (the output of block B745), thereby generating an adjusted-dynamic-range image. Also, in this embodiment, the adjusted-dynamic-range image is the enhanced image. Finally, in block B755, the image-enhancement device outputs (e.g., to a display device) or stores the enhanced image (the output of block B750). Additionally, some embodiments of the operational flow use the output of blocks B710-B720, the output of blocks B725-B740, or the output of block B745 as the enhanced image. FIG. 8 illustrates an example embodiment of a user interface. The user interface 850 displays an image 851. In this embodiment, the image 851 is not an enhanced image. 
The user interface 850 also includes defect highlights 852, which show where potential defects have been detected in the image 851. The highlight thumbnails 853 show zoomed-in views of the areas in the image that include the potential defects. In this embodiment, although the image 851 is not an enhanced image, the highlight thumbnails 853 show views of the areas from a corresponding enhanced image of the image 851. In some embodiments, the user can click on a highlight thumbnail 853 to cause the user interface 850 to zoom in to the area in the image 851 that includes the highlight thumbnail 853. The user interface 850 also includes other controls 855 that allow a user to adjust aspects of the interface (e.g., zoom, cursor mode, marker type, view type). FIG. 9 illustrates an example embodiment of a user interface. The user interface 950 displays an image 951. In this embodiment, the image 951 is an enhanced image. The user interface 950 also includes defect highlights 952, which show where potential defects have been detected in the image 951. The highlight thumbnails 953 show zoomed-in views of the areas in the enhanced image 951 that include the potential defects. In this embodiment, the highlight thumbnails 953 show views of the areas from the enhanced image 951. In some embodiments, the user can click on (or otherwise activate) a highlight thumbnail 953 to cause the user interface 950 to zoom in to the area in the enhanced image 951 that includes the highlight thumbnail 953. The user interface 950 also includes other controls 955 that allow a user to adjust aspects of the display (e.g., zoom, cursor mode, marker type, view type). FIG. 10 illustrates an example embodiment of an operational flow for enhancing an image. This example embodiment of an operational flow performs a locally-adaptive tone mapping that emphasizes a Region-of-Interest (ROI). The operational flow in FIG.
10 starts in block B1000, wherein an image-enhancement device obtains an image and a selection of an ROI. For example, the selection of the ROI may be received from a user via an input device, received from another computing device, or generated by the image-enhancement device. Next, in block B1010, the image-enhancement device normalizes the values of the pixels in the obtained image, for example by normalizing them to [0,1]. Also, some embodiments of the operational flow do not include block B1010. The flow then splits into a first flow and a second flow. The first flow moves to block B1020, where the image-enhancement device performs a first ROI-based locally-adaptive mapping (e.g., a nonlinear locally-adaptive mapping) on the pixels of the normalized image (or the obtained image in embodiments that omit block B1010). Some embodiments of the first ROI-based locally-adaptive mapping of an input pixel x can be described by the following: ytemp = ((max(Iin) + f + α1) * x) / (x + f + α1), (7) where f = LP(x) + mean(IROI), where Iin is the input image (the image that is input to block B1020), where ytemp is the output pixel value of the first ROI-based locally-adaptive mapping, where α1 is a strength-control parameter, where LP(x) is the value at pixel x after the input image has been convolved with a low-pass filter, and where IROI is the ROI in the input image. For example, in some embodiments, the value of LP(x) is the weighted average value of all the pixels in the neighborhood of pixel x. Also, the mean value of the ROI (mean(IROI)) operates as a baseline for the overall mapping curve. The larger the mean value of the ROI (mean(IROI)) is, the brighter the ROI is, and the mapping curve will have less compression in the high-intensity pixels to preserve more details in the ROI.
When the mean value of the ROI (mean(IROI)) is smaller, the ROI region contains more lower-intensity pixels, and the mapping curve will have more boosting in the lower-intensity pixels to reveal more details in the ROI. FIG. 11 illustrates example embodiments of mapping curves that show the effects of the mean value of the ROI (mean(IROI)) in equation (7). In the mapping curves in FIG. 11, LP(x)=x and α1=0.1. Additionally, unlike the fixed factor X0 in equation (2), equation (7) uses a factor (f+α1) that incorporates information from the current pixel x, information from a ROI (mean(IROI)), information from a surrounding region of the current pixel (LP(x)), and a strength-control parameter α1. Also, equation (7) incorporates information from the whole image (max(Iin)). FIG. 12 illustrates example embodiments of mapping curves that show the effects of the strength-control parameter α1 in equation (7). In the mapping curves in FIG. 12, LP(x)=x and mean(IROI)=0.5. Also, FIG. 13 illustrates example embodiments of mapping curves. FIG. 13 illustrates mapping curves that are based on different surrounding regions LP(x), where mean(IROI)=0.5 and α1=0.1. FIG. 13 shows the following: (1) If the current pixel's value is higher than the values of the pixels in the surrounding neighborhood (or neighborhoods), then more boosting will be performed; and (2) if the current pixel's value is lower than the values of the pixels in the surrounding neighborhood (or neighborhoods), then less boosting will be performed (less boosting may increase the local contrast). The operations in block B1020 may enhance the local contrast. The operations in block B1020 generate an interim image Ytemp that is composed of the output pixel values of the first ROI-based locally-adaptive mapping. Next, in block B1030, the image-enhancement device performs a second ROI-based locally-adaptive mapping (e.g., a nonlinear locally-adaptive mapping) on the interim image Ytemp.
Some embodiments of the second ROI-based locally-adaptive mapping can be described by the following: yout = ((max(Ytemp) + h + α2) * ytemp) / (ytemp + h + α2), (8) where h = (LP(ytemp) + mean(Ytemp_ROI)) / 2, where Ytemp is the interim image (an image that was output by block B1020), where ytemp is a pixel's value in Ytemp, where yout is the output value of the pixel, where LP(ytemp) is the value at the pixel after the interim image Ytemp has been convolved with a low-pass filter, and where Ytemp_ROI is the ROI in Ytemp. After the second ROI-based locally-adaptive mapping, each pixel value in the original image will have been mapped to a new pixel value based on the maximum pixel value in the entire image, on the mean pixel value in the ROI, on the values of the pixels in the region near the pixel (the surrounding region), and on the pixel's original value. Underexposed pixels are boosted, overexposed pixels are compressed, and local contrast increases. The first flow then moves to block B1060. Also, from block B1010, the second flow moves to block B1040, where the image-enhancement device performs edge enhancement on the image that was output by block B1010 (or on the image that was obtained in block B1000 in embodiments that omit block B1010). For example, some embodiments of the image-enhancement device convolve the image with a Laplacian of Gaussian (LOG) 2D filter. After block B1040 is performed, the values of pixels away from any edges are closer to zero, while the values of pixels around the edges are more different from each other (e.g., occupy a greater range). Next, in block B1050, the image-enhancement device normalizes the image that was output by block B1040 (e.g., to [0,1]), for example to give all the pixels respective non-negative values in some embodiments. The second flow then moves to block B1060.
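The ROI-based mapping differs from the earlier mapping only in its baseline: the central tendency is taken over the ROI pixels instead of the whole image. A minimal NumPy sketch following the form of equation (8); the boolean mask representation of the ROI, the 5x5 mean filter for LP, and the default strength value are illustrative assumptions:

```python
import numpy as np


def roi_adaptive_map(img, roi_mask, alpha=0.1, size=5):
    # Same curve shape as the non-ROI mapping, but the baseline term uses
    # the mean of the ROI pixels (roi_mask is a boolean array selecting
    # the ROI, an assumed representation of the ROI selection).
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    lp = windows.mean(axis=(-2, -1))
    f = (lp + img[roi_mask].mean()) / 2.0   # ROI mean as the curve's baseline
    return (img.max() + f + alpha) * img / (img + f + alpha)
```

A brighter ROI raises f, which flattens the boosting of low intensities and preserves more of the high-intensity range, as the text describes.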
The output of block B1030 has an enhanced local contrast relative to the image that was obtained in block B1000, especially in the ROI, and the output of block B1050 has enhanced high-frequency details relative to the image that was obtained in block B1000. In block B1060, the image-enhancement device generates (e.g., constructs) a synthesized image using the outputs of blocks B1030 and B1050. In some embodiments, the synthesizing can be described by the following: Xr = (1 − a) * yout + a * yLOG, (9) where Xr is the synthesized image, where yout is the output of block B1030, where yLOG is the output of block B1050, and where a is a parameter that controls the strength of detail enhancement. The larger a is, the stronger the effects of the edge enhancement will be. In block B1070, the image-enhancement device performs dynamic contrast stretching on the synthesized image that is output by block B1060, thereby generating an adjusted-dynamic-range image. When performing dynamic contrast stretching, some embodiments of the image-enhancement device calculate value L1 and value L2 such that t1 percent of the pixels in the whole image have values lower than L1 and t2 percent of the pixels in the whole image have values lower than L2. In some embodiments, a user can define t1 and t2, but t1 has to be smaller than t2. For example, in some embodiments, t1 is set to 3% while t2 is set to 98%. Then the adjusted-dynamic-range image is generated, for example as described by equations (10), (11), and (12): Xr(Xr < L1) = L1, (10) Xr(Xr > L2) = L2, (11) and Xfinal = (Xr − L1) / (L2 − L1), (12) where Xfinal is the adjusted-dynamic-range image (the enhanced image, in some embodiments), and where Xr is the synthesized image. Because L1 and L2 are calculated based on t1 and t2, the values may be dynamically changed for each image.
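Blocks B1060 and B1070 reduce to a weighted blend followed by a percentile clip-and-rescale. A minimal sketch, assuming NumPy; the blend weight a = 0.3 is an illustrative assumption, while t1 = 3% and t2 = 98% follow the example values in the text:

```python
import numpy as np


def synthesize_and_stretch(y_out, y_log, a=0.3, t1=3.0, t2=98.0):
    # Equation (9): blend the locally-adaptive-mapped image with the
    # normalized LOG edge image; a larger `a` strengthens detail enhancement.
    x_r = (1.0 - a) * y_out + a * y_log
    # Equations (10)-(12): clip at the t1-th and t2-th percentiles, rescale.
    l1, l2 = np.percentile(x_r, [t1, t2])
    x_r = np.clip(x_r, l1, l2)
    return (x_r - l1) / (l2 - l1)
```

Because L1 and L2 come from percentiles of the blended image itself, the stretch adapts to each synthesized image rather than using fixed clip levels.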
After the local contrast enhancement, the detail enhancement, the image synthesis, and the dynamic contrast stretching, the enhanced image may reveal more details (e.g., details in underexposed regions, edges) and may provide a more visually-appealing image of the ROI. FIGS. 14A and 14B illustrate example embodiments of an input image (an image obtained in block B1000) in FIG. 14A and an enhanced image in FIG. 14B. The white rectangles mark the ROI in the images. Finally, in block B1080, the image-enhancement device outputs or stores the enhanced image. The enhanced image may reveal more details, both in underexposed regions and at the edges. Also, the contrast may be improved relative to the input image. Before output or storage, some embodiments scale the pixel values to a different range for display or storage purposes, for example to [0,255]. Additionally, some embodiments of the operational flow use the output of block B1020, the output of block B1030, or the output of block B1060 as the enhanced image. FIG. 15 illustrates an example embodiment of an operational flow for enhancing an image. The operational flow starts in block B1500, where an image-enhancement device obtains an image and a selection of a ROI (e.g., from an image-capturing device, from an image server). The image-enhancement device may also normalize the obtained image. The operational flow then splits into a first flow and a second flow. The first flow proceeds to block B1505, where the image-enhancement device sets a pixel counter p to one and sets a pixel total P to the total number of pixels in the image. The first flow then moves to block B1510. In block B1510, the image-enhancement device determines whether the pixel counter p is less than or equal to the pixel total P. If yes (block B1510=Yes), then the first flow proceeds to block B1515.
In block B1515, the image-enhancement device generates (e.g., calculates) a first ROI-based revised value for pixel p based on some or all of the following: the value of pixel p, a strength-control parameter, the values of the pixels in the neighborhood of pixel p, the maximum pixel value in the obtained image, and a central tendency (e.g., mean, median) of the values of all the pixels in the ROI. Some embodiments of block B1515 can be described by equation (7). The image-enhancement device adds the first ROI-based revised value for pixel p to an interim image. The first flow then moves to block B1520, where the image-enhancement device increases the pixel counter p by one, and then the first flow returns to block B1510. If in block B1510 the image-enhancement device determines that the pixel counter p is not less than or equal to the pixel total P (block B1510=No), then the first flow proceeds to block B1525. In block B1525, the image-enhancement device resets the pixel counter p to one. The first flow then moves to block B1530, where the image-enhancement device determines if the pixel counter p is less than or equal to the pixel total P. If yes (block B1530=Yes), then the first flow proceeds to block B1535. In block B1535, the image-enhancement device generates (e.g., calculates) a second ROI-based revised value for pixel p based on some or all of the following: the first ROI-based revised value of pixel p (the value of pixel p in the interim image), a strength-control parameter (which may or may not be the same as the strength-control parameter in block B1515), the values of the pixels in the neighborhood of pixel p in the interim image, the maximum of the values of all the pixels in the interim image, and a central tendency (e.g., mean, median) of the values of all the pixels in the ROI in the interim image. Some embodiments of block B1535 can be described by equation (8). 
The first flow then moves to block B1540, where the image-enhancement device increases the pixel counter p by one, and then the first flow returns to block B1530. If in block B1530 the image-enhancement device determines that the pixel counter p is not less than or equal to the pixel total P (block B1530=No), then the first flow proceeds to block B1550. Also, from block B1500, the second flow proceeds to block B1545. In block B1545, the image-enhancement device performs edge enhancement on the obtained image. The second flow then moves to block B1550. In block B1550, the image-enhancement device generates a synthesized image based on the edge-enhanced image and on the second ROI-based revised values of the pixels. Some embodiments of the synthesizing can be described by equation (9). In block B1555, the image-enhancement device performs dynamic contrast stretching on the synthesized image (the output of block B1550), thereby generating an adjusted-dynamic-range image. In some embodiments (e.g., the embodiment in FIG. 15), the adjusted-dynamic-range image is the enhanced image. Finally, in block B1560, the image-enhancement device outputs or stores the enhanced image (the output of block B1555). Before output or storage, some embodiments scale the pixel values to a different range for display or storage purposes, for example to [0,255]. Additionally, some embodiments of the operational flow use the output of blocks B1505-B1520, the output of blocks B1525-B1540, or the output of block B1550 as the enhanced image. FIG. 16 illustrates an example embodiment of an operational flow for enhancing an image. This operational flow performs tone mapping and enhances the dynamic range of the image. Some tone mapping applies a gamma-shaped function to an original image, globally or locally.
When applied globally, the gamma-shaped function adjusts the histogram of the whole image by boosting the low-mid tones of the image while compressing mid-high tones to maximize the contrast in the low-mid tone regions. When applied locally (e.g., to better reproduce local contrast), the image may be divided into multiple regions (e.g., based on location, content, intensity level), and then the parameters of the gamma-shaped function may be tuned for the individual region before the gamma-shaped function is applied to the region. Because the gamma-shaped function compresses high tones in the image, the gamma function may work most effectively for natural-scene images where the background has a higher luminance (e.g., sky, light source) and contains fewer useful details. Then the dynamic range in the high-luminance region is traded off to enhance contrast and reveal more details in the low-mid tone regions. However, because of different imaging techniques or imaging goals, sometimes both the low-luminance regions and the high-luminance regions in an image contain important details or information. In these situations, some embodiments attempt to preserve details and enhance details or contrast in both the high- and low-luminance regions. The operational flow in FIG. 16 starts in block B1600, and then, in block B1610, an image-enhancement device obtains an image. In some embodiments, the flow then moves to block B1620, where the image-enhancement device preprocesses the obtained image (the image obtained in block B1610) by scaling the pixel values in the obtained image, for example to [0, 1], which means that the minimal pixel value of the obtained image is set to 0 and the maximal pixel value of the obtained image is set to 1. Any value in between is linearly scaled. Additionally, some embodiments of the operational flow omit block B1620. The flow then splits into a first flow and a second flow. 
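The linear scaling of block B1620 (minimum mapped to 0, maximum to 1, everything in between scaled linearly) can be sketched as:

```python
import numpy as np

def scale_to_unit_range(image):
    """Linear min-max scaling as described for block B1620: the minimal
    pixel value of the image maps to 0, the maximal to 1, and every
    value in between is linearly scaled."""
    image = np.asarray(image, dtype=np.float64)
    lo, hi = image.min(), image.max()
    if hi == lo:                      # constant image: avoid division by zero
        return np.zeros_like(image)
    return (image - lo) / (hi - lo)
```

For example, `scale_to_unit_range([2, 4, 6])` yields `[0.0, 0.5, 1.0]`.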
The first flow proceeds to block B1630, where the image-enhancement device performs an inverse tone mapping on the obtained image (e.g., in embodiments that omit block B1620) or the scaled image (the output of block B1620). In some embodiments, the inverse tone mapping can be described by Ihigh = 1 − (1 − Iin)^g,  (13) where Iin is the image (e.g., the scaled image, the obtained image) that is input to block B1630, where Ihigh is the inverse-tone-mapped image, and where g is a gamma factor that controls the shape of a gamma function. The operation that is described by (1−Iin) performs an inverse operation on the image, which means high-luminance regions are changed into low-luminance regions. The gamma operation on this inversed image may enhance the contrast of the current low-luminance regions, which are the high-luminance regions in the original image. Then the inverse operation inverses the image back so that relatively high-luminance regions will be unchanged compared to the original image. And, because the re-inverse operation is linear, the contrast that has been enhanced during the gamma operation will be maintained. So in the inverse-tone-mapped image Ihigh, the contrast in the mid-high luminance region is enhanced and more details may be revealed in that region. Also, different gamma factors g can be used for different regions in the image. Additionally, some embodiments use different gamma-shaped operations than the global gamma operations that can be described by equation (13). FIGS. 17A-D illustrate examples of the effects of an embodiment of the inverse-tone-mapping operations. In this example, the gamma factor is 0.5 (g=0.5). FIG. 17A shows an original image, where “1” marks some of the low luminance regions, “2” marks some of the mid-luminance regions, and “3” marks some of the high-luminance regions. FIG. 17B shows the inversed image of the original image. FIG. 17C shows the inversed image after gamma-enhancement. And FIG. 
17D shows the inverse-tone-mapped image of block B1630, which is the inversed image of FIG. 17C. FIGS. 17A-D show that, after block B1630, the contrast in the mid-high luminance regions, as marked by “2” and “3,” has been enhanced, and the details are more visible to human vision. The first flow then proceeds to block B1650. From block B1620, the second flow moves to block B1640, where the image-enhancement device performs a local-adaptive mapping on the obtained image (e.g., in embodiments that omit block B1620) or the scaled image (the output of block B1620). Consequently, block B1640 can be performed before or in parallel with block B1630. In some embodiments, the local-adaptive mapping applies twice-adaptive nonlinear local mapping and can be described by equations (1) and (3). For example, in some embodiments, in block B1640 the image-enhancement device performs the operations that are described in blocks B220-B230 in FIG. 2, in blocks B705-B740 in FIG. 7, in blocks B1020-B1030 in FIG. 10, or in blocks B1505-B1540 in FIG. 15. In the embodiment shown in FIG. 16, block B1640 includes performing a first locally-adaptive mapping in block B1642 (e.g., a mapping that can be described by equation (1) or equation (7)) and includes performing a second locally-adaptive mapping in block B1644 (e.g., a mapping that can be described by equation (3) or equation (8)). FIGS. 18A-B illustrate an example of the effects of an embodiment of the locally-adaptive-mapping operations. In this example, both α1 and α2 were set to 0.1. FIG. 18A is the original image with low, mid, and high luminance regions marked by “1,” “2,” and “3,” respectively. FIG. 18B shows the image after twice performing locally-adaptive-nonlinear-mapping operations (e.g., as described by equations (1) and (3)). FIGS. 18A-B show that, after block B1640, the dark regions (marked as “1” and “2”) in the original image in FIG. 18A have been enhanced and more details (e.g., circular lines) are visible in FIG. 18B. 
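The inverse tone mapping of block B1630, equation (13), can be sketched as below. The function name is illustrative; the default g = 0.5 matches the FIG. 17 example.

```python
import numpy as np

def inverse_tone_mapping(i_in, g=0.5):
    """Inverse tone mapping of block B1630, equation (13):
    Ihigh = 1 - (1 - Iin)^g, for Iin scaled to [0, 1].
    The gamma factor g controls the shape of the gamma curve;
    invert -> gamma -> invert back, so mid-high tones gain contrast."""
    i_in = np.asarray(i_in, dtype=np.float64)
    return 1.0 - (1.0 - i_in) ** g
```

Note that the endpoints are preserved: an input of 0 maps to 0 and an input of 1 maps to 1, while mid-high values are spread apart.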
Also, some embodiments use a locally-adaptive tone mapping operation to produce better local contrast in low-mid tone regions. And some embodiments use other gamma-shaped global tone-mapping operations or locally-adaptive tone-mapping operations. The flow then moves to block B1650, where the image-enhancement device generates (e.g., reconstructs, synthesizes) an adjusted-dynamic-range image based on the images that are output by blocks B1630 and B1640. After the operations in blocks B1630 and B1640, the image-enhancement device has two images that have different contrast-enhanced regions. The inverse tone mapping in block B1630 enhances contrast or dynamic range in mid-high tone regions, and the mapping in block B1640 enhances contrast or dynamic range in low-mid tone regions. In block B1650, the image-enhancement device generates an adjusted-dynamic-range image based on these two images. In some embodiments, the generation of an adjusted-dynamic-range image Iadr can be described by the following: Iadr = normalize(β*Ihigh + (1−β)*Ilow),  (14) where Ihigh is the image generated in block B1630, in which the mid-high luminance regions have been contrast enhanced; where Ilow is the image generated in block B1640, in which the low-mid luminance regions have been contrast enhanced; and where β is a factor between 0 and 1 that controls the contrast-enhancement strength for the adjusted-dynamic-range image Iadr. A smaller β yields an adjusted-dynamic-range image Iadr that has more dynamic range in low-mid tone regions and less dynamic range in mid-high tone regions, and a larger β yields an adjusted-dynamic-range image Iadr that has less dynamic range in low-mid tone regions and more dynamic range in mid-high tone regions. For example, some embodiments use β=0.5. After adding the weighted outputs of blocks B1630-B1640, the adjusted-dynamic-range image Iadr may be normalized to [0,1]. 
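The weighted synthesis of block B1650, equation (14), can be sketched as below (function name illustrative; normalization here is the min-max scaling to [0, 1] described in the text).

```python
import numpy as np

def adjust_dynamic_range(i_high, i_low, beta=0.5):
    """Equation (14): Iadr = normalize(beta*Ihigh + (1-beta)*Ilow).
    beta in [0, 1] trades contrast enhancement between regions: a larger
    beta favors mid-high tones (Ihigh), a smaller beta favors low-mid
    tones (Ilow). The blend is then min-max normalized to [0, 1]."""
    i_high = np.asarray(i_high, dtype=np.float64)
    i_low = np.asarray(i_low, dtype=np.float64)
    blended = beta * i_high + (1.0 - beta) * i_low
    lo, hi = blended.min(), blended.max()
    if hi == lo:                      # constant blend: avoid division by zero
        return np.zeros_like(blended)
    return (blended - lo) / (hi - lo)
```

The example β=0.5 from the text gives both inputs equal weight.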
With a proper selection of β, the adjusted-dynamic-range image Iadr output in block B1650 has an enhanced dynamic range in mid-high tone regions and in low-mid tone regions. Accordingly, β makes the operations in block B1650 tunable, and the strength of the tuning can be different for different regions in the image. In some embodiments, for example, the image can first be segmented into two or more different regions based on the regions' contents or histograms. Different βs are used in different regions to maximize the contrast. At the boundary area between two different regions, β is calculated and used as the weighted average of the βs in the two regions to achieve a smooth transition. Next, in block B1660, the image-enhancement device performs post processing on the adjusted-dynamic-range image, thereby generating an enhanced image. For example, to further increase the dynamic range for the image and amplify the details, some embodiments perform one or more post-processing operations on the adjusted-dynamic-range image. The post-processing operations may include a convolution with a LoG (Laplacian of Gaussian, or Mexican hat) 2D filter and dynamic contrast stretching (e.g., as described in block B260 in FIG. 2, block B750 in FIG. 7, block B1070 in FIG. 10, and block B1555 in FIG. 15). FIGS. 19A-B illustrate an example of a comparison between an original image (an image obtained in block B1610) and an enhanced image. FIG. 19A shows the original image, and FIG. 19B shows the enhanced image. The contrast has been enhanced in all three luminance regions (marked by 1, 2, and 3). Also, some embodiments omit block B1660. The flow then moves to block B1670, where the image-enhancement device stores or outputs the enhanced image. The flow ends in block B1680. Additionally, some embodiments of the operational flow use the output of block B1630, the output of block B1640, or the output of block B1650 as the enhanced image. FIG. 20 illustrates an example embodiment of an image-enhancement system. 
The system 20 includes an image-enhancement device 2000, which is a specially-configured computing device; an image-capturing device 2010 (e.g., an x-ray-imaging device); a display device 2020; and an image-repository device 2030 (e.g., a file server, a database server). In this embodiment, the image-enhancement device 2000, the image-capturing device 2010, and the image-repository device 2030 communicate via one or more networks 2099, which may include a wired network, a wireless network, a LAN, a WAN, a MAN, and a PAN. Also, in some embodiments of the system 20, the devices communicate via other wired or wireless channels. The image-enhancement device 2000 includes one or more processors 2001, one or more I/O components 2002, and storage 2003. Also, the hardware components of the image-enhancement device 2000 communicate via one or more buses or other electrical connections. Examples of buses include a universal serial bus (USB), an IEEE 1394 bus, a Peripheral Component Interconnect (PCI) bus, a Peripheral Component Interconnect Express (PCIe) bus, an Accelerated Graphics Port (AGP) bus, a Serial AT Attachment (SATA) bus, and a Small Computer System Interface (SCSI) bus. The one or more processors 2001 include one or more central processing units (CPUs), which include microprocessors (e.g., a single core microprocessor, a multi-core microprocessor); one or more graphics processing units (GPUs); one or more tensor processing units (TPUs); one or more application-specific integrated circuits (ASICs); one or more field-programmable-gate arrays (FPGAs); one or more digital signal processors (DSPs); or other electronic circuitry (e.g., other integrated circuits). 
The I/O components 2002 include communication components (e.g., a graphics card, a network-interface controller) that communicate with the image-capturing device 2010, the display device 2020, the image-repository device 2030, the network 2099, and other input or output devices (not illustrated), which may include a keyboard, a mouse, a printing device, a touch screen, a light pen, an optical-storage device, a scanner, a microphone, a drive, and a game controller (e.g., a joystick, a control pad). The storage 2003 includes one or more computer-readable storage media. As used herein, a computer-readable storage medium, in contrast to a mere transitory, propagating signal per se, refers to a computer-readable medium that includes an article of manufacture, for example a magnetic disk (e.g., a floppy disk, a hard disk), an optical disc (e.g., a CD, a DVD, a Blu-ray), a magneto-optical disk, magnetic tape, and semiconductor memory (e.g., a non-volatile memory card, flash memory, a solid-state drive, SRAM, DRAM, EPROM, EEPROM). Also, as used herein, a transitory computer-readable medium refers to a mere transitory, propagating signal per se, and a non-transitory computer-readable medium refers to any computer-readable medium that is not merely a transitory, propagating signal per se. The storage 2003, which may include either or both ROM and RAM, can store computer-readable data or computer-executable instructions. The image-enhancement device 2000 also includes a communication module 2003A, a preprocessing module 2003B, a locally-adaptive-mapping module 2003C, an inverse-tone-mapping module 2003D, an edge-enhancement module 2003E, a dynamic-range-adjustment module 2003F, an image-synthesis module 2003G, a post-processing module 2003H, and image storage 2003I. A module includes logic, computer-readable data, or computer-executable instructions. In the embodiment shown in FIG. 
20, the modules are implemented in software (e.g., Assembly, C, C++, C#, Java, BASIC, Perl, Visual Basic, Python). However, in some embodiments, the modules are implemented in hardware (e.g., customized circuitry) or, alternatively, a combination of software and hardware. When the modules are implemented, at least in part, in software, then the software can be stored in the storage 2003. Also, in some embodiments, the image-enhancement device 2000 includes additional or fewer modules, the modules are combined into fewer modules, or the modules are divided into more modules. The communication module 2003A includes instructions that cause the image-enhancement device 2000 to communicate with one or more other devices (e.g., the image-capturing device 2010, the image-repository device 2030), for example to obtain one or more images from the image-capturing device 2010, to obtain one or more images from the image-repository device 2030, to obtain a selection or indication of a region of interest (ROI), to generate one or more user interfaces, or to send one or more images to another device (e.g., the display device 2020). Also for example, some embodiments of the communication module 2003A include instructions that cause the image-enhancement device 2000 to perform at least some of the operations that are described in blocks B200 and B260 in FIG. 2, in blocks B700 and B755 in FIG. 7, in blocks B1000 and B1080 in FIG. 10, in blocks B1500 and B1560 in FIG. 15, and in blocks B1610 and B1670 in FIG. 16. Additionally, for example, some embodiments of the communication module 2003A include instructions that cause the image-enhancement device 2000 to generate the embodiment of a user interface that is illustrated in FIG. 5, the embodiment of a user interface that is illustrated in FIG. 6, the embodiment of a user interface that is illustrated in FIG. 8, or the embodiment of a user interface that is illustrated in FIG. 9. 
The preprocessing module 2003B includes instructions that cause the image-enhancement device 2000 to perform preprocessing operations (e.g., normalizing operations) on one or more images. For example, some embodiments of the preprocessing module 2003B include instructions that cause the image-enhancement device 2000 to perform at least some of the operations that are described in block B210 in FIG. 2, in block B1010 in FIG. 10, and in block B1620 in FIG. 16. The locally-adaptive-mapping module 2003C includes instructions that cause the image-enhancement device 2000 to perform at least one locally-adaptive mapping (e.g., a first locally-adaptive mapping, a second locally-adaptive mapping, a first ROI-based locally-adaptive mapping, a second ROI-based locally-adaptive mapping) on one or more images, for example as described in blocks B220-B230 in FIG. 2, in blocks B705-B740 in FIG. 7, in blocks B1020-B1030 in FIG. 10, in blocks B1505-B1540 in FIG. 15, or in block B1640 in FIG. 16. The inverse-tone-mapping module 2003D includes instructions that cause the image-enhancement device 2000 to perform an inverse tone mapping on one or more images, for example as described in block B1630 in FIG. 16. The edge-enhancement module 2003E includes instructions that cause the image-enhancement device 2000 to enhance the edges in one or more images, for example as described in block B240 in FIG. 2, in block B745 in FIG. 7, in block B1040 in FIG. 10, or in block B1545 in FIG. 15. The dynamic-range-adjustment module 2003F includes instructions that cause the image-enhancement device 2000 to generate one or more adjusted-dynamic-range images, for example by performing dynamic contrast stretching on one or more images. Also for example, some embodiments of the dynamic-range-adjustment module 2003F include instructions that cause the image-enhancement device 2000 to perform at least some of the operations that are described in block B250 in FIG. 2, in block B750 in FIG. 7, in block B1070 in FIG. 
10, in block B1555 in FIG. 15, and in block B1650 in FIG. 16. The image-synthesis module 2003G includes instructions that cause the image-enhancement device 2000 to generate an image based on two or more images, for example by synthesizing two or more images. Also for example, some embodiments of the image-synthesis module 2003G include instructions that cause the image-enhancement device 2000 to perform at least some of the operations that are described in block B1060 in FIG. 10, in block B1550 in FIG. 15, and in block B1650 in FIG. 16. The post-processing module 2003H includes instructions that cause the image-enhancement device 2000 to perform post processing on one or more images, for example as described in block B1660 in FIG. 16. The image storage 2003I is a repository that stores one or more images, for example the images that are obtained or generated by the operations in FIG. 2, FIG. 7, FIG. 10, FIG. 15, and FIG. 16. The image-capturing device 2010 includes one or more processors 2011, one or more I/O components 2012, storage 2013, a communication module 2013A, and an image-capturing assembly 2014. The image-capturing assembly 2014 includes one or more image sensors and may include one or more lenses and an aperture. The communication module 2013A includes instructions that, when executed, or circuits that, when activated, cause the image-capturing device 2010 to capture an image, receive a request for an image from a requesting device, retrieve a requested image from the storage 2013, or send a retrieved image to the requesting device (e.g., the image-enhancement device 2000). Also, in some embodiments, the image-enhancement device 2000 includes the image-capturing assembly 2014. The scope of the claims is not limited to the above-described embodiments and includes various modifications and equivalent arrangements. 
Also, as used herein, the conjunction “or” generally refers to an inclusive “or,” though “or” may refer to an exclusive “or” if expressly indicated or if the context indicates that the “or” must be an exclusive “or.” 16400972 canon virginia, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 20th, 2022 03:03PM Apr 20th, 2022 03:03PM Technology Technology Hardware & Equipment
nyse:caj Canon Apr 12th, 2022 12:00AM Mar 22nd, 2019 12:00AM https://www.uspto.gov?id=US11298001-20220412 Calibration tool for rotating endoscope Apparatus and methods for correcting distortion of a spectrally encoded endoscopy (“SEE”), more specifically, the subject disclosure provides a calibration tool calibrating a rotating spectrally encoded endoscope, which may be reused to recalibrate the endoscope throughout the lifecycle, and which may further act to protect the endoscope during packaging, shipping and handling. 11298001 1. A method for calibrating a rotating SEE, the method comprising: providing a calibration apparatus comprising: a body configured to encompass at least a portion of a SEE; a bottomed surface affixed to a distal end of the body; and a calibration chart configured on an inside wall portion of the apparatus, wherein the apparatus has an open end, opposite the bottomed surface, wherein the open end is configured to receive the at least a portion of the SEE, and the SEE is a rotating SEE, scanning the calibration chart with a SEE spectral line to obtain an image; determining a sign of a tangential shift of the spectral line based on a slope of at least one of the radial lines of the first image in a polar coordinate; computing a magnitude of the tangential shift based on at least one of the radial lines of the first image in either a polar coordinate or a Cartesian coordinate; determining a sign of a radial shift of the spectral line based on whether the slope has a turning point or not; computing a magnitude of the radial shift by measuring a location of the turning point if the radial shift is determined to be negative; scanning the calibration chart with the SEE spectral line to obtain a second image if the radial shift is determined to be positive; computing the magnitude of the radial shift based on the magnitude of the tangential shift and a radius of the circle; and applying the tangential shift and the radial shift for a corrected 
calibration. 2. The method of claim 1, wherein the calibration apparatus further comprises an attachment element configured to rigidly and removably attach the apparatus to the SEE. 3. The method of claim 1, wherein the calibration apparatus is configured to further extend onto a sheath of the SEE. 4. The method of claim 1, wherein the calibration apparatus is configured for repeated attachment and removal from the at least a portion of the SEE. 5. The method of claim 1, wherein the calibration chart of the calibration apparatus is positioned at a predetermined distance from the SEE. 6. The method of claim 1, wherein the bottomed surface of the calibration apparatus is configured to be ruptured by the SEE, allowing the SEE to protrude through the bottomed surface of the apparatus. 7. The method of claim 1, wherein the calibration apparatus further comprises a second calibration chart configured on an inside wall portion of the apparatus. 8. The method of claim 1, wherein the bottomed surface of the calibration apparatus is configured to be rotatable or pivotable, allowing for rearrangement of the bottomed surface with respect to the SEE. 9. The method of claim 1, wherein the calibration chart in the body has a diameter larger than a diameter of the SEE. 9 CROSS REFERENCE TO RELATED PATENT APPLICATIONS This application claims priority from U.S. Provisional Patent Application No. 62/650,155 filed on Mar. 29, 2018, in the United States Patent and Trademark Office, the disclosure of which is incorporated by reference herein in its entirety. FIELD OF THE DISCLOSURE The present disclosure relates generally to apparatus and methods for calibrating a spectrally encoded endoscope (“SEE”), and more particularly, to calibrating a rotating SEE. BACKGROUND OF THE DISCLOSURE Medical probes have the ability to provide images from inside a patient's body. 
Considering the potential harm to the human body caused by the insertion of a foreign object, it is preferable that the probe be as small as possible. Additionally, the ability to provide images within small pathways such as vessels, ducts, incisions, gaps and cavities dictates the use of a small probe. One particularly useful medical probe is the SEE, which is a miniature endoscope that can conduct high-definition imaging through a sub-mm diameter probe. In operation, light from a light guiding component found in the SEE probe (usually a single-mode fiber (“SMF”), for better resolution) is first coupled into a coreless fiber and then into a Gradient Index (“GRIN”) lens, and then the light is diffracted through a prism with a grating. The diffracted light is scanned across the sample to be analyzed. Light reflected by the sample is captured by a detection fiber and imaged for viewing. As an example of a calibration technique for an endoscope that scans an optical fiber and acquires an image, Japanese Patent Application Laid-Open Publication No. 2010-515947 discloses a scanning beam apparatus. The Japanese Patent above discloses a method for calibrating a scanning beam apparatus, the method including acquiring an image of a calibration pattern using the scanning beam apparatus, comparing the acquired image with a representation of the calibration pattern, and calibrating the scanning beam apparatus based on the comparison, in order to reduce distortion of the acquired image by enhancing the accuracy of estimation of the position of an illumination spot for each pixel point in a scan pattern. In recent years SEEs have advanced to allow for a greater field of vision for the endoscope, while retaining the diminutive size, leading to less invasive imaging and surgical procedures. As provided in WO publication No. 
2017/117203, the use of a rotating light dispersion fiber in the SEE allows for varying angles of incidence of light from the light dispersion fiber, which translates to a greater field of vision captured by the detection fiber. Specifically, the polychromatic light emanating from this rotating SEE probe is spectrally dispersed and projected in such a way that each color (wavelength) illuminates a different location on the tissue along the dispersive line. Reflected light from the tissue can be collected and decoded by a spectrometer to form a line of image, with each pixel of the line image corresponding to the specific wavelength of illumination. Spatial information in the other dimension perpendicular to the dispersive line is obtained by rotating the light dispersion fiber using a motor. For the forward viewing SEE imaging, spatial information in the other dimension perpendicular to the dispersive line is obtained by rotating the probe using a rotary motor such that the target is circularly scanned. Due to various environmental variables, manufacturing variables, imperfect electronics, the sensitivity of the scanning fiber apparatus, and/or other factors, calibration of a SEE is typically required for improved and/or consistent imaging. The added complications of having a rotating probe, as provided in WO publication No. 2017/117203, further call for a calibration apparatus and method intended for a rotating SEE. SUMMARY The subject disclosure provides apparatus and methods for correcting distortion of a rotating spectrally encoded endoscopy image. 
More specifically, the subject disclosure provides an apparatus for calibrating a spectrally encoded endoscope (“SEE”), the apparatus comprising a body configured to encompass at least a portion of a SEE, as well as a bottomed surface affixed to a distal end of the body; and a calibration chart configured on an inside wall portion of the apparatus, wherein the apparatus has an open end, opposite the bottomed surface, wherein the open end is configured to receive the at least a portion of the SEE, and the SEE is a rotating SEE. In various embodiments, the apparatus further comprises an attachment element configured to rigidly and removably attach the apparatus to the SEE. Furthermore, the body of the apparatus may be configured to further extend onto a sheath of the SEE. In another embodiment, the apparatus is configured for repeated attachment and removal from the at least a portion of the SEE. In further embodiments of the apparatus, the calibration chart is positioned at a predetermined distance from the SEE. In yet another embodiment, the bottomed surface of the apparatus is configured to be ruptured by the SEE, allowing the SEE to protrude through the bottomed surface of the apparatus. In a further embodiment, the inside wall portion may be an inside wall portion of the bottomed surface. Furthermore, the inside wall portion may be an inside wall portion of the body. In another embodiment of the subject apparatus, a second calibration chart configured on an inside wall portion of the apparatus is utilized. A further embodiment devises the bottomed surface to be rotatable or pivotable, allowing for rearrangement of the bottomed surface with respect to the SEE. In an additional embodiment, the apparatus further comprises an intermediate surface configured in the apparatus, wherein the intermediate surface contains at least one calibration chart. 
The subject innovation further details a method for calibrating a rotating SEE, the method comprising: providing a calibration apparatus comprising: a body configured to encompass at least a portion of a SEE; a bottomed surface affixed to a distal end of the body; and a calibration chart configured on an inside wall portion of the apparatus, wherein the apparatus has an open end, opposite the bottomed surface, wherein the open end is configured to receive the at least a portion of the SEE, and the SEE is a rotating SEE, the method including: scanning the calibration chart with an SEE spectral line to obtain an image; determining a sign of a tangential shift of the spectral line based on a slope of at least one of the radial lines of the first image in a polar coordinate; computing a magnitude of the tangential shift based on at least one of the radial lines of the first image in either a polar coordinate or a Cartesian coordinate; determining a sign of a radial shift of the spectral line based on whether the slope has a turning point or not; computing a magnitude of the radial shift by measuring a location of the turning point if the radial shift is determined to be negative; scanning the calibration chart with the SEE spectral line to obtain a second image if the radial shift is determined to be positive; computing the magnitude of the radial shift based on the magnitude of the tangential shift and a radius of the circle; and applying the tangential shift and the radial shift for a corrected calibration. In various embodiments, the subject method further provides that the calibration apparatus further comprises an attachment element configured to rigidly and removably attach the apparatus to the SEE. In yet another embodiment, the subject method teaches that the calibration apparatus is configured to further extend onto a sheath of the SEE. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 
1(a) provides an illustration of an exemplary SEE probe including a calibration tool, according to one or more embodiments of the subject disclosure. FIG. 1(b) provides an image of an exemplary SEE probe including a calibration tool being removed, according to one or more embodiments of the subject disclosure. FIG. 2 is a flow chart illustrating a method for calibrating a SEE probe incorporating a calibration tool, according to one or more embodiments of the subject disclosure. FIG. 3(a) depicts a SEE probe including an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIG. 3(b) portrays a SEE probe including an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIG. 3(c) illustrates a SEE probe including an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIG. 3(d) portrays a SEE probe including an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIGS. 4A-4D provide various SEE probes fitted with exemplary calibration tools, according to one or more embodiments of the subject disclosure. FIGS. 5(a) and 5(b) depict a SEE probe including an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIGS. 6(a)-6(c) provide an exemplary calibration tool, capable of removal and reinsertion, according to one or more embodiments of the subject disclosure. FIGS. 7(a) and 7(c) illustrate an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIGS. 8(a) through 8(h) portray various calibration charts, which may be utilized in an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIGS. 9(a) and 9(b) provide various SEE probes fitted with an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIGS. 
10(a)-10(c) provide an exemplary calibration tool, utilizing multiple stages of calibration, according to one or more embodiments of the subject disclosure. FIG. 11 illustrates a SEE probe fitted with an exemplary calibration tool having multiple calibration wells, according to one or more embodiments of the subject disclosure. FIG. 12 portrays a SEE probe fitted with an exemplary calibration tool having multiple calibration wells, according to one or more embodiments of the subject disclosure. FIGS. 13A and 13B provide a SEE probe fitted with an exemplary calibration tool having multiple calibration wells, with 13A depicting a side view and 13B providing a top view, according to one or more embodiments of the subject disclosure. FIGS. 14(a) and 14(b) provide exemplary calibration tools, according to one or more embodiments of the subject disclosure. In addition, FIG. 15 is a flow chart providing a method for calibrating a SEE probe, according to one or more embodiments of the subject disclosure. DETAILED DESCRIPTION Further objects, features and advantages of the present disclosure will become apparent from the following detailed description when taken in conjunction with the accompanying figures showing illustrative embodiments of the present disclosure. FIGS. 1(a) and 1(b) provide illustrations of an exemplary SEE probe including a calibration tool, according to one or more embodiments of the subject disclosure. In FIG. 1(a), the calibration tool 10 is configured to resemble a cylindrical cap, to be fitted to the SEE 12, while FIG. 1(b) depicts the calibration tool 10 being removed from the SEE 12, preferably after calibration has been completed. The calibration tool 10 comprises a distal end 14, which has a bottomed cylindrical surface 16, a hollow cylindrical body 18 which is configured perpendicular to the bottomed cylindrical surface 16, and a proximal end 20 which is gapped for accepting the SEE 12.
The bottomed cylindrical surface 16 is configured to accept a calibration chart 22, which is used for calibrating the SEE 12 (as detailed below). The calibration tool 10 is configured such that the inner calibration chart 22 is positioned at a predetermined distance with respect to the tip 24 of the SEE 12. The calibration is performed by irradiating light onto the calibration chart 22 and processing the image data of the inner surface of the cap. In various embodiments, an illumination source is configured to provide the irradiating light, with a detection fiber being incorporated to capture the reflected light and send the information to a spectrometer for processing the image data. In various embodiments, the calibration tool 10 may be removably affixed to the SEE 12 by pressure fitment, snap fitment, rotational coupling, clamping, buttons, threading, screws, adhesive, tape, or any other appropriate fastening means known in the art. In addition, the calibration chart 22 may be configured to be waterproof and/or solvent-proof, allowing for cleaning and exposure of the calibration tool 10 to water and other elements. The SEE 12 may be prepackaged with a calibration tool 10, for ease of operation and calibration. In this instance, the user may perform a re-calibration of the SEE 12 with the calibration tool 10 pre-fitted to the SEE 12. The user initiates the automatic calibration through the software which operates the SEE 12 within the calibration tool 10 (discussed in detail below). Upon completion of calibration, the user removes the calibration tool 10 and begins intended use of the SEE 12. When the user is done operating the SEE 12, the user properly disposes of the SEE 12 and calibration tool 10. Alternatively, the SEE 12 may be recapped with the calibration tool 10 to enclose the SEE 12, which has been exposed to tissue or bodily fluids.
Furthermore, and intended for non-disposable SEEs 12, the SEE 12 and calibration tool 10 may be re-sterilized individually, then the SEE 12 is re-capped with the calibration tool 10 and stored for future use. FIG. 2 is a flow chart illustrating a method for calibrating a SEE probe incorporating a calibration tool, according to one or more embodiments of the subject disclosure. As provided, the SEE 12 is assembled with the calibration tool 10 fixed to the SEE 12 by the manufacturer, and a manufacturer-initiated calibration is performed. The SEE 12 and accompanying calibration tool 10 are sterile packed for shipment to an end user. The end user removes the combined SEE 12 and calibration tool 10 from the sterilized packaging, and recalibrates the SEE 12, which may have been unsettled during shipping and/or handling. After recalibration, the end user removes the calibration tool 10, and operates the SEE 12 as intended. FIG. 3(a) depicts a SEE probe including an exemplary calibration tool, according to one or more embodiments of the subject disclosure. In this embodiment, the SEE 12 may be covered by a rigid calibration tool 10, which would enhance protection of the SEE 12 tip 24. The calibration tool 10 may be attached to the SEE 12, at or near the proximal end of the SEE 12, as illustrated by the attachment element 26. The attachment element 26 or mechanical features of the calibration tool 10 may align the SEE 12 axially centered to the calibration chart 22, thus positioning the SEE 12 and calibration tool 10 for accurate and repeatable calibration. The rigid calibration tool 10 may also be useful in protecting the tip 24 in packaging, shipping, storage and/or handling of the SEE 12. In addition, the calibration tool 10 may be utilized to prevent dust, particles, or physical contamination from accumulating on the distal imaging lens of the SEE 12, or window of the SEE 12.
If the SEE 12 scope is flexible, the calibration tool 10 can cover a length of the flexible sheath 28 to protect from bending or kinking as well (see FIG. 3(b)). FIGS. 3(a) through 3(d) portray a SEE probe including an exemplary calibration tool, according to one or more embodiments of the subject disclosure. FIGS. 3(a) and 3(c) incorporate a calibration tool 10 configured to cover a shorter portion of the SEE 12, with FIG. 3(c) employing the attachment element 26 designed to better protect the tip 24 of the SEE 12. FIGS. 3(b) and 3(d) detail a calibration tool 10 configured to cover a longer portion of the SEE 12, which may include the sheath 28 portion of the SEE 12 as well. The sheath 28 may be flexible or rigid, with the calibration tool 10 designed to offer greater protection to the sheath 28 in addition to the SEE 12 tip 24. FIG. 3(d) also employs the attachment element 26 designed to better protect the tip 24 and sheath 28 of the SEE 12. FIGS. 4A-4D provide various SEE probes fitted with exemplary calibration tools, according to one or more embodiments of the subject disclosure. In various instances, the SEE 12 may be utilized as a sub-system of a larger medical device. As provided in FIGS. 4A-4D, the SEE 12 tips 24 of these sub-systems may be fitted by the subject calibration tool 10, which would be fitted to the distal end of the SEE 12. FIGS. 5(a) and 5(b) depict a SEE probe including an exemplary calibration tool having a break-through cylindrical surface 16, according to one or more embodiments of the subject disclosure. In this embodiment, the subject calibration tool 10 incorporates a break-through bottomed cylindrical surface 16, wherein the calibration chart 22 is also broken through once the SEE 12 has been calibrated. As provided in FIG. 5(a), the calibration tool 10 is fitted to the SEE 12, and calibration is conducted.
Once calibration is completed, the calibration tool 10 is forcibly urged parallel to and towards the SEE 12, thus rupturing the bottomed cylindrical surface 16, and exposing the SEE 12 tip 24 for use, as provided in FIG. 5(b). The calibration tool 10 in this embodiment is intended for one-time calibration use only. FIGS. 6(a)-6(c) provide an exemplary calibration tool, capable of removal and reassertion, according to one or more embodiments of the subject disclosure. In the embodiment provided in FIGS. 6(a) through 6(c), the calibration tool 10 is intended to be used repeatedly, as the calibration tool 10 may be removed and reasserted on the SEE 12. In this embodiment, the calibration tool 10 may be used as a safety device to cover the SEE 12 tip 24, after the tip 24 has been exposed to biological fluids and/or biological matter. In various other embodiments, the calibration tool 10 and SEE 12 may both be sterilized, and the calibration tool 10 may be refitted on the SEE 12 for safe storage and future use. In an embodiment, the SEE may be designed for single use, such that after exposure to biological fluids, the SEE may be discarded. In such case, the SEE may also be recapped, thus ensuring the potentially biohazardous material on the SEE is isolated for proper handling and disposal. FIGS. 7(a) and 7(b) illustrate an exemplary calibration tool, according to one or more embodiments of the subject disclosure. In this embodiment, the calibration tool 10 is fitted with a bottomed cylindrical surface 16 capable of being rotated and/or pivoted. The rotating and/or pivoting is configured to allow the SEE 12 tip 24 to advance beyond and through the calibration tool 10, as seen in FIG. 7(b), without damaging the bottomed cylindrical surface 16 of the calibration tool 10, as well as the calibration chart 22.
In various embodiments, the rotating and/or pivoting bottomed cylindrical surface 16 may consist of a shutter-type element, one or more pivot attachments, or derivatives thereof. As seen in FIGS. 7(a) through 7(c), calibration is accomplished with the calibration tool 10 configured on the SEE 12, wherein upon completion of calibration, the calibration tool 10 may be urged parallel to and towards the SEE 12, enacting the rotating and/or pivoting of the bottomed cylindrical surface 16, and exposing the SEE 12 tip 24 for use. Finally, FIG. 7(c) depicts how the calibration tool 10 may be returned to a position of protecting the tip 24, by urging the calibration tool 10 parallel to and away from the SEE 12, thus enacting the rotating and/or pivoting of the bottomed cylindrical surface 16 to conceal the SEE 12 and tip 24 from the environment. In various embodiments, the rotating and/or pivoting bottomed cylindrical surface 16 may be configured for one-time use, wherein the bottomed cylindrical surface 16 is locked once the calibration tool 10 is urged parallel to and away from the SEE 12, thus enacting the rotating and/or pivoting of the bottomed cylindrical surface 16 to a concealed position. In another embodiment, the rotating and/or pivoting bottomed cylindrical surface 16 may be configured for repeated use, allowing an end user to repeatedly expose and conceal the tip 24 by enacting the rotating and/or pivoting of the bottomed cylindrical surface 16. In addition, for repeated use of the pivoting bottomed cylindrical surface 16, the SEE 12 and calibration tool 10 may be sterilized between uses to ensure consistency and safety of the calibration tool 10. FIGS. 8(a)-8(h) portray various calibration charts, which may be utilized in an exemplary calibration tool, according to one or more embodiments of the subject disclosure. The various examples of calibration charts 22 provided in FIGS.
8(a) through 8(h) may be used independently or in combination in the calibration tool 10. As stated prior, calibration can be accomplished by scanning a calibration chart 22 found at the distal end 14 of the calibration tool 10. The chart 22 can combine various charts into a single chart to allow for various calibrations. Multiple calibration tools 10 with different charts 22 can be used to perform multiple individual calibrations of a single SEE 12. Forward-view SEEs visualize calibration charts 22 at the distal end 14 of the calibration tool 10, while side-view SEEs will visualize calibration charts 22 on the cylindrical body 18 of the calibration tool 10. Various calibration charts 22 may include a color wheel, gradients, a stepped chart, variations thereof, combinations of charts, and alternatives thereof. By way of example, FIG. 9(a) illustrates a forward-view SEE 12 visualizing calibration charts 22 at the distal end 14 of the calibration tool 10. FIG. 9(b) denotes a side-view SEE 12 which is configured to visualize calibration charts 22 on the cylindrical body 18 of the calibration tool 10. Alternatively, combinations of side-view and forward-view SEEs may merit a calibration tool 10 having both side-view and forward-view calibration charts. FIGS. 10(a)-10(c) provide an exemplary calibration tool, utilizing multiple stages of calibration, according to one or more embodiments of the subject disclosure. As provided in FIGS. 10(a) through 10(c), a calibration tool 10 may incorporate one or more intermediate surface(s) 30 configured in the cylindrical calibration tool 10, wherein each intermediate surface 30 is situated approximately perpendicular to the cylindrical body 18 of the calibration tool 10, with each intermediate surface 30 having one or more calibration chart(s) 22 for calibrating the SEE 12.
Each intermediate surface 30 may be configured to allow the SEE 12 to pierce through the intermediate surface 30 by forcibly urging the calibration tool 10 parallel to and towards the SEE 12, thus advancing the SEE 12 to the following intermediate surface 30 and/or bottomed cylindrical surface 16. In one embodiment, the one or more intermediate surface(s) 30 may be capable of being rotated and/or pivoted. The rotating and/or pivoting is configured to allow the SEE 12 tip 24 to advance beyond and through the calibration tool 10 without damaging the intermediate surface 30. In various embodiments, the rotating and/or pivoting intermediate surface 30 may consist of a shutter-type element, one or more pivot attachments, or derivatives thereof. As seen in FIGS. 10(a) through 10(c), calibration is accomplished with the calibration tool 10 configured on the SEE 12, wherein upon completion of calibration, the calibration tool 10 may be urged parallel to and towards the SEE 12, enacting the rotating and/or pivoting of the intermediate surface 30, and exposing the SEE 12 tip 24 to a secondary and/or tertiary intermediate surface 30 for additional calibration. Upon completion of all stages of calibration conducted by each intermediate surface 30 and the bottomed cylindrical surface 16, the SEE 12 is properly calibrated for use by the end user. Each intermediate surface 30 may have one or more calibration chart(s) 22 for calibrating the SEE 12. In yet another embodiment of the calibration tool 10, illustrated in FIG. 11, a SEE 12 probe employs an exemplary calibration tool 10 having multiple calibration wells, according to one or more embodiments of the subject disclosure. Each well 32 contains one or more calibration chart(s) 22 for calibrating the SEE 12. Alternatively, FIGS. 12 and 13 depict a calibration tool 10, wherein a single well 32 is employed containing a slider 34 configured with multiple slots 36, each having a calibration chart 22.
After calibration with a first calibration chart 22 in a first slot 36(a) is performed, the slider 34 is positioned for calibration of additional calibration charts in additional slots 36(b), 36(c) and 36(d). It is contemplated that additional calibration charts may be incorporated into each slot 36, as well as the use of additional slots 36 in the slider 34. Although longitudinal (FIG. 12) and rotating (FIG. 13) slider 34 configurations have been illustrated, it is contemplated herein that any appropriate and alternative configuration for a slider is within the scope of the present disclosure. As provided in FIGS. 14(a) and 14(b), the calibration tool 10 may incorporate a visual indicator 38 for identifying the imaging orientation of the calibration tool 10, and of the associated SEE 12 when attached to the calibration tool 10. Through label markings or design features, the calibration tool 10 is used to designate SEE 12 view orientations such as ‘top’ or ‘upright’ so that the user can easily identify how to hold and manipulate the SEE 12. Calibration for SEE Below are various methods for calibration and/or correction of distortion for a SEE, which would be used in conjunction with the subject calibration tool disclosed herein. A first reference pattern having a plurality of radial lines is scanned with an SEE spectral line to obtain a first image. A sign of a tangential shift of the spectral line is determined based on a slope of at least one of the radial lines of the first image in a polar coordinate. A magnitude of the tangential shift is computed based on at least one of the radial lines of the first image in either a polar coordinate or a Cartesian coordinate. A sign of a radial shift of the spectral line is determined based on whether the slope has a turning point or not. A magnitude of the radial shift is determined by measuring a location of the turning point if the radial shift is determined to be negative.
A second reference pattern comprising at least a circle is scanned to obtain a second image if the radial shift is determined to be positive. The magnitude of the radial shift is computed based on the magnitude of the tangential shift and a radius of the circle. The tangential shift and the radial shift are then applied for correcting distortion. By way of example, the step of computing the magnitude of the tangential shift comprises determining a shift of the radial line of the first image from an original position in the Cartesian coordinate. Alternatively, the step of computing the magnitude of the tangential shift may comprise selecting at least three radial lines that are equally spaced from each other with an angle and each intersecting with the spectral line at an intersection point, and computing the magnitude of the tangential shift based on the angle, a first distance between the intersection points of a first and a second of the at least three radial lines, and a second distance between the intersection points of the second and a third of the at least three radial lines. The step of computing a magnitude of the radial shift may further include measuring the location of the turning point by determining where a second derivative of the radial line is zero if the radial shift is determined to be negative. The method may provide that, when the sign of the radial shift is positive, the magnitude of the radial shift is computed by the relation: Rr = √(R0² − Rt²) − d, where Rr is the radial shift, Rt is the tangential shift, R0 is the radius of the circle, and d is the distance between the circle and a target radius. When the sign of the radial shift is negative, the magnitude of the radial shift is computed by the relation: Rr = d − √(R0² − Rt²), where Rr is the radial shift, Rt is the tangential shift, R0 is the radius of the circle, and d is the distance between the circle and a target radius.
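As a numerical illustration of the single-circle relation above, the following sketch evaluates Rr for either sign of the radial shift. The function name and the example values are chosen for illustration only and are not part of the patent disclosure.

```python
import math

def radial_shift(R0, Rt, d, sign_positive=True):
    """Radial shift Rr from the circle radius R0, the tangential shift Rt,
    and the distance d between the circle and the target radius.
    Implements Rr = sqrt(R0^2 - Rt^2) - d (positive sign) or
    Rr = d - sqrt(R0^2 - Rt^2) (negative sign)."""
    chord = math.sqrt(R0 ** 2 - Rt ** 2)
    return chord - d if sign_positive else d - chord

# With R0 = 5 and Rt = 3, the square-root term is 4, so:
print(radial_shift(5.0, 3.0, 2.0))                       # 2.0
print(radial_shift(5.0, 3.0, 2.0, sign_positive=False))  # -2.0
```

The square-root term is the half-chord the spectral line cuts across the circle when offset tangentially by Rt, which is why the two sign cases differ only in the order of the subtraction.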
The step of applying the tangential shift and the radial shift for correcting distortion further comprises applying the tangential shift and the radial shift to determine the actual location (x′, y′) of the radial lines, represented by: x′ = ρ cos θ − Rt sin θ + Rr cos θ and y′ = ρ sin θ + Rt cos θ + Rr sin θ, where ρ is the pixel index along the SEE spectral line and θ is the rotation angle of the SEE spectral line. An additional method for correcting distortion of a spectrally encoded endoscopy (SEE) image includes the following steps. A first reference pattern comprising a plurality of radial lines is scanned with an SEE spectral line to obtain a first image. A sign of a tangential shift of the spectral line is determined based on a slope of at least one of the radial lines of the first image in a polar coordinate. A second reference pattern comprising at least two concentric circles is scanned with the SEE spectral line to obtain a second image, the two concentric circles having a first radius and a second radius, respectively. The magnitude of the tangential shift and a magnitude of a radial shift of the spectral line are computed by measuring locations of the spectral line corresponding to the two concentric circles in the polar coordinate. The tangential shift and the radial shift are applied for correcting distortion. The step of computing the magnitude of the tangential shift may comprise determining a shift of the radial line of the first image from an original position in the Cartesian coordinate. The radial shift may be calculated based on the relationship: Rr = (R2² − R1²) / (2(d2 − d1)) − (d1 + d2) / 2, and the tangential shift is calculated based on the relationship: Rt² = (R2² + R1²) / 2 − (R2² − R1²)² / (4(d2 − d1)²) − (d2 − d1)² / 4, where R1 and R2 are the first and second radii and d1 and d2 are the measured locations of the spectral line corresponding to the two concentric circles.
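The two-circle relations above can be checked numerically. In the sketch below, the function name is illustrative, and the measurement model d = √(R² − Rt²) − Rr is an assumption used only to construct synthetic test data; it is not quoted from the patent text.

```python
import math

def shifts_from_two_circles(R1, d1, R2, d2):
    """Recover (Rr, Rt) from two concentric circles of radii R1, R2
    measured at spectral-line locations d1, d2, using the closed-form
    relations for Rr and Rt^2 quoted above."""
    Rr = (R2 ** 2 - R1 ** 2) / (2.0 * (d2 - d1)) - (d1 + d2) / 2.0
    Rt_sq = ((R2 ** 2 + R1 ** 2) / 2.0
             - (R2 ** 2 - R1 ** 2) ** 2 / (4.0 * (d2 - d1) ** 2)
             - (d2 - d1) ** 2 / 4.0)
    return Rr, math.sqrt(Rt_sq)

# Synthetic data: true shifts Rr = 1, Rt = 2, circles of radius 5 and 10.
Rr_true, Rt_true = 1.0, 2.0
d1 = math.sqrt(5.0 ** 2 - Rt_true ** 2) - Rr_true
d2 = math.sqrt(10.0 ** 2 - Rt_true ** 2) - Rr_true
Rr, Rt = shifts_from_two_circles(5.0, d1, 10.0, d2)
print(round(Rr, 6), round(Rt, 6))  # 1.0 2.0
```

Substituting the model into the two relations recovers the true shifts exactly (up to floating-point error), which is consistent with the closed forms being derived from the pair of constraints (dᵢ + Rr)² + Rt² = Rᵢ².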
The step of applying the tangential shift and the radial shift for correcting distortion further comprises applying the tangential shift and the radial shift to determine the actual location (x′, y′) of the radial lines, represented by: x′ = ρ cos θ − Rt sin θ + Rr cos θ and y′ = ρ sin θ + Rt cos θ + Rr sin θ, where ρ is the pixel index along the SEE spectral line and θ is the rotation angle of the SEE spectral line. In another embodiment, a first reference pattern comprising a plurality of radial lines is scanned with an SEE spectral line to obtain a first image. A sign of a tangential shift of the spectral line is determined based on a slope of at least one of the radial lines of the first image in a polar coordinate. A magnitude of the tangential shift is computed based on a shift of at least one of the plurality of the radial lines in a Cartesian coordinate, or based on at least three angularly equally spaced radial lines included in the plurality of radial lines scanned by the SEE spectral line. A second reference pattern comprising at least two concentric circles is scanned with the SEE spectral line to obtain a second image, the two concentric circles having a first radius and a second radius, respectively. A ratio of the second radius to the first radius is provided. A radial shift of the spectral lines is computed based on the tangential shift and the ratio; and the tangential shift and the radial shift are applied for correcting distortion. The step of computing the magnitude of the tangential shift may comprise determining a shift of the radial line of the first image from an original position in the Cartesian coordinate.
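The coordinate correction x′ = ρ cos θ − Rt sin θ + Rr cos θ, y′ = ρ sin θ + Rt cos θ + Rr sin θ, used by each of the methods above, can be sketched as follows (the function name is illustrative, not from the patent):

```python
import math

def corrected_point(rho, theta, Rt, Rr):
    """Apply the tangential shift Rt and radial shift Rr to a point at
    pixel index rho and rotation angle theta (radians) on the spectral
    line, per the x', y' relations quoted above."""
    x = rho * math.cos(theta) - Rt * math.sin(theta) + Rr * math.cos(theta)
    y = rho * math.sin(theta) + Rt * math.cos(theta) + Rr * math.sin(theta)
    return x, y

# At theta = 0 the point moves by Rr along x and by Rt along y:
print(corrected_point(10.0, 0.0, Rt=2.0, Rr=1.0))  # (11.0, 2.0)
```

Grouping terms as (ρ + Rr)(cos θ, sin θ) + Rt(−sin θ, cos θ) shows the correction is simply a radial offset Rr along the line plus a tangential offset Rt perpendicular to it.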
The step of computing the magnitude of the tangential shift may also comprise selecting at least three radial lines that are equally spaced from each other with an angle and each intersecting with the spectral line at an intersection point, and computing the magnitude of the tangential shift based on the angle, a first distance between the intersection points of a first and a second of the at least three radial lines, and a second distance between the intersection points of the second and a third of the at least three radial lines. The radial shift is calculated based on the relationship: Rr = (−(d1k² − d2) ± √(k²(d2 − d1)² − Rt²(k² − 1)²)) / (k² − 1), where k is the ratio of the second radius to the first radius. The step of applying the tangential shift and the radial shift for correcting distortion further comprises applying the tangential shift and the radial shift to determine the actual location (x′, y′) of the radial lines, represented by: x′ = ρ cos θ − Rt sin θ + Rr cos θ and y′ = ρ sin θ + Rt cos θ + Rr sin θ, where ρ is the pixel index along the SEE spectral line and θ is the rotation angle of the SEE spectral line. Another method for correcting distortion of a spectrally encoded endoscopy (SEE) image includes the following steps. A first reference pattern comprising a plurality of radial lines is scanned with an SEE spectral line to obtain a first image. A sign of a tangential shift of the spectral line is determined based on a slope of at least one of the radial lines of the first image in a polar coordinate. A magnitude of the tangential shift is determined based on a shift of at least one of the plurality of the radial lines in a Cartesian coordinate, or based on at least three angularly equally spaced radial lines included in the plurality of radial lines scanned by the SEE spectral line.
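The ratio-based relation above, being quadratic in Rr, yields two candidate roots. A sketch follows; the function name is illustrative, and the measurement model d = √(R² − Rt²) − Rr is assumed only to construct the synthetic check, not quoted from the patent.

```python
import math

def radial_shift_candidates(d1, d2, k, Rt):
    """Two candidate radial shifts from measured locations d1, d2 of two
    concentric circles with radius ratio k = R2/R1, given the tangential
    shift Rt, per the quadratic relation quoted above."""
    disc = math.sqrt(k ** 2 * (d2 - d1) ** 2 - Rt ** 2 * (k ** 2 - 1) ** 2)
    num = -(d1 * k ** 2 - d2)
    return (num + disc) / (k ** 2 - 1), (num - disc) / (k ** 2 - 1)

# Synthetic check with true Rr = 1, Rt = 2, radii 5 and 10 (k = 2):
Rt_true, Rr_true = 2.0, 1.0
d1 = math.sqrt(5.0 ** 2 - Rt_true ** 2) - Rr_true
d2 = math.sqrt(10.0 ** 2 - Rt_true ** 2) - Rr_true
roots = radial_shift_candidates(d1, d2, 2.0, Rt_true)
print(round(roots[0], 6))  # 1.0 (the root matching the true shift here)
```

This two-root ambiguity matches the selection step described in the last method: one candidate is tried, and the other is used if the distortion is not corrected by the first.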
A second reference pattern comprising at least two concentric circles is scanned with the SEE spectral line to obtain a second image, the two concentric circles having a first radius and a second radius, respectively. A ratio of the second radius to the first radius is provided. Two possible values of the magnitude of a radial shift of the spectral lines are computed based on the tangential shift and the ratio. One of the possible values is selected to calculate pixel coordinates of the radial lines imaged by the spectral line. The tangential shift and the radial shift are applied for correcting distortion. The other of the possible values of the magnitude of the radial shift is selected if the distortion is not corrected by the first possible value. 16362351 canon u.s.a., inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 12th, 2022 12:27PM Apr 12th, 2022 12:27PM Technology Technology Hardware & Equipment
nyse:caj Canon Apr 5th, 2022 12:00AM Dec 24th, 2020 12:00AM https://www.uspto.gov?id=US11294318-20220405 Sheet processing apparatus and image forming system A sheet processing apparatus performs folding processing so that one end of a sheet exists inside the folded sheet. The sheet processing apparatus includes a transport path to guide a transported sheet, a rotating body pair capable of transporting the sheet in a first direction to perform folding processing, and in a second direction for switching back the sheet subjected to the folding processing, a folding blade that pushes the sheet to a nip portion of the rotating body pair, a press member that presses the sheet folded by the rotating body pair in the second direction, and a shift section that shifts the press member for pressing the sheet. In switching back the sheet, a control section controls transport of the sheet so that the one end of the sheet is halted within a region between the press member and a guide face of the transport path. 11294318 1. 
A sheet processing apparatus for performing folding processing in a plurality of portions of a sheet and performing the folding processing so that one end of the sheet exists inside the sheet folded, comprising: a transport path including a guide face to guide a sheet transported in a predetermined transport direction; a rotating body pair adapted to be able to transport the sheet in a first direction for nipping the sheet transported to the transport path by a nip portion to rotate, and thereby drawing the sheet to perform folding processing, and in a second direction for performing switchback on the sheet subjected to the folding processing in a direction opposite to the direction for drawing; a folding blade adapted to push the sheet transported to the transport path to the nip portion of the rotating body pair; a press member adapted to press the sheet, which is subjected to the folding processing by the rotating body pair and is transported in the second direction, to one direction side that is one of the transport direction and a direction opposite to the transport direction; a shift section adapted to shift the press member in a direction for pressing the sheet; and a control section adapted to control the rotating body pair and the shift section, wherein in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the rotating body pair and the shift section so that the one end of the sheet subjected to the folding processing by the rotating body pair is pressed by the press member, within a region between a shift locus of the press member and the guide face of the transport path. 2. 
A sheet processing apparatus for performing folding processing in a plurality of portions of a sheet and performing the folding processing so that one end of the sheet exists inside the sheet folded, comprising: a transport path including a guide face to guide a sheet transported in a predetermined transport direction; a transport section adapted to be able to transport the sheet in a first direction for drawing the sheet transported to the transport path, and in a second direction for performing switchback on the drawn sheet in a direction opposite to the direction for drawing; a rotating body pair adapted to nip the sheet transported to the transport path by a nip portion to rotate, and thereby perform folding processing on the sheet, each including a first circumferential surface with a radius from a rotating shaft of the rotating body to a rotating body circumferential surface, and a second circumferential surface with a radius from the rotating shaft smaller than the radius of the first circumferential surface; a folding blade adapted to push the sheet transported to the transport path to the nip portion of the rotating body pair; a press member adapted to press the sheet, which is subjected to the folding processing by the rotating body pair and is transported in the second direction, to one direction side that is one of the transport direction and a direction opposite to the transport direction; a shift section adapted to shift the press member in a direction for pressing the sheet; and a control section adapted to control the transport section and the shift section, wherein in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the transport section and the shift section so that the one end of the sheet subjected to the folding processing by the rotating body pair is pressed by the press member, within a region between a shift locus of the press member and the guide face of the 
transport path. 3. The sheet processing apparatus according to claim 1, wherein the press member is provided rotatably around a rotation support as a center, and in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the rotating body pair and the shift section so that the one end of the sheet is pressed by the press member, within the region between a rotation locus of the press member and the guide face of the transport path. 4. The sheet processing apparatus according to claim 2, wherein the press member is provided rotatably around a rotation support as a center, and in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the transport section and the shift section so that the one end of the sheet is pressed by the press member, within the region between a rotation locus of the press member and the guide face of the transport path. 5. The sheet processing apparatus according to claim 3, wherein in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the rotating body pair to halt the one end of the sheet subjected to the folding processing by the rotating body pair, within the region between the rotation locus of the press member and the guide face of the transport path, and controls the shift section to rotate the press member. 6. The sheet processing apparatus according to claim 4, wherein in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the transport section to halt the one end of the sheet subjected to the folding processing by the rotating body pair, within the region between the rotation locus of the press member and the guide face of the transport path, and controls the shift section to rotate the press member. 7. 
The sheet processing apparatus according to claim 3, wherein the press member is able to shift to a first guide position for guiding the sheet along the guide face of the transport path, in receiving the sheet in the transport path, and to a second guide position for guiding the sheet, in transporting the sheet subjected to the folding processing by the rotating body pair to the one direction side, and the control section controls the shift section so as to rotate the press member from the second guide position to the first guide position at a velocity faster than in a shift from the first guide position to the second guide position. 8. A sheet processing apparatus for performing folding processing in a plurality of portions of a sheet and performing the folding processing so that one end of the sheet exists inside the sheet folded, comprising: a transport path including a guide face to guide a sheet transported in a predetermined transport direction; a rotating body pair adapted to be able to transport the sheet in a first direction for nipping the sheet transported to the transport path by a nip portion to rotate, and thereby drawing the sheet to perform folding processing, and in a second direction for performing switchback on the sheet subjected to the folding processing in a direction opposite to the direction for drawing; a folding blade adapted to push the sheet transported to the transport path to the nip portion of the rotating body pair; a direction change member adapted to change a direction of the sheet, which is subjected to the folding processing by the rotating body pair and is transported in the second direction, to one direction side that is one of the transport direction and a direction opposite to the transport direction; a shift section adapted to shift the direction change member; and a control section adapted to control the rotating body pair and the shift section, wherein in performing the switchback on the sheet subjected to the folding 
processing by the rotating body pair, the control section controls the rotating body pair and the shift section so that the direction of the one end of the sheet subjected to the folding processing by the rotating body pair is changed by the direction change member, within a region between a shift locus of the direction change member and the guide face of the transport path. 9. A sheet processing apparatus for performing folding processing in a plurality of portions of a sheet and performing the folding processing so that one end of the sheet exists inside the sheet folded, comprising: a transport path including a guide face to guide a sheet transported in a predetermined transport direction; a transport section adapted to be able to transport the sheet in a first direction for drawing the sheet transported to the transport path, and in a second direction for performing switchback on the drawn sheet in a direction opposite to the direction for drawing; a rotating body pair adapted to nip the sheet transported to the transport path by a nip portion to rotate, and thereby perform folding processing on the sheet, each including a first circumferential surface with a radius from a rotating shaft of the rotating body to a rotating body circumferential surface, and a second circumferential surface with a radius from the rotating shaft smaller than the radius of the first circumferential surface; a folding blade adapted to push the sheet transported to the transport path to the nip portion of the rotating body pair; a direction change member adapted to change a direction of the sheet, which is subjected to the folding processing by the rotating body pair and is transported in the second direction, to one direction side that is one of the transport direction and a direction opposite to the transport direction; a shift section adapted to shift the direction change member; and a control section adapted to control the transport section and the shift section, wherein in 
performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the transport section and the shift section so that the direction of the one end of the sheet subjected to the folding processing by the rotating body pair is changed by the direction change member, within a region between a shift locus of the direction change member and the guide face of the transport path. 10. The sheet processing apparatus according to claim 8, wherein the direction change member is provided rotatably around a rotation support as a center, and in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the rotating body pair and the shift section so that the direction of the one end of the sheet is changed by the direction change member, within the region between a rotation locus of the direction change member and the guide face of the transport path. 11. The sheet processing apparatus according to claim 9, wherein the direction change member is provided rotatably around a rotation support as a center, and in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the transport section and the shift section so that the direction of the one end of the sheet is changed by the direction change member, within the region between a rotation locus of the direction change member and the guide face of the transport path. 12. 
The sheet processing apparatus according to claim 10, wherein in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the rotating body pair to halt the one end of the sheet subjected to the folding processing by the rotating body pair, within the region between the rotation locus of the direction change member and the guide face of the transport path, and controls the shift section to rotate the direction change member. 13. The sheet processing apparatus according to claim 11, wherein in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the transport section to halt the one end of the sheet subjected to the folding processing by the rotating body pair, within the region between the rotation locus of the direction change member and the guide face of the transport path, and controls the shift section to rotate the direction change member. 14. The sheet processing apparatus according to claim 10, wherein the direction change member is able to shift to a first guide position for guiding the sheet along the guide face of the transport path, in receiving the sheet in the transport path, and to a second guide position for guiding the sheet, in transporting the sheet subjected to the folding processing by the rotating body pair to the one direction side, and the control section controls the shift section so as to rotate the direction change member from the second guide position to the first guide position at a velocity faster than in a shift from the first guide position to the second guide position. 15. An image forming system comprising: an image forming apparatus adapted to form an image on a sheet; and a sheet processing apparatus adapted to perform folding processing on the sheet fed from the image forming apparatus, wherein the sheet processing apparatus is the sheet processing apparatus according to claim 1. 
16. The sheet processing apparatus according to claim 1, wherein the control section controls the rotating body pair and the press member such that the plurality of portions of the sheet is folded with one end of the sheet existing inside the sheet folded. 16 TECHNICAL FIELD The present invention relates to a sheet processing apparatus to perform folding processing on a sheet fed from, for example, an image forming apparatus, and an image forming system provided with the sheet processing apparatus. BACKGROUND ART Conventionally, sheet processing apparatuses have been proposed for performing folding processing on a bunch of sheets in the shape of a booklet, as post-processing of sheets discharged from an image forming apparatus such as a copier, printer, facsimile, or a multifunction apparatus combining these. For example, there is a known sheet processing apparatus in which a push plate pushes a predetermined position of a sheet, carried out from an image forming apparatus to a sheet stacker, into a nip portion of a folding roller pair, and the sheet is folded in two while being transported by the folding roller pair. Among sheet processing apparatuses for performing folding processing on sheets, in addition to two-fold, there is a sheet processing apparatus for performing folding processing in two different portions of a sheet, executing inward three-fold processing that folds the sheet so that an end portion on one side exists inside the folded sheet. In such an apparatus, inward three-fold is performed by switchback-transporting a sheet subjected to first folding processing so that it once returns to a stacker, and executing second folding processing on the sheet in a position different from the first fold. In the folding processing, when curl or the like occurs in a sheet end portion, the end portion may turn up during switchback transport, and there are cases where the sheet is not returned to the stacker in a proper state. 
In order to prevent such turn-up from occurring, a configuration has been proposed in which a turn-up preventing member is provided swingably in a sheet path for switchback, and by swinging the turn-up preventing member, an end portion of a sheet undergoing switchback transport is guided to a stacker (Japanese Unexamined Patent Publication No. 2012-56674). DISCLOSURE OF INVENTION Problems to be Solved by the Invention However, in the configuration described in Japanese Unexamined Patent Publication No. 2012-56674, the turn-up preventing member is swung together with the backward-rotation drive of a folding roller for switchback-transporting the sheet, and there is a possibility that the end portion of the sheet returned to the stacker by backward rotation of the folding roller and the turn-up preventing member shift relative to each other and come into contact inside the stacker. At this point, when a face of the turn-up preventing member contacts the end portion of the sheet at an angle near a perpendicular, there is a risk of damaging the sheet end portion. Further, in the case where the sheet end portion is curled and deformed toward the folding roller side, the turn-up preventing member pushes the curled end portion of the sheet toward the folding roller side, and there is a risk that the sheet cannot be properly switchback-transported. The present invention was made in view of the above-mentioned problems, and it is an object of the invention to provide a sheet processing apparatus that enables a sheet undergoing folding processing to be properly switchback-transported, and an image forming system provided with the apparatus. 
Means for Solving the Problem A representative configuration according to the present invention to attain the above-mentioned object is provided with a transport path including a guide face to guide a sheet transported in a predetermined transport direction, a rotating body pair capable of transporting the sheet in a first direction for nipping the sheet transported to the transport path by a nip portion to rotate, and thereby drawing the sheet to perform folding processing, and in a second direction for performing switchback on the sheet subjected to the folding processing in a direction opposite to the direction for drawing, a folding blade that pushes the sheet transported to the transport path to the nip portion of the rotating body pair, a press member that presses the sheet, which is subjected to the folding processing by the rotating body pair and is transported in the second direction, to one direction side that is one of the transport direction and a direction opposite to the transport direction, a shift section that shifts the press member in a direction for pressing the sheet, and a control section that controls the rotating body pair and the shift section, in a sheet processing apparatus for performing folding processing in a plurality of portions of a sheet and performing the folding processing so that one end of the sheet exists inside the folded sheet, where in performing the switchback on the sheet subjected to the folding processing by the rotating body pair, the control section controls the rotating body pair and the shift section so that the one end of the sheet subjected to the folding processing by the rotating body pair is pressed by the press member, within a region between a shift locus of the press member and the guide face of the transport path. 
Advantageous Effect of the Invention In the present invention, in switchback-transporting the sheet, when the press member is shifted, one end of the sheet to be folded in is pressed in the direction for switchback by the press member. Therefore, switchback-transport of the sheet is properly performed. BRIEF DESCRIPTION OF DRAWINGS FIG. 1 is an explanatory view of the entire configuration of an image forming system of this Embodiment; FIG. 2 is an explanatory view of the entire configuration of a sheet processing apparatus in the image forming system; FIG. 3 is a cross-sectional view illustrating a folding processing apparatus of the sheet processing apparatus; FIG. 4 is a plan view illustrating a sheet folding processing apparatus; FIGS. 5A and 5B are cross-sectional explanatory views of inward three-fold operation on a sheet; FIGS. 6A and 6B are cross-sectional explanatory views of inward three-fold operation on the sheet; FIGS. 7A and 7B are cross-sectional explanatory views of inward three-fold operation on the sheet; FIGS. 8A and 8B are cross-sectional explanatory views of inward three-fold operation on the sheet; FIGS. 9A and 9B are cross-sectional explanatory views of inward three-fold operation on the sheet; FIGS. 10A and 10B are cross-sectional explanatory views of inward three-fold operation on the sheet; FIGS. 11A and 11B are cross-sectional explanatory views of inward three-fold operation on the sheet; FIG. 12 is a perspective view of a part of the sheet folding processing apparatus; FIG. 13 is an arrangement explanatory view of a folding roller pair, folding blade and press guide member; FIGS. 14A, 14B and 14C are operation explanatory views of the press guide member; FIGS. 15A and 15B are cross-sectional explanatory views of operation of the folding blade and blade guide member; FIGS. 16A and 16B are cross-sectional explanatory views of operation of the folding blade and blade guide member; FIGS. 
17A and 17B are cross-sectional explanatory views of operation of the folding blade and blade guide member; FIGS. 18A and 18B are cross-sectional explanatory views of operation of the folding blade and blade guide member; FIGS. 19A and 19B are cross-sectional explanatory views of operation of the folding blade and blade guide member; FIG. 20 is a control block diagram of folding operation in the sheet folding processing apparatus; FIG. 21 is a flowchart of folding operation in the sheet folding processing apparatus; and FIG. 22 is another flowchart of folding operation in the sheet folding processing apparatus. MODE FOR CARRYING OUT THE INVENTION A sheet processing apparatus according to a preferred Embodiment of the present invention and an image forming system provided with the apparatus will be described next with reference to the drawings. FIG. 1 schematically illustrates the entire configuration of the image forming system provided with the sheet processing apparatus according to the Embodiment of the invention. As shown in FIG. 1, the image forming system 100 is comprised of an image forming apparatus A and a sheet processing apparatus B provided together with the apparatus A. <Entire Configuration of the Image Forming Apparatus> The image forming apparatus A is comprised of an image forming unit A1, scanner unit A2 and feeder unit A3. The image forming unit A1 is provided with a paper feed section 2, image forming section 3, sheet discharge section 4 and data processing section 5 inside an apparatus housing 1. The paper feed section 2 is comprised of a plurality of cassette mechanisms 2a, 2b and 2c for storing image-forming sheets of respective different sizes, and feeds out sheets of the size designated by a main body control section (not shown) to a paper feed path 2f. 
Each of the cassette mechanisms 2a, 2b and 2c is installed to be detachable from the paper feed section 2, and includes an integral separation mechanism for separating the sheets stored inside on a sheet-by-sheet basis and an integral paper feed mechanism for feeding out the sheet. The paper feed path 2f is provided with a transport roller for feeding the sheet supplied from each of the cassette mechanisms 2a, 2b and 2c to the downstream side, and, in an end portion of the path, a registration roller pair for aligning the front end of each sheet. To the paper feed path 2f are connected a large-capacity cassette 2d and a manual feed tray 2e. The large-capacity cassette 2d is comprised of an option unit for storing sheets of a size consumed in large quantity. The manual feed tray 2e is configured to be able to supply particular sheets, such as thick-paper sheets, coated sheets and film sheets, which are difficult to separate and feed. The image forming section 3 is configured using an electrophotographic scheme in this Embodiment, and is provided with a photosensitive drum 3a that rotates, and a light emitting device 3b for emitting an optical beam, a developing device 3c and a cleaner (not shown) arranged around the drum. The section shown in the figure is a monochrome printing mechanism: the light emitting device 3b irradiates the photosensitive drum 3a, whose circumferential surface is charged uniformly, with light corresponding to an image signal to optically form a latent image, and the developing device 3c attaches toner to the latent image to form a toner image. In accordance with the timing at which the image is formed on the photosensitive drum 3a, a sheet is fed to the image forming section 3 from the paper feed path 2f, a transfer bias is applied from a transfer charging device 3d, and the toner image formed on the photosensitive drum 3a is thereby transferred onto the sheet. 
The sheet with the toner image transferred thereto is heated and pressurized when passing through a fuser device 6, which fuses the toner image; the sheet is then discharged from a sheet discharge opening 4b by a sheet discharge roller 4a, and is transported to the sheet processing apparatus B described later. The scanner unit A2 is provided with a platen 7a for placing an image original document, a carriage 7b that performs reciprocating motion along the platen 7a, a photoelectric conversion element 7c, and a reduction optical system 7d for guiding light reflected from the original document on the platen 7a by the carriage 7b to the photoelectric conversion element 7c. The photoelectric conversion element 7c converts the optical output from the reduction optical system 7d into image data and outputs it to the image forming section 3 as an electric signal. Further, the scanner unit A2 is provided with a travel platen 7e to read the sheet fed from the feeder unit A3. The feeder unit A3 is comprised of a paper feed tray 8a for stacking original document sheets, a paper feed path 8b for guiding the original document sheet fed out of the paper feed tray 8a to the travel platen 7e, and a sheet discharge tray 8c for storing the original document sheet passing through the travel platen 7e. The original document sheet from the paper feed tray 8a is read by the carriage 7b and reduction optical system 7d in passing through the travel platen 7e. <Entire Configuration of the Sheet Processing Apparatus> Next, a description will be given of the entire configuration of the sheet processing apparatus B for performing post-processing on the sheet fed from the image forming apparatus A. FIG. 2 is a configuration explanatory view of the sheet processing apparatus B according to this Embodiment. The sheet processing apparatus B is provided with an apparatus housing 11 provided with a carry-in opening 10 to introduce a sheet from the image forming apparatus A. 
The apparatus housing 11 is positioned and disposed relative to the housing 1 of the image forming apparatus A so that the carry-in opening 10 communicates with the sheet discharge opening 4b of the image forming apparatus A. The sheet processing apparatus B is provided with a sheet carry-in path 12 for transporting a sheet introduced from the carry-in opening 10, a first sheet discharge path 13a branched off from the sheet carry-in path 12, a second sheet discharge path 13b, a third sheet discharge path 13c, a first path switch portion 14a, and a second path switch portion 14b. Each of the first path switch portion 14a and the second path switch portion 14b is comprised of a flapper guide for changing the transport direction of a sheet transported in the sheet carry-in path 12. By a drive section not shown in the figure, the first path switch portion 14a switches between a mode for guiding a sheet from the carry-in opening 10 toward the first sheet discharge path 13a, which transports the sheet laterally without modification, or the second sheet discharge path 13b, which transports it downward, and another mode for guiding the sheet to the third sheet discharge path 13c, which transports it upward. The first sheet discharge path 13a and second sheet discharge path 13b communicate with each other so that the transport direction of a sheet once introduced into the first sheet discharge path 13a can be reversed to switchback-transport it into the second sheet discharge path 13b. The second path switch portion 14b is disposed on the downstream side of the first path switch portion 14a with respect to the transport direction of the sheet transported in the sheet carry-in path 12. 
By a drive section similarly not shown in the figure, the second path switch portion 14b switches between a mode for introducing the sheet passing through the first path switch portion 14a to the first sheet discharge path 13a, and another mode for switchback-transporting the sheet once introduced to the first sheet discharge path 13a into the second sheet discharge path 13b. The sheet processing apparatus B is provided with a first processing section B1, second processing section B2 and third processing section B3, which each perform different post-processing. Further, in the sheet carry-in path 12 is disposed a punch unit 15 for punching a hole in the carried-in sheet. The first processing section B1 is a binding processing section that collects and collates a plurality of sheets carried out of a sheet discharge opening 16a at the downstream end of the first sheet discharge path 13a, with respect to the transport direction of the sheet transported in the sheet carry-in path 12, performs binding processing on them, and discharges them to a stacking tray 16b provided outside the apparatus housing 11. Further, the first processing section B1 is provided with a sheet transport apparatus 16c for transporting the sheet or a bunch of sheets, and a binding processing unit 16d for performing the binding processing on the bunch of sheets. At the downstream end of the first sheet discharge path 13a is provided a discharge roller pair 16e to discharge the sheet from the sheet discharge opening 16a and to switchback-transport it from the first sheet discharge path 13a to the second sheet discharge path 13b. The second processing section B2 is a folding processing section that makes a bunch of sheets from a plurality of the sheets switchback-transported from the second sheet discharge path 13b, performs the binding processing on the bunch, and then performs folding processing. 
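The two flapper guides described above act, in effect, as a small routing table from the requested post-processing to a sequence of discharge paths. The following is a minimal illustrative sketch of that selection logic; the enum, function name, and path strings are hypothetical labels for this description, not taken from the patent:

```python
from enum import Enum, auto

class PostProcess(Enum):
    BIND = auto()       # first processing section B1
    FOLD = auto()       # second processing section B2 (via switchback)
    JOG_SORT = auto()   # third processing section B3

def route(post_process: PostProcess) -> list[str]:
    """Return the sequence of discharge paths a sheet traverses.

    Hypothetical model of the flapper logic: binding (B1) and folding
    (B2) both begin with the first path switch 14a guiding the sheet
    into the first sheet discharge path 13a; for folding, the second
    path switch 14b then switchback-transports it into path 13b.
    """
    if post_process is PostProcess.JOG_SORT:
        return ["13c"]        # first switch 14a guides the sheet upward
    if post_process is PostProcess.BIND:
        return ["13a"]        # lateral transport toward section B1
    return ["13a", "13b"]     # switchback via the second switch 14b

print(route(PostProcess.FOLD))  # ['13a', '13b']
```

The point of the sketch is only that the two switch portions compose: the second switch matters only for sheets the first switch has already sent laterally.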
As described later, the second processing section B2 is provided with a folding processing apparatus F for performing the folding processing on the carried-in sheet or bunch of sheets, and a binding processing unit 17a disposed immediately upstream of the folding processing apparatus F, along the transport direction of the sheet transported to the second sheet discharge path 13b, to perform the binding processing on the bunch of sheets. The bunch of sheets subjected to the folding processing is discharged by a discharge roller 17b to a stacking tray 17c provided outside the apparatus housing 11. The third processing section B3 performs jog sorting, in which sheets fed from the third sheet discharge path 13c are sorted into a group collected while being offset by a predetermined amount in the sheet width direction orthogonal to the transport direction, and another group collected without being offset. The jog-sorted sheets are discharged to a stacking tray 18 provided outside the apparatus housing 11, where bunches of offset sheets and bunches of non-offset sheets are stacked. FIG. 3 schematically illustrates the entire configuration of the second processing section B2. As described above, the second processing section B2 is provided with the folding processing apparatus F for folding in two a bunch of sheets which are carried in from the second sheet discharge path 13b, collected and collated, and the binding processing unit 17a for performing the binding processing on a bunch of sheets prior to the folding processing. The binding processing unit 17a shown in the figure is a stapler apparatus that drives a staple to bind the bunch of sheets. In order to carry the sheet into the folding processing apparatus F, a sheet transport path 20 is connected to the second sheet discharge path 13b. 
On the downstream side of the sheet transport path 20, with respect to the transport direction of the sheet transported from the second sheet discharge path 13b, a sheet stacking tray 21 constituting a part of the sheet transport path is provided to position and stack the sheet undergoing the folding processing. Immediately upstream of the sheet stacking tray 21, the binding processing unit 17a and its staple receiving portion 17d are provided in opposed positions with the sheet transport path 20 sandwiched therebetween. On one side of the sheet stacking tray 21, a folding roller pair 22 as a folding rotating body pair is arranged to be opposed to one surface of the sheet or bunch of sheets stacked in the sheet stacking tray. The folding roller pair 22 is comprised of a pair of folding rollers 22a, 22b with their roller surfaces mutually brought into press-contact, and a nip portion 22c that is the press-contact portion thereof is disposed toward the sheet stacking tray 21. The folding rollers 22a, 22b are disposed in parallel, one on the upstream side and one on the downstream side along the carry-in direction of the sheet carried into the sheet stacking tray 21, which runs from the upstream side above to the downstream side below, with their respective distances from the sheet stacking tray 21 being approximately equal. In addition, in the present invention, the rotating portion of the folding rotating body pair is not limited to the folding rollers 22a, 22b of this Embodiment, and may be comprised of a rotating belt or the like. Further, the folding roller pair 22 may be configured by arranging a plurality of folding rollers (rotating bodies) continuously in series along the shaft direction of each of the folding rollers 22a, 22b. In each of the folding rollers 22a, 22b of the folding roller pair 22 of this Embodiment, as shown in FIG. 
3, with the rotation shaft center of each of rotation shafts 22a1, 22b1 as the center, the roller circumferential surfaces have first roller surfaces 22a2, 22b2 with certain radii R1, and second roller surfaces 22a3, 22b3 with distances from the rotation shaft centers smaller than the radius R1 of the first roller surfaces, respectively. Like a normal roller surface, the first roller surfaces 22a2, 22b2 are formed of rubber materials and the like with a relatively high coefficient of friction. In contrast, the second roller surfaces 22a3, 22b3 are formed of plastic resin materials and the like with a coefficient of friction smaller than that of the first roller surfaces 22a2, 22b2. The rotation shafts 22a1, 22b1 of the folding rollers 22a, 22b are driven to rotate by a common drive section such as a drive motor. By this means, it is possible to always mutually synchronize the rotation positions of the first roller surfaces 22a2, 22b2 and the second roller surfaces 22a3, 22b3. On the opposite side to the folding roller pair 22 across the sheet stacking tray 21, a folding blade 23 is disposed. The folding blade 23 is supported by a blade carrier 24 with its front end directed toward the nip portion 22c of the folding roller pair 22. The blade carrier 24 is provided to be able to travel, by a shift section comprised of a cam member and the like, in a direction traversing the sheet stacking tray 21 at an approximately right angle, i.e., in a direction crossing the transport direction of the sheet transported to the sheet stacking tray 21 from the second sheet discharge path 13b. In the front-back direction, i.e., the shaft line direction of the folding roller in FIG. 3, on opposite sides with the blade carrier 24 therebetween, cam members 25 (only one is shown in the figure), comprised of a pair of mutually mirror-symmetrical eccentric cams, are provided in opposed positions. 
The cam member 25 is rotated by a drive section such as a drive motor around a rotation shaft 25a provided at an eccentric position as the center. In the cam member 25, a cam groove 25b is formed along its outer edge. The blade carrier 24 is provided with a cam pin 24c that fits slidably into the cam groove 25b as a cam follower. When the cam member 25 is rotated by the drive motor, the blade carrier 24 reciprocates, traveling in directions for approaching and separating from the sheet stacking tray 21. By this means, as shown in FIG. 3, it is possible to shift the folding blade 23 linearly, so that it can advance and retract along a push path between an initial position, in which the front end of the folding blade 23 does not enter the sheet transport path formed of the sheet stacking tray 21, and a maximum push position, in which the front end is nipped by the nip portion 22c of the folding roller pair 22. At the lower end of the sheet stacking tray 21 is disposed a regulation stopper 26 that contacts and regulates the front end, in the transport direction, of the carried-in sheet. The regulation stopper 26 is provided to be able to move up and down along the sheet stacking tray 21 by a sheet up-and-down mechanism 27. The sheet up-and-down mechanism 27 of this Embodiment is a conveyor belt mechanism which is disposed on the back side of the sheet stacking tray 21, below the blade carrier 24 when the carrier is in the initial position in which the front end of the folding blade 23 does not enter the sheet transport path formed of the sheet stacking tray 21, and which is comprised of a pair of pulleys 27a, 27b respectively disposed near the upper end and lower end of the sheet stacking tray 21 along the tray 21, and a conveyor belt 27c looped between both of the pulleys. The regulation stopper 26 is fixed onto the conveyor belt 27c. 
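The eccentric-cam drive just described converts continuous cam rotation into reciprocating linear travel of the blade carrier. As a rough illustrative sketch, the displacement of a follower riding an idealized circular eccentric cam can be modeled as follows; the function, the stroke parameter, and the circular-cam assumption are this description's own simplifications, not specifics from the patent:

```python
import math

def blade_position(theta: float, stroke: float) -> float:
    """Idealized follower displacement for a circular eccentric cam.

    theta  : cam rotation angle in radians (0 = blade at initial position)
    stroke : distance from the initial position to the maximum push position

    For a circular cam rotating about an off-center shaft with
    eccentricity e = stroke / 2, the follower (here, the cam pin 24c
    riding in the groove 25b) is displaced by e * (1 - cos(theta)):
    zero at theta = 0, maximum (= stroke) at theta = pi, and back to
    zero at 2 * pi, giving the blade carrier its reciprocating travel.
    """
    e = stroke / 2.0
    return e * (1.0 - math.cos(theta))

# One full cam revolution carries the blade out to the nip and back.
print(blade_position(0.0, 10.0))      # 0.0
print(blade_position(math.pi, 10.0))  # 10.0
```

Real cam-groove profiles are shaped to control dwell and velocity at the ends of the stroke; the cosine form above only captures the basic rotation-to-reciprocation conversion.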
By rotating the pulley 27a or 27b on the drive side with a drive section such as a drive motor, the regulation stopper 26 moves up and down between a lower end position and a desired height position shown in FIG. 3, and is thereby capable of shifting the sheet or bunch of sheets along the sheet stacking tray 21. Moreover, the folding processing apparatus F of this Embodiment is further provided with a sheet side-portion alignment mechanism to align the side edges of the sheet carried into the sheet stacking tray 21. As shown in FIG. 4, the sheet side-portion alignment mechanism includes a pair of sheet side-portion alignment members 28a, 28b disposed symmetrically on opposite sides of the sheet stacking tray 21 in the sheet width direction (the direction orthogonal to the sheet transport direction). In addition, FIG. 4 is a schematic plan view of the folding processing apparatus F viewed from above. The sheet side-portion alignment members 28a, 28b are held so as to be able to relatively approach and separate from each other in the sheet width direction. With respect to the sheet which is transported to the sheet stacking tray 21 and whose front end strikes the regulation stopper 26, the sheet side-portion alignment members 28a, 28b are shifted, and thereby align the positions of the sheet in the width direction. <Inward Three-Fold Processing> The sheet processing apparatus B of this Embodiment is capable of performing inward three-fold processing, by the folding processing apparatus F, on the sheet transported to the sheet stacking tray 21 that forms the sheet transport path. 
The inward three-fold processing is processing for folding a sheet in three so that an end portion on one side of the sheet folded by first folding processing is folded inside the sheet folded by second folding processing, where the sheet is folded in two by the first folding processing and the second folding processing is performed on the sheet in a portion different from a first fold position. Herein, schematic operation in performing the inward three-fold processing by the folding processing apparatus F of this Embodiment will be described with reference to FIGS. 5A to 11B. FIGS. 5A to 11B illustrate, in cross-sectional schematic views, motion of each section according to a flow of a sheet S when the inward three-fold processing is executed. The sheet stacking tray 21 of this Embodiment is formed so as to be inclined with respect to the vertical direction, and while the surface on one side of the sheet S is guided by a guide face 21a forming the sheet stacking tray 21, the sheet is transported so as to fall with a sheet front end S1 down and a sheet rear end S2 up, and is halted when the sheet front end is struck by the regulation stopper 26 (FIG. 5A). At this point, the regulation stopper 26 is disposed in a position such that the first fold position of the sheet S with the sheet front end S1 struck is a position opposed to the folding blade 23. The folding blade 23 is disposed in a position for pushing the sheet S toward the folding roller pair 22 from the side of the guide face 21a of the sheet stacking tray 21. In other words, the guide face 21a of the sheet stacking tray 21 and the folding roller pair 22 are disposed in positions that correspond to each other with the sheet S therebetween. 
After aligning the positions in the sheet width direction by the sheet side-portion alignment members 28a, 28b described previously in this state, the folding blade 23 is operated to fold the sheet S in two, and pushes the folded portion to the nip portion 22c of the folding roller pair 22 (FIG. 5B). In synchronization with push operation of the folding blade 23, the folding roller pair 22 and discharge roller 17b are driven to rotate forward, and draw the sheet S into the folding roller pair 22 and discharge roller 17b. By this means, the sheet S is pressed by the nip portion of the folding roller pair 22, and the first folding processing is performed (FIG. 6A). In order to perform the second folding processing next, sheet transport is halted at the time the sheet rear end S2 subjected to the first folding processing arrives at a predetermined position (FIG. 6B), and the folding roller pair 22 and discharge roller 17b are driven to rotate backward to execute switchback-transport processing. In performing the inward three-fold processing on the sheet, the sheet rear end S2 is an end portion (hereinafter, referred to as “fold-in end portion”) which is folded inside the sheet folded by the second folding processing. Then, in performing the switchback-transport processing, the fold-in end portion S2 is pressed downward (direction of the sheet stacking tray 21 where the sheet front end S1 exists) by an L-shaped press guide member 30 (FIG. 7A), and the press guide member 30 guides the sheet S which is again transported in the direction of the sheet stacking tray 21 where the regulation stopper 26 is disposed (FIG. 7B). In addition, the configuration and operation of the press guide member 30 will be described later in detail. When the front end of the sheet S arrives at the regulation stopper 26 that is shifted beforehand to a sheet receiving position, by switchback-transport (FIG. 
8A), the press guide member 30 is returned to a retract position, and then, is shifted to a backward transport guide position (FIG. 8B), and the regulation stopper 26 is shifted to a position such that a second fold position is opposed to the folding blade 23 (FIG. 9A). Then, after completing the shift, the press guide member 30 is shifted to a guide position parallel with the guide face 21a of the sheet stacking tray 21 (FIG. 9B). Next, the folding blade 23 is operated again to push the sheet S to the nip portion 22c of the folding roller pair 22 (FIG. 10A). At this point, a blade guide member 40 that is a push guide member disposed above the folding blade 23 protrudes, and the fold-in end portion S2 of the sheet is thereby guided to be pushed into the nip portion 22c (FIG. 10B). In addition, the configuration and operation of the blade guide member 40 will be described later also in detail. The sheet S fed to the folding roller pair 22 by push of the folding blade 23 passes through the nip portion 22c and is thereby subjected to the second folding processing (FIG. 11A), and the inward three-folded sheet S is discharged by the discharge roller 17b (FIG. 11B). <Press Guide Member> The press guide member 30 that is the press member described previously will be described next with reference to FIGS. 12 to 14C. In addition, FIG. 12 is a perspective view of the folding processing apparatus F in a state in which the press guide member 30 is exposed, and FIG. 13 is a view illustrating a relationship between a rotation locus of the press guide member 30 and another member. FIGS. 14A to 14C contain operation explanatory views of the press guide member 30. (Shape of the Press Guide Member) The press guide member 30 presses the fold-in end portion S2 of the sheet downward, and guides to transport to the sheet stacking tray 21, in switchback-transporting the sheet with the first folding processing executed. 
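The operation described across FIGS. 5A to 11B is a strict ordered sequence. As a minimal sketch, the ordering can be written out as data and checked; all step names below are illustrative labels invented for this summary (the patent describes hardware, not firmware):

```python
# Hypothetical ordering summary of the inward three-fold sequence
# (FIGS. 5A-11B). Step names are invented for illustration only.

def inward_three_fold_steps():
    """Return the steps of the inward three-fold sequence, in order."""
    return [
        "receive_sheet_until_front_end_hits_stopper_26",   # FIG. 5A
        "align_side_edges_with_members_28a_28b",
        "push_first_fold_with_blade_23",                   # FIG. 5B
        "rotate_rollers_forward_first_fold",               # FIG. 6A
        "halt_when_rear_end_S2_reaches_position",          # FIG. 6B
        "rotate_rollers_backward_switchback",
        "press_fold_in_end_S2_with_guide_30",              # FIG. 7A
        "guide_sheet_back_toward_stopper_26",              # FIGS. 7B, 8A
        "shift_guide_30_to_backward_transport_position",   # FIG. 8B
        "move_stopper_26_to_second_fold_position",         # FIG. 9A
        "shift_guide_30_to_guide_position",                # FIG. 9B
        "push_second_fold_with_blade_23",                  # FIG. 10A
        "guide_fold_in_end_with_blade_guide_40",           # FIG. 10B
        "press_second_fold_through_nip_22c",               # FIG. 11A
        "discharge_with_roller_17b",                       # FIG. 11B
    ]

steps = inward_three_fold_steps()
# The second fold must follow the switchback and stopper repositioning.
assert steps.index("push_second_fold_with_blade_23") > steps.index(
    "move_stopper_26_to_second_fold_position")
```

The assertion simply encodes the dependency stated in the text: the second fold position is brought opposite the folding blade 23 before the blade is operated again.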
In other words, the press guide member 30 is also a direction change member to change the direction of the fold-in end portion S2 of the sheet to the direction of the sheet stacking tray 21 where the sheet front end S1 exists, in switchback-transporting the sheet with the first folding processing executed. As shown in FIG. 12 (and see FIG. 4), the press guide member 30 is disposed on the side opposite to the side on which the folding roller pair 22 is disposed with the sheet S guided to the guide face 21a of the sheet stacking tray 21 therebetween. Then, in this Embodiment, three members are attached, at approximately regular intervals, to a rotation shaft 31 that is a support member disposed in the sheet width direction. Two members on opposite sides are disposed in positions for enabling the members to come into contact with opposite end portions of the sheet S transported in the sheet stacking tray 21, and one member in the center is disposed in a position for enabling the member to come into contact with substantially the center of the transported sheet in the width direction. The press guide member 30 is capable of shifting by a shift section. In this Embodiment, the rotation shaft 31 is coupled to a press guide motor 33 via a drive transfer member 32 such as a drive belt, and it is configured that the rotation shaft 31 is rotated by drive of the press guide motor 33, and that integrally therewith, three press guide members 30 are capable of rotating. As shown in FIG. 13, the press guide member 30 has a rotation portion 30a capable of rotating around the rotation shaft 31 as the center, and a guide portion 30b that is a first guide face for guiding the sheet S undergoing switchback-transport, and is comprised of a member of L-shaped cross section where the guide portion 30b is coupled at an approximately right angle, while being continued to the rotation portion 30a. Then, a portion between the rotation portion 30a and the guide portion 30b i.e. 
a corner portion of the shape of an L that is the front end of the rotation portion 30a is formed as a press portion 30c for pressing the sheet S. A notch is formed in the guide face 21a, and the press guide member 30 is provided to be exposed from the notch. Then, when the sheet S is carried in the sheet stacking tray 21, the member retracts to a retract position (see FIG. 5A). When the member is in the retract position, the rotation portion 30a is provided to be in substantially the same plane as the guide face 21a. Therefore, the rotation portion 30a functions as a part of the guide face 21a, and acts as a guide face (second guide face) for guiding the sheet carried in the sheet stacking tray 21. Then, it is essential only that the guide portion 30b does not protrude from the guide face 21a when the press guide member 30 is in the retract position, and it is thereby possible to reduce storage space of the press guide member 30 in the retract state. (Position of the Rotation Center) As shown in FIG. 13, the rotation shaft 31 that is the rotation center of the press guide member 30 of this Embodiment is disposed on the upstream side from a nip line L1 connecting the nip portion 22c of the folding roller pair 22 and the folding blade 23, in the transport direction in which the sheet S is carried in the sheet stacking tray 21, and is disposed on the side opposite to the side on which the folding roller pair 22 is disposed, farther than the guide face 21a of the sheet stacking tray 21. Further, the rotation shaft 31 of this Embodiment is disposed on the downstream side, in the transport direction, from a rotation shaft line L2 which passes through the rotation shaft 22a1 of the folding roller 22a existing on the side closer to the rotation shaft 31 in the folding rollers 22a, 22b, and which is parallel with the nip line L1. 
Then, the rotation portion 30a is configured to rotate in a direction in which the press portion 30c presses the sheet S to the side for switchback-transport. Accordingly, in switchback-transporting the sheet S with the first folding processing executed thereon, as shown in FIG. 14A, when the press guide member 30 in the retract position rotates, as shown in FIG. 14B, the press portion 30c presses the fold-in end portion S2 of the sheet down from above the fold-in end portion S2 to below. By this means, the fold-in end portion S2 is guided, while being switchback-transported, to the downstream side (downward) in the sheet stacking tray 21 in the sheet transport direction in which the sheet S is received in the sheet stacking tray 21 before the first folding processing is performed. In other words, the press portion 30c changes the direction of the fold-in end portion S2 of the sheet to the direction of the sheet stacking tray 21 where the sheet front end S1 exists. After changing the direction of the fold-in end portion S2, the press guide member 30 stays in the position without changing, and is thereby capable of guiding the fold-in end portion S2 to the downstream side in the sheet transport direction in which the sheet S is received in the sheet stacking tray 21 before the first folding processing is performed. Further, as shown in FIG. 14C, when the press portion 30c rotates to a guide position where the portion is rotated to a position of the guide face 21a, the press portion 30c comes into contact with the sheet, then presses the fold-in end portion S2 of the sheet down so as to draw it into the guide face 21a side from the nip portion 22c side, and guides the portion in a direction of the sheet stacking tray 21 where the regulation stopper 26 is disposed. Therefore, even when the fold-in end portion S2 of the sheet is curled upward, the sheet does not proceed upward in the sheet stacking tray 21, and is reliably transported downward. 
(Rotation Region of the Rotation Portion) A length of the rotation portion 30a of the press guide member 30 of this Embodiment i.e. a length from the rotation shaft 31 that is a rotation support to the press portion 30c is configured to be longer than the shortest distance to the first roller surface 22a2 in the folding roller 22a on the side closer to the rotation shaft 31, and be shorter than the shortest distance to the second roller surface 22a3, in two folding rollers 22a, 22b, as shown in FIG. 13. As described above, even when the length of the rotation portion 30a is set to be longer than the shortest distance to the first roller surface 22a2, by halting the folding roller pair 22 so that the second roller surfaces 22a3, 22b3 are opposed to the rotation portion 30a in switchback of the sheet, in rotating the rotation portion 30a, the portion does not interfere with the folding roller pair 22. Then, since it is possible to set the rotation portion 30a to be longer than the shortest distance to the first roller surface 22a2 that is the large-diameter portion of the folding roller 22a, with respect to the sheet undergoing switchback-transport, the press portion 30c presses in a position nearer the nip portion 22c, and guides to the sheet stacking tray 21 with more reliability. In addition, in the case of making the rotation portion 30a long, in order for the rotating press guide member 30 not to interfere with the folding blade 23, the rotation shaft 31 should be disposed in a position apart from the folding blade 23 in the sheet transport direction. In this case, as a result, the rotation shaft 31 should be disposed in a position also apart from the folding roller pair 22. 
In this respect, in this Embodiment, as described previously, since the rotation shaft 31 is configured to be disposed between the nip line L1 and the rotation shaft line L2 in the sheet transport direction, without increasing the length of the rotation portion 30a unnecessarily, it is possible to bring the position for the press portion 30c to press the sheet undergoing switchback-transport closer to the nip portion 22c. Herein, for the folding roller pair, as well as using the rollers with different diameters having the first roller surfaces 22a2, 22b2 and second roller surfaces 22a3, 22b3 with the diameters being different as in this Embodiment, it is also possible to use a roller pair with constant roller diameters, and in this case, it is necessary to make the length of the rotation portion 30a shorter than the shortest distance to the outer region of the folding roller on the side closer to the rotation shaft. Further, as shown in FIG. 13, the press guide member 30 of this Embodiment is in a shape such that the guide portion 30b is inside a rotation locus L3 of the rotation portion 30a, and does not protrude outside the region. By this means, as described previously, even when the rotation portion 30a configured to be long rotates, the guide portion 30b does not interfere with the folding roller pair 22. In switchback-transporting the sheet subjected to the first folding processing as described above, the sheet is returned to the sheet stacking tray 21, while being guided by the press guide member 30. After the sheet comes into contact with the regulation stopper 26 and switchback-transport is completed, the press guide member 30 is returned to the retract position. 
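The length constraint on the rotation portion 30a stated above (longer than the shortest distance to the large-diameter first roller surface 22a2, shorter than the shortest distance to the small-diameter second roller surface 22a3) can be checked numerically. The following sketch uses invented millimetre values, since the patent gives no dimensions for these parts:

```python
# Illustrative check of the rotation-portion length constraint.
# All dimension values are hypothetical; only the inequality pattern
# comes from the text.

def arm_length_is_valid(arm_len, shaft_to_roller_axis, r_large, r_small):
    """The arm from rotation shaft 31 to press portion 30c must reach past
    the large-diameter (first) roller surface 22a2, yet still clear the
    small-diameter (second) roller surface 22a3 opposed to it during
    switchback, so the rotating arm never touches the roller."""
    dist_to_first_surface = shaft_to_roller_axis - r_large    # to 22a2
    dist_to_second_surface = shaft_to_roller_axis - r_small   # to 22a3
    return dist_to_first_surface < arm_len < dist_to_second_surface

# Hypothetical numbers: shaft 31 is 60 mm from the axis of roller 22a,
# whose large and small roller surfaces have radii 25 mm and 15 mm.
assert arm_length_is_valid(40.0, 60.0, 25.0, 15.0)      # 35 < 40 < 45: OK
assert not arm_length_is_valid(50.0, 60.0, 25.0, 15.0)  # would hit 22a3
```

The design point is that halting the roller pair with its small-diameter surfaces opposed to the arm widens the allowed arm length, letting the press portion 30c act closer to the nip portion 22c.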
At this point, the member is shifted to the backward transport guide position protruding to the sheet transport path side slightly more than the guide face 21a, so that the rotation portion 30a that is the second guide face of the press guide member 30 is a guide of the sheet S transported in the reverse direction in the sheet stacking tray 21 (see FIG. 8B). After the press guide member 30 shifts to the above-mentioned backward transport guide position, the regulation stopper 26 is moved up, and the sheet is transported backward so that the second fold position is in the position opposed to the folding blade 23. At this point, the sheet S is guided by the rotation portion 30a of the press guide member 30, and therefore, is transported, without being caught in the notch for attachment of the press guide member formed in the guide face 21a, and the like (see FIG. 9A). <Blade Guide Member> As described above, after the second fold position of the sheet subjected to the switchback-transport shifts to the position opposed to the folding blade 23, the press guide member 30 is shifted to the retract position, and the folding blade 23 is operated to execute second folding operation. At this point, it is configured that the blade guide member 40 provided above the folding blade 23 guides the fold-in end portion S2 of the sheet (see FIG. 10B). The configuration and operation of the blade guide member 40 will specifically be described next with reference to FIGS. 15A to 19B. In addition, FIGS. 15A and 15B contain rotation explanatory views of the blade guide member 40, and FIGS. 16A to 19B contain views illustrating operation of the folding blade 23 and blade guide member 40 in executing the second folding processing on the sheet. 
(Configuration of the Blade Guide Member) In executing the second folding processing on the sheet S, the blade guide member 40 shifts in a push direction of the folding blade 23, and, together with the folding blade 23, guides, in the push direction, the sheet end portion on the fold side formed by the first folding processing i.e. the sheet fold-in end portion S2, so as to guide it to the nip portion 22c of the folding roller pair 22. Therefore, as shown in FIGS. 15A and 15B, the blade guide member 40 has a contact portion 40a for coming into contact with the sheet rear end, and a fit hole portion 40b having a partial notch is formed in an end portion on one side of the contact portion 40a, and is fitted rotatably into a shaft portion 40f formed in a base portion 40e. Further, in an end portion on the other side of the contact portion 40a, an arm portion 40c is formed integrally, and an engagement protruding portion 40d is formed in an end portion of the arm portion 40c. Then, the engagement protruding portion 40d is engaged slidably in a long hole 50 formed in a frame of the sheet processing apparatus B. The long hole 50 is formed substantially parallel with the guide face 21a of the sheet stacking tray 21 in the upper vicinity of the blade carrier 24. The base portion 40e is attached to the blade carrier 24 slidably in a direction parallel to a shift direction of the blade carrier 24. Then, a tensile spring 51 is attached between a locking portion 40e1 formed in the base portion 40e and a locking portion 24a formed in the blade carrier 24. The blade carrier 24 is provided with a press protruding portion 24b capable of coming into contact with the base portion 40e to press it. The press protruding portion 24b is provided in the blade carrier 24 rotatably, and is biased in a counterclockwise direction in FIGS. 15A and 15B by a coil spring 52 attached to its rotation shaft. 
By this means, when the blade carrier 24 shifts in the blade push direction, the press protruding portion 24b comes into contact with the base portion 40e to press the base portion 40e, and the blade guide member 40 shifts integrally with the blade carrier 24. In addition, the coil spring 52 provided in the press protruding portion 24b acts as a so-called torque limiter, and rotates clockwise when a predetermined force or more in the clockwise direction is applied to the press protruding portion 24b. (Change in Angle of the Contact Portion with Respect to the Shift Direction of the Folding Blade) In the above-mentioned configuration, as shown in FIG. 15A, when the blade carrier 24 is in a home position, the blade guide member 40 is pulled by the tensile spring 51, and is in a position such that the contact portion 40a is brought into contact with the rotation shaft 31 that is the rotation support of the press guide member 30. This state is the home position of the blade guide member 40. At this point, the contact portion 40a stands so as to be in substantially the same plane as the guide face 21a. Then, when the blade carrier 24 shifts in the blade push direction, the blade guide member 40 is pressed by the press protruding portion 24b to shift together with the blade carrier 24 from the home position, and as shown in FIG. 15B, shifts until a butt portion 40e2 formed to stand in the rear end of the base portion 40e comes into contact with the rotation shaft 31. As described above, when the blade guide member 40 shifts in the blade push direction, the engagement protruding portion 40d is guided by the long hole 50 to slide downward, and the contact portion 40a rotates around the shaft portion 40f as the center. Accordingly, in the state of FIG. 15A in which the blade guide member 40 is in the home position, the angle with respect to the shift direction of the blade carrier 24 i.e. 
the shift direction of the folding blade 23 is an approximately right angle, and the contact portion 40a is in the standing state. As the blade carrier 24 shifts in a direction in which the folding blade 23 is pushed, as shown in FIG. 15B, the member rotates so as to fall to the upstream side in the push direction of the folding blade 23, and it is configured that the angle of the contact portion 40a with respect to the shift direction changes to an acute angle as the blade carrier 24 shifts. Further, as shown in FIG. 15A, a protruding portion 40f1 is formed in the shaft portion 40f that is a rotation axis of the contact portion 40a. On the other hand, the notch formed in the fit hole portion 40b fitted into the shaft portion 40f is formed to be wider than a width of the protruding portion 40f1, and the blade guide member 40 is capable of rotating in a range of the notch. In the above-mentioned configuration, when the blade carrier 24 shifts to the home position, the base portion 40e is pulled by the tensile spring 51. At this point, the notch face of the fit hole portion 40b comes into contact with the protruding portion 40f1, and further rotation of the contact portion 40a is regulated. Therefore, in a state in which the contact portion 40a is brought into contact with the rotation shaft 31, further shifts are regulated in the blade guide member 40, and the contact portion 40a maintains the standing state in the home position. Further, in the blade guide member 40 of this Embodiment, the contact portion 40a and arm portion 40c are comprised of linear members in cross section, and the arm portion 40c is formed at a predetermined angle with respect to the contact portion 40a. 
By this means, also in the case of configuring that the contact portion 40a is substantially the same plane as the guide face 21a when the blade guide member 40 is in the home position, the end portion on the side provided with the engagement protruding portion 40d of the arm portion 40c is in a position apart from the guide face 21a on the side opposite to the side on which the folding roller pair 22 exists. Therefore, it is possible to arrange the long hole 50 in which the engagement protruding portion 40d engages apart from the guide face 21a on the side opposite to the side on which the folding roller pair 22 exists, and to arrange it in a position of not interfering with the guide face 21a. Accordingly, in the state in which the blade guide member 40 is in the home position, it is possible to configure so that the contact portion 40a functions as a guide portion of a sheet transported in the sheet stacking tray 21. (Operation of the Folding Blade and Blade Guide Member) Described next is operation of the blade guide member 40 when the folding blade 23 is operated so as to execute the second folding operation on the sheet, with reference to FIGS. 16A to 19B. FIG. 16A illustrates a state in which the blade carrier 24 is in the home position, and at this point, the blade guide member 40 is also in the state of the home position. In addition, in the following description, the “push direction” refers to a direction in which the blade carrier 24 pushes the folding blade 23 to the nip portion 22c of the folding roller pair 22 from the home position, and the “return direction” refers to a direction in which the blade is returned to the home position from the nip portion 22c side. 
In the case of being in the above-mentioned home position, the front end of the folding blade 23 is substantially in the same plane as the guide face 21a, or on the return-direction side of the guide face 21a (first position), and is separated from the sheet S in the sheet stacking tray 21. Therefore, the sheet, which is guided by the guide face 21a and is transported in the sheet stacking tray 21, is not caught on the blade front end. In addition, also in a state in which the front end of the folding blade 23 protrudes beyond the guide face 21a toward the folding roller pair 22, as long as the sheet transported to the sheet stacking tray 21 by another guide member is not caught on the blade front end, the blade front end can be regarded as retracted from the sheet transport path, and therefore, this state may also serve as the first position. Further, when the blade guide member 40 is in the home position, the contact portion 40a of the blade guide member 40 is in a position in contact with the rotation shaft 31. At this point, the press protruding portion 24b is separated from the base portion 40e. Next, in order to push the folding blade 23, when the cam drive motor is driven, the cam member 25 is rotated to shift the blade carrier 24 in the push direction. Then, the press protruding portion 24b comes into contact with the base portion 40e, and the blade guide member 40 shifts in the push direction integrally with the blade carrier 24 and folding blade 23 (FIG. 16B). At this point, it is configured that the front end portion of the folding blade 23 protrudes in the push direction more than the front end portion of the blade guide member 40. When the blade carrier 24 shifts further in the push direction, as shown in FIG. 17A, the first folding processing having been performed, the second fold position is opposed to the folding blade 23, and the front end of the folding blade 23 comes into contact with the sheet S halted in the sheet stacking tray 21 (second position). 
At this point, since the front end of the folding blade 23 protrudes in the push direction more than the blade guide member 40 as described previously, the folding blade 23 first comes into contact with the fold position of the sheet S. If the blade guide member 40 were to come into contact with the fold position of the sheet before the folding blade 23, displacement would tend to occur in the position in which the front end of the folding blade 23 comes into contact with the sheet, and there would be a possibility that the sheet is folded with the second fold position displaced. However, in this Embodiment, since the front end of the folding blade 23 first comes into contact with the sheet S, it is possible to suppress the occurrence of displacement of the fold position as described above. When the blade carrier 24 shifts in the push direction in the above-mentioned state, the second fold position of the sheet S is pushed toward the nip portion 22c of the folding roller pair 22 by the folding blade 23. Concurrently therewith, the contact portion 40a of the blade guide member 40 comes into contact with the fold-in end portion S2 of the sheet subjected to the first folding, and guides the end portion so as to push it to the nip portion 22c (FIG. 17B). As described above, since the blade guide member 40 guides the fold-in end portion S2 of the sheet to the nip portion 22c, the fold-in end portion S2 of the sheet travels to the nip portion 22c, without being turned up. Further, in approaching the nip portion 22c, there is a risk that the pushed blade guide member 40 interferes with outer regions of the folding rollers 22a, 22b. At this point, in the blade guide member 40 of this Embodiment, as described previously, as the member shifts in the push direction, the angle of the contact portion 40a with respect to the push direction changes to an acute angle (changes from the state of FIG. 17A to the state of FIG. 17B). 
Therefore, the contact portion 40a is capable of further entering the vicinity of the nip portion 22c, and it is possible to reliably guide the fold-in end portion S2 of the sheet to the nip portion. When the blade carrier 24 further shifts in the push direction, and as shown in FIG. 17B, the butt portion 40e2 comes into contact with the rotation shaft 31, the blade guide member 40 is regulated not to further shift in the push direction. In addition, in a state in which the blade guide member 40 shifts in the push direction most, the front end (end portion on the folding roller pair 22 side with respect to the push direction) of the blade guide member 40 protrudes to the nip portion 22c side more than the tangent line (of two folding rollers 22a, 22b) for connecting between outer regions of the folding roller 22a and folding roller 22b on the sheet stacking tray 21 side. On the other hand, when the blade carrier 24 is pushed in the push direction by rotation of the cam member 25, as shown in FIG. 18A, since a certain force or more is applied to the coil spring 52, the press protruding portion 24b rotates clockwise against the biasing force of the coil spring 52, and moves into a lower portion of the base portion 40e. By this means, the press protruding portion 24b does not press the blade guide member 40, while the blade guide member 40 is halted, only the folding blade 23 shifts in the push direction, and the blade front end shifts to a position (third position) for pushing the sheet S to the nip portion 22c. The front end of the folding blade 23 at this point protrudes more significantly than the front end of the contact portion 40a of the blade guide member 40. In other words, a distance from the blade front end to the contact portion front end in the third position is longer than the distance from the blade front end to the contact portion front end in the second position. 
By this means, the sheet is reliably drawn into the nip portion 22c of the rotating folding roller pair 22 in a state of being folded in the second fold position, and the sheet front end S1 is also drawn into the nip portion 22c, bringing the sheet into a three-fold state. When the cam member 25 further rotates, the blade carrier 24 shifts in the return direction together with the folding blade 23 (FIG. 18B). At this point, since the press protruding portion 24b is brought into press-contact with the base portion 40e of the blade guide member 40 by the biasing force of the coil spring 52, the blade guide member 40 also shifts in the return direction integrally with the blade carrier 24 i.e. concurrently with the folding blade 23, by the friction force between the press protruding portion 24b and the bottom of the base portion 40e. When the cam member 25 further rotates and the blade carrier 24 shifts in the return direction, the contact portion 40a of the blade guide member 40 comes into contact with the rotation shaft 31, and the blade guide member 40 returns to the home position. Then, the blade guide member 40 is regulated not to further shift in the return direction (FIG. 19A). When the cam member 25 further rotates, in a state in which the blade guide member 40 does not shift, only the folding blade 23 shifts in the return direction, and returns to the home position (FIG. 19B). As described above, when the blade carrier 24 shifts in the return direction, the folding blade 23 and blade guide member 40 shift in the return direction at the same time, and before the blade carrier 24 and folding blade 23 return to their home positions, the blade guide member 40 returns to its home position. In other words, the blade guide member 40 retracts from the sheet drawn by the folding roller pair 22 and discharge roller 17b earlier than the folding blade 23. Therefore, a transport load exerted by the blade guide member 40 on the sheet S drawn by the discharge roller 17b and the like is reduced. 
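The three blade positions described above can be summarized by how far the blade front end leads the front end of the contact portion 40a: the blade must lead at the second position (so it touches the fold position first), and the lead at the third position must exceed that at the second. A sketch with invented travel values (the patent states the relationships, not the dimensions):

```python
# Hypothetical travel values (mm) for the folding blade 23 and the blade
# guide member 40 in the first/second/third positions. Only the ordering
# relationships come from the text; the numbers are illustrative.
positions = {
    # position: (blade front-end travel, guide front-end travel)
    "first":  (0.0, 0.0),    # home: blade retracted from transport path
    "second": (20.0, 15.0),  # blade contacts sheet, leading the guide
    "third":  (45.0, 30.0),  # guide halted; only the blade advances
}

def blade_lead(pos):
    """Protrusion of the blade front end beyond the guide front end."""
    blade, guide = positions[pos]
    return blade - guide

# Blade leads the guide at contact, and leads even more at full push.
assert blade_lead("second") > 0
assert blade_lead("third") > blade_lead("second")
```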
(Arrangement Relationship Between the Blade Guide Member and the Press Guide Member) In this Embodiment, as shown in FIG. 4 that is a plan schematic view of the folding processing apparatus F, the blade guide member 40 is disposed in two predetermined positions in the sheet width direction. In the folding blade 23 of this Embodiment, four push front end portions 23a are formed to protrude substantially at regular intervals in the sheet width direction on the push side. When the push front end portions 23a push the sheet, the sheet is thereby pushed to the nip portion 22c of the folding roller pair 22, and the folding processing is executed. Then, the blade guide members 40 are disposed above the push front end portions 23a on the opposite sides among the four push front end portions 23a. Accordingly, in the sheet S pushed by the folding blade 23, the fold-in end portion S2 is guided by the blade guide members 40 on the opposite sides in the width direction. In order to guide the fold-in end portion S2 of the sheet to the nip portion 22c, it is desirable that a blade guide member 40 is disposed above all the push front end portions 23a formed in four portions, but when a member is disposed above all the portions, the number of parts increases. In contrast thereto, in this Embodiment, as described previously, since the blade guide members 40 are disposed in positions of the two push front end portions 23a formed on the opposite end portion sides in the sheet width direction, it is possible to decrease the number of parts. Then, in the fold-in end portion S2 of the sheet pushed by the folding blade 23 in the second folding processing, since the vicinity of the end portion is easier to turn up than the center portion in the sheet width direction, by guiding this portion by the blade guide members 40 toward the nip portion, it is possible to effectively prevent the turn-up from occurring. 
In addition, the two blade guide members 40 are not disposed at the opposite end portions in the sheet width direction, but are disposed above the push front end portions 23a formed slightly closer to the center than the opposite end portions. This is because, in pushing the sheet with the push front end portions 23a, it is effective to push portions slightly closer to the center than the end portions in the width direction of the sheet, and the blade guide member 40 is disposed corresponding to the position of the push front end portion 23a. With respect to the position of the blade guide member 40, the press guide members 30 of this Embodiment are disposed on the outer sides of the two blade guide members 40 in the sheet width direction. Specifically, two press guide members 30 are disposed at a distance substantially equal to the width of the minimum-size sheet capable of being processed in the folding processing apparatus F, and in performing the folding processing on the minimum-size sheet, are disposed in positions enabling the opposite ends of the sheet in the width direction to be pressed and guided. In addition, in this Embodiment, as well as the two press guide members 30 capable of pressing and guiding the opposite ends of the sheet, a press guide member 30 capable of pressing and guiding the center in the sheet width direction is provided, so that three press guide members 30 in total are provided. More specifically, the minimum-size sheet capable of being processed in the folding processing apparatus F in this Embodiment is A4, and the width in the short direction of a general A4-size sheet is 210 mm. 
In the two press guide members 30 capable of pressing and guiding the opposite ends of the sheet in the width direction, the length in the sheet width direction is 18 mm, the straight-line distance between the respective outer end portions of the two press guide members 30 is 226 mm, which is longer than the 210 mm sheet width of the A4-size sheet, and the end portion of the A4-size sheet in the width direction overlaps a part of the face of the press guide member 30 closer to the center in the width direction by 10 mm on each side. The maximum-size sheet capable of being processed in the folding processing apparatus F is A3, and the width in the short direction of a general A3-size sheet is 297 mm. By setting the straight-line distance between the respective outer end portions of the two press guide members 30 capable of pressing and guiding the opposite ends of the sheet in the width direction to be longer than the sheet width of the minimum-size sheet, it is possible to also provide the end portions of the minimum-size sheet with the effect of the guide. When the sheet on which the first folding processing has been executed is switchback-transported and, as described previously, the press guide member 30 presses and guides the fold-in end portion S2 of the sheet so as to return it to the sheet stacking tray 21, it is effective at preventing turn-up to press and guide the opposite end portions in the sheet width direction. Therefore, the two press guide members 30 are disposed on the outer sides of the blade guide members 40 in the sheet width direction. In this Embodiment, the press guide members 30 disposed on the opposite sides in the sheet width direction are disposed at a distance substantially equal to the width of the minimum-size sheet, and the blade guide members 40 are disposed at a distance shorter than the width of the minimum-size sheet, on the inner sides of the press guide members 30. 
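The spacing arithmetic above can be checked with a short sketch. The dimensions come from the text; the function and variable names are illustrative, and the sheet is assumed centered between the two guides:

```python
# Check of the press guide member geometry described above.
# All dimensions in mm, taken from the text; names are illustrative.

GUIDE_WIDTH = 18   # width of each outer press guide member in the sheet width direction
OUTER_SPAN = 226   # outer-edge-to-outer-edge distance of the two guides
A4_WIDTH = 210     # short-side width of an A4 sheet (minimum size)
A3_WIDTH = 297     # short-side width of an A3 sheet (maximum size)

def edge_overlap(sheet_width):
    """Overlap (per side, in mm) between the sheet edge region and a guide
    face, assuming the sheet is centered between the two guides."""
    guide_outer = OUTER_SPAN / 2            # outer guide edge, from sheet center
    guide_inner = guide_outer - GUIDE_WIDTH  # inner guide edge, from sheet center
    sheet_edge = sheet_width / 2
    # Overlap of the guide face [guide_inner, guide_outer] with the sheet [0, sheet_edge]
    return max(0.0, min(sheet_edge, guide_outer) - guide_inner)

print(edge_overlap(A4_WIDTH))  # 10.0 -> matches the 10 mm per-side overlap in the text
print(edge_overlap(A3_WIDTH))  # 18.0 -> A3 covers the full guide face; its edges extend past the guides
```

This confirms that with an 18 mm guide width and a 226 mm outer span, the A4 edge overlaps each guide face by exactly 10 mm per side, as stated.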
<Drive Control> Described next is the control configuration of the drive system in performing the folding processing on the sheet. As shown in the block diagram of FIG. 20, in order to follow the procedure of the flowcharts shown in FIGS. 21 and 22, a control section 60 controls drive of a folding roller motor 61 for driving and rotating the folding roller pair 22, a discharge roller motor 62 for driving and rotating the discharge roller 17b, and a regulation stopper motor 63 for operating the sheet up-and-down mechanism 27 to move the regulation stopper 26 up and down. Further, similarly, the control section 60 controls drive of a cam motor 64 for driving the cam member 25 to operate the blade carrier 24, and a press guide motor 33 for rotating the press guide member 30. FIGS. 21 and 22 are flowcharts showing the drive control procedure when the sheet S is transported to the sheet stacking tray 21, the sheet front end strikes the regulation stopper halted at a predetermined position, and the folding processing is executed from the state in which the first fold position is in the position opposed to the folding blade 23. When the folding processing is executed, the cam motor 64 is driven to shift the blade carrier 24 in the push direction, and the folding blade 23 comes into contact with the first fold position of the sheet S to push it to the nip portion 22c (S1). Concurrently therewith, the folding roller motor 61 and discharge roller motor 62 are driven to rotate the folding roller pair 22 and discharge roller 17b forward (S2). Each of the motors is a pulse motor, and when the motor is driven, the number of drive pulses thereof is counted. By rotation of the cam member 25, when the folding blade 23 protrudes by a predetermined amount for pushing the first folding portion of the sheet S up to the nip portion 22c of the folding roller pair 22, the travel direction is reversed, and the blade 23 shifts in the return direction and returns to the home position (S3). 
The folding processing is performed on the sheet S, pushed to the nip portion 22c of the folding roller pair 22 by the push of the folding blade 23, for a period during which the sheet S is nipped and transported by the folding roller pair 22, and the sheet is transported without any modification by the discharge roller 17b, which constitutes the sheet transport section together with the folding roller pair 22. When the sheet is nipped and transported by the discharge roller 17b (S4), the folding roller motor 61 is halted when the second roller surfaces 22a3, 22b3 of the folding rollers 22a, 22b are opposed to each other (S5, S6). By this means, the folding roller pair 22 does not nip the sheet, and the sheet is transported by the discharge roller 17b. At this point, the sheet is transported by the discharge roller 17b while being guided by the second roller surfaces 22a3, 22b3, which have a small coefficient of friction. In addition, in this Embodiment, whether the sheet is transported to the discharge roller 17b, or whether the second roller surfaces 22a3, 22b3 of the folding roller pair 22 are opposed to each other, is determined by a pulse count of the motor; however, another configuration may be adopted, for example, where the sheet S is detected by a sensor and, corresponding to the detection result, drive of the motor is controlled. Then, when the position of the fold-in end portion S2 of the transported sheet S arrives within a predetermined region (S7), the drive of the discharge roller motor 62 is halted to halt sheet transport (S8). The predetermined region is the region between the rotation locus L3 of the press guide member 30 for the fold-in end portion S2 of the sheet S and the guide face 21a of the sheet stacking tray 21 (see FIG. 14A). 
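The pulse-count based position tracking described above can be sketched as follows. The pulses-per-millimeter value, the class, and the region bounds are illustrative assumptions, not values from the patent; the idea is simply that transported distance is inferred by counting drive pulses of the pulse motor:

```python
# Minimal sketch of pulse-count based sheet position tracking: each motor is a
# pulse motor, and the sheet's transported distance is inferred from the number
# of drive pulses counted. PULSES_PER_MM and the region bounds are hypothetical.

PULSES_PER_MM = 20  # illustrative assumption: drive pulses per mm of transport

class PulseCounter:
    def __init__(self):
        self.pulses = 0

    def step(self, n):
        """Accumulate n drive pulses issued to the motor."""
        self.pulses += n

    def travel_mm(self):
        """Transported distance inferred from the pulse count."""
        return self.pulses / PULSES_PER_MM

def in_predetermined_region(travel_mm, region_start_mm, region_end_mm):
    """True when the fold-in end portion S2 has reached the region between the
    rotation locus L3 of the press guide member and the guide face 21a
    (the bounds here are hypothetical numbers)."""
    return region_start_mm <= travel_mm <= region_end_mm

counter = PulseCounter()
counter.step(900)  # 900 pulses ~ 45 mm of transport at the assumed resolution
print(in_predetermined_region(counter.travel_mm(), 40, 55))  # True
```

The text notes that a sheet-detecting sensor could replace this pulse count; in that case `travel_mm` would be supplanted by a sensor edge event.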
By halting the sheet S so that the fold-in end portion S2 is within the region, when the press guide member 30 is rotated, it is possible to reliably press the sheet S in the direction for switchback-transport by the press portion 30c (see FIG. 14B), and further, it is possible to guide the fold-in end portion S2 undergoing the switchback-transport by the guide portion 30b (see FIG. 14C). After halting the fold-in end portion S2 of the sheet S within the region, the press guide motor 33 is driven to rotate the press guide member 30 so as to arrive at a position (the position shown in FIG. 14C) where the guide portion 30b of the press guide member 30 is capable of guiding the switchback-transported sheet S (S9). Further, together with rotation of the press guide member 30, the regulation stopper motor 63 is driven to shift the regulation stopper 26 to a position enabling the switchback-transported sheet S to be received. After the press guide member 30 rotates as described above, the discharge roller motor 62 and folding roller motor 61 are driven to rotate backward (S10). By this means, the discharge roller 17b and folding roller pair 22 rotate backward, and the sheet S is switchback-transported. At this point, as described previously, since the sheet is guided by the press guide member 30, no transport failure occurs, and the sheet is switchback-transported in the direction of the sheet stacking tray 21 where the regulation stopper 26 is disposed. 
Further, with respect to the fold-in end portion S2 of the sheet S, it is described in the above-mentioned Embodiment that the press guide motor 33 is driven to rotate the press guide member 30 so that the guide portion 30b of the press guide member 30 arrives at the position enabling the sheet S undergoing switchback-transport to be guided, after halting the fold-in end portion S2 of the sheet S within the region between the rotation locus L3 of the press guide member 30 and the guide face 21a of the sheet stacking tray 21; however, the press guide member 30 may instead be rotated without halting the sheet S when the sheet is switchback-transported. In this case, transport of the sheet S is halted when the fold-in end portion S2 of the sheet S is in a position closer to the nip portion 22c of the folding roller pair 22 than the region between the rotation locus L3 of the press guide member 30 and the guide face 21a of the sheet stacking tray 21. Subsequently, the discharge roller motor 62 and folding roller motor 61 are driven to rotate backward, it is determined by a pulse count of the motor that the fold-in end portion S2 of the sheet S has reached the region between the rotation locus L3 of the press guide member 30 and the guide face 21a of the sheet stacking tray 21, and the press guide member 30 is rotated. When the discharge roller motor 62 and folding roller motor 61 are driven to switchback-transport the sheet S, the sheet S passing through the nip portion 22c of the folding roller pair 22 falls until the sheet comes into contact with the regulation stopper 26; when the switchback-transport is thus completed (S11), drive of the discharge roller motor 62 and folding roller motor 61 is halted (S12). Herein, completion of the switchback-transport of the sheet S may be determined by counting the numbers of drive pulses of the discharge roller motor 62 and folding roller motor 61 to recognize that the sheet S has been transported by a predetermined amount. 
Next, the press guide motor 33 is driven to return the press guide member 30 to the retract position. At this point, the velocity at which the press guide member 30 is returned to the retract position (see FIG. 14A) from the guide position (see FIG. 14C) is set to be faster than the velocity at which the press guide member 30 is shifted to the guide position from the retract position. In shifting the press guide member 30 to the guide position from the retract position, the velocity is decreased so that the member rotates to press the sheet S halted for switchback-transport and change its direction. In contrast, in shifting from the guide position to the retract position, by returning faster, it is possible to hasten the timing of executing the next operation. Then, after the press guide member 30 shifts to the backward transport guide position (see FIG. 9A) (S13), the regulation stopper motor 63 is driven to shift the regulation stopper so that the second fold position of the sheet S is in the position opposed to the folding blade 23 (S14). In this state, the cam motor 64, folding roller motor 61, and discharge roller motor 62 are driven to execute the second folding operation (S15 to S17). In addition, in this Embodiment, a motor to drive each member is provided individually, but it is also possible to drive each member by using a common motor and switching drive with a clutch and the like. <Another Embodiment> The Embodiment described previously illustrates the example of forming the press guide member 30 in the shape of an L, rotating the member around the rotation shaft 31 as the center, and pressing the sheet S undergoing switchback-transport or changing the direction of the end portion of the sheet S to guide it; alternatively, as a press member (direction change member) for pressing the sheet S undergoing switchback-transport or changing the direction of the end portion, a rod-shaped member may be formed and configured to shift linearly. 
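The drive-control procedure of steps S1 to S17 walked through above can be condensed into a sequential sketch. The motor interface is reduced to logged stage descriptions; this is a summary of the flowcharts of FIGS. 21 and 22 as described in the text, not an implementation of the controller:

```python
# Condensed sketch of the drive-control procedure (S1-S17) described above.
# Each entry stands in for driving the folding roller motor 61, discharge
# roller motor 62, regulation stopper motor 63, cam motor 64, or press guide
# motor 33; the log format itself is a hypothetical placeholder.

def folding_procedure():
    seq = []
    seq.append(("S1", "cam motor: blade carrier pushes first fold position to nip 22c"))
    seq.append(("S2", "folding roller and discharge roller motors rotate forward"))
    seq.append(("S3", "folding blade reverses after predetermined push, returns home"))
    seq.append(("S4", "sheet nipped and transported by discharge roller 17b"))
    seq.append(("S5-S6", "halt folding roller motor with second roller surfaces opposed"))
    seq.append(("S7", "fold-in end portion S2 reaches predetermined region (pulse count)"))
    seq.append(("S8", "halt discharge roller motor to stop sheet transport"))
    seq.append(("S9", "rotate press guide member to guide position; shift regulation stopper"))
    seq.append(("S10", "reverse both roller motors: switchback-transport the sheet"))
    seq.append(("S11-S12", "switchback complete (pulse count); halt both motors"))
    seq.append(("S13", "press guide member returns (at higher velocity) toward retract position"))
    seq.append(("S14", "shift regulation stopper so second fold position faces the blade"))
    seq.append(("S15-S17", "drive cam and roller motors to execute the second fold"))
    return seq

steps = folding_procedure()
print(len(steps))  # 13 logged stages covering S1-S17
```

A real controller would gate each stage on the pulse counts or sensor signals the text describes, rather than running the list unconditionally.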
Further, as a substitute for the press guide member 30, a fan and a duct for locally concentrating the air blown from the fan may be disposed in the position in which the press guide member 30 is disposed. By rotating the fan at the timing at which the position of the fold-in end portion S2 of the transported sheet S arrives at a predetermined region after executing the first folding processing and switchback-transporting, it is also possible to change the direction of the fold-in end portion S2. Further, the Embodiment described previously illustrates the example of configuring the folding rollers 22a, 22b using rollers having the first roller surfaces 22a2, 22b2, which are circular outer surfaces with certain outside diameters, and the second roller surfaces 22a3 and 22b3, with outside diameters smaller than those of the first roller surfaces. However, the folding rollers 22a, 22b may be configured using rollers with constant outside diameters, for example, circular rubber rollers and the like. In this case, when the sheet S passes through the folding roller pair, since the sheet S is always nipped by the nip portion of the folding roller pair, it is possible to manage the transport amount of the sheet S by rotation of the folding roller pair. Accordingly, in the case of halting the fold-in end portion S2 of the sheet S in a predetermined position (see FIG. 7A), it is possible to control the halt by the drive amount of the folding roller. Furthermore, the Embodiment described previously illustrates the example of controlling the transport amount of the sheet S and the rotation amount of the press guide member 30 by counting the number of pulses of the motor. 
Instead of the motor pulse count, for example, a photosensor for detecting the sheet or a photosensor for detecting the press guide member 30 may be provided, and by detecting with the sensor that the sheet S has been transported to a predetermined position or that the press guide member 30 has been rotated to a predetermined angle, the transport amount of the sheet S or the rotation of the press guide member may be controlled. Still furthermore, the Embodiment described previously illustrates the example where the regulation stopper 26, with which the front end of the carried-in sheet in the transport direction is brought into contact for regulation, is disposed in the lower end of the sheet stacking tray 21, and is provided to be able to move up and down along the sheet stacking tray 21 by the sheet up-and-down mechanism 27. In another Embodiment, roller pairs may be disposed which transport the sheet to the upstream side and downstream side of the sheet stacking tray 21 in the sheet transport direction, with the folding blade 23 and folding roller pair 22 therebetween. In this case, in switchback-transporting the sheet S subjected to the first folding processing, it is possible to return the sheet to both the upstream side and the downstream side of the sheet stacking tray 21 in the sheet transport direction, with the folding blade 23 and folding roller pair 22 therebetween. In addition, this application claims priority from Japanese Patent Application No. 2019-236597 and Japanese Patent Application No. 2020-198388, which are incorporated herein by reference. 17133871 canon finetech nisca inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Apr 5th, 2022 05:11PM Apr 5th, 2022 05:11PM Technology Technology Hardware & Equipment
nyse:caj Canon Aug 8th, 2017 12:00AM Oct 26th, 2015 12:00AM https://www.uspto.gov?id=US09725807-20170808 Nano imprinting with reusable polymer template with metallic or oxide coating Methods and systems are provided for fabricating polymer-based imprint lithography templates having thin metallic or oxide coated patterning surfaces. Such templates show enhanced fluid spreading and filling (even in absence of purging gases), good release properties, and longevity of use. Methods and systems for fabricating oxide coated versions, in particular, can be performed under atmospheric pressure conditions, allowing for lower cost processing and enhanced throughput. 9725807 1. A system for forming an imprint lithography template comprising: (a) a substrate support system configured to retain a flexible film substrate in a flat configuration and subsequently translate such retained flexible film substrate from a first position to a second position and from a third position to a fourth position; (b) a fluid dispense system positioned proximate to the first position, the fluid dispense system configured to dispense polymerizable material onto the retained flexible substrate; (c) a motion stage having a template chuck, the template chuck configured to retain a master template, the motion stage moveable between the first and second positions and further configured to translate a master template retained by the template chuck into superimposition with the retained flexible film substrate as the retained flexible film is translated from the first position to the second position; (d) an imprint head configured to vary a distance between the master template and the retained flexible film substrate to define a volume therebetween that is filled by the polymerizable material; (e) an energy source configured to provide curing energy to solidify the polymerizable material when the polymerizable material fills the volume between the master template and the retained flexible film substrate 
to define a patterned layer; and (f) an atmospheric pressure plasma chemical vapor deposition (AP-CVD) system located between the third and fourth positions, the AP-CVD system including a power control unit, a plasma generation unit including opposing electrodes connected to the power control unit, the electrodes positioned in parallel with the substrate along a first direction, the AP-CVD configured to generate and deposit an oxide layer having a particular thickness onto the patterned layer as the retained flexible film substrate is translated from the third position to the fourth position, the retained flexible film substrate positioned between the opposing electrodes, the particular thickness of the oxide layer based on i) a type of material of the oxide layer and ii) a pattern of the patterned layer. 2. The system of claim 1 wherein the AP-CVD system further comprises an atmospheric pressure plasma dielectric barrier discharge (APDBD) system. 2 CROSS-REFERENCE TO RELATED APPLICATIONS This application is a divisional of U.S. patent application Ser. No. 14/216,017 filed Mar. 17, 2014, which claims the benefit under 35 U.S.C. §119(e)(1) of U.S. Provisional No. 61/792,280 filed on Mar. 15, 2013; both of which are incorporated by reference herein. BACKGROUND INFORMATION Nano-fabrication includes the fabrication of very small structures that have features on the order of 100 nanometers or smaller. One application in which nano-fabrication has had a sizeable impact is in the processing of integrated circuits. The semiconductor processing industry continues to strive for larger production yields while increasing the circuits per unit area formed on a substrate; therefore nano-fabrication becomes increasingly important. Nano-fabrication provides greater process control while allowing continued reduction of the minimum feature dimensions of the structures formed. 
Other areas of development in which nano-fabrication has been employed include biotechnology, optical technology, mechanical systems, and the like. An exemplary nano-fabrication technique in use today is commonly referred to as imprint lithography. Exemplary nanoimprint lithography processes are described in detail in numerous publications, such as U.S. Pat. No. 8,349,241, U.S. Pat. No. 8,066,930, and U.S. Pat. No. 6,936,194, all of which are hereby incorporated by reference herein. A nanoimprint lithography technique disclosed in each of the aforementioned U.S. patents includes formation of a relief pattern in a formable (polymerizable) layer and transferring a pattern corresponding to the relief pattern into an underlying substrate. The substrate may be coupled to a motion stage to obtain a desired positioning to facilitate the patterning process. The patterning process uses a template spaced apart from the substrate and a formable liquid applied between the template and the substrate. The formable liquid is solidified to form a rigid layer that has a pattern conforming to a shape of the surface of the template that contacts the formable liquid. After solidification, the template is separated from the rigid layer such that the template and the substrate are spaced apart. The substrate and the solidified layer are then subjected to additional processes to transfer a relief image into the substrate that corresponds to the pattern in the solidified layer. BRIEF DESCRIPTION OF DRAWINGS So that features and advantages of the present invention can be understood in detail, a more particular description of embodiments of the invention may be had by reference to the embodiments illustrated in the appended drawings. It is to be noted, however, that the appended drawings only illustrate typical embodiments of the invention, and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments. FIG. 
1 illustrates a simplified side view of a nanoimprint lithography system having a template and a mold spaced apart from a substrate. FIG. 2 illustrates a simplified view of the substrate illustrated in FIG. 1, having a patterned layer thereon. FIGS. 3A and 3B illustrate an exemplary method of forming a template according to the invention. FIGS. 4A and 4B illustrate an exemplary method of imprinting a patterned layer onto a substrate using the template of FIGS. 3A and 3B. FIGS. 5A and 5B illustrate another exemplary method of forming a template according to the invention. FIGS. 6A and 6B illustrate yet another exemplary method of forming a template according to the invention. FIG. 7 illustrates a further exemplary method of forming a template according to the invention. FIG. 8 depicts shear force experimental results of templates according to the invention. FIG. 9 depicts separation force experimental results of a template according to the invention. FIG. 10 depicts fluid filling experimental results of a template according to the invention. DETAILED DESCRIPTION Referring to the figures, and particularly to FIG. 1, illustrated therein is a nanoimprint lithography system 10 used to form a relief pattern on substrate 12. Substrate 12 may be coupled to substrate chuck 14. As illustrated, substrate chuck 14 is a vacuum chuck. Substrate chuck 14, however, may be any chuck including, but not limited to, vacuum, pin-type, groove-type, electrostatic, electromagnetic, and/or the like. Exemplary chucks are described in U.S. Pat. No. 6,873,087, which is hereby incorporated by reference herein. Substrate 12 and substrate chuck 14 may be further supported by stage 16. Stage 16 may provide translational and/or rotational motion along the x, y, and z-axes. Stage 16, substrate 12, and substrate chuck 14 may also be positioned on a base (not shown). Spaced-apart from substrate 12 is template 18. 
Template 18 may include a body having a first side and a second side, with one side having a mesa 20 extending therefrom towards substrate 12. Mesa 20 has a patterning surface 22 thereon. Further, mesa 20 may be referred to as mold 20. Alternatively, template 18 may be formed without mesa 20. Template 18 and/or mold 20 may be formed from such materials including, but not limited to, fused-silica, quartz, silicon, organic polymers, siloxane polymers, borosilicate glass, fluorocarbon polymers, metal, hardened sapphire, and/or the like. As illustrated, patterning surface 22 comprises features defined by a plurality of spaced-apart recesses 24 and/or protrusions 26, though embodiments of the present invention are not limited to such configurations (e.g., planar surface). Patterning surface 22 may define any original pattern that forms the basis of a pattern to be formed on substrate 12. Template 18 may be coupled to chuck 28. Chuck 28 may be configured as, but not limited to, vacuum, pin-type, groove-type, electrostatic, electromagnetic, and/or other similar chuck types. Exemplary chucks are further described in U.S. Pat. No. 6,873,087. Further, chuck 28 may be coupled to imprint head 30 such that chuck 28 and/or imprint head 30 may be configured to facilitate movement of template 18. Nanoimprint lithography system 10 may further comprise a fluid dispense system 32. Fluid dispense system 32 may be used to deposit formable material 34 (e.g., polymerizable material) on substrate 12. Formable material 34 may be positioned upon substrate 12 using techniques such as drop dispense, spin-coating, dip coating, chemical vapor deposition (CVD), physical vapor deposition (PVD), thin film deposition, thick film deposition, and/or the like. Formable material 34 may be disposed upon substrate 12 before and/or after a desired volume is defined between mold 20 and substrate 12, depending on design considerations. 
Formable material 34 may be functional nano-particles having use within the bio-domain, solar cell industry, battery industry, and/or other industries requiring a functional nano-particle. For example, formable material 34 may comprise a monomer mixture as described in U.S. Pat. No. 7,157,036 and U.S. Pat. No. 8,076,386, both of which are herein incorporated by reference. Alternatively, formable material 34 may include, but is not limited to, biomaterials (e.g., PEG), solar cell materials (e.g., N-type, P-type materials), and/or the like. Referring to FIGS. 1 and 2, nanoimprint lithography system 10 may further comprise energy source 38 coupled to direct energy 40 along path 42. Imprint head 30 and stage 16 may be configured to position template 18 and substrate 12 in superimposition with path 42. System 10 may be regulated by processor 54 in communication with stage 16, imprint head 30, fluid dispense system 32, and/or source 38, and may operate on a computer readable program stored in memory 56. Either imprint head 30, stage 16, or both vary a distance between mold 20 and substrate 12 to define a desired volume therebetween that is filled by formable material 34. For example, imprint head 30 may apply a force to template 18 such that mold 20 contacts formable material 34. After the desired volume is filled with formable material 34, source 38 produces energy 40, e.g., ultraviolet radiation, causing formable material 34 to solidify and/or cross-link conforming to a shape of surface 44 of substrate 12 and patterning surface 22, defining patterned layer 46 on substrate 12. Patterned layer 46 may comprise a residual layer 48 and a plurality of features shown as protrusions 50 and recessions 52, with protrusions 50 having a thickness t1 and residual layer having a thickness t2. The above-mentioned system and process may be further employed in nano imprint lithography processes and systems referred to in U.S. Pat. No. 6,932,934, U.S. Pat. No. 7,077,992, U.S. Pat. No. 
7,179,396, and U.S. Pat. No. 7,396,475, all of which are hereby incorporated by reference in their entirety. Conventional glass, quartz or fused silica templates used in nanoimprint lithography are typically fabricated by e-beam processes, followed by multiple vacuum processes such as reactive ion etching (RIE). However, such processes are both expensive and time-consuming. Template replication processes are known that use lithography (e.g., nanoimprint lithography) to create replica templates in glass (or a similar substrate) using an e-beam fabricated master template. While less costly than direct e-beam fabrication, such glass template replication still requires RIE etching to transfer pattern features into the glass, followed by SEM inspection in order to finalize and confirm feature geometry. These processes are still time-intensive, and can cause a bottleneck in high-throughput nanoimprint lithography manufacturing processes. Polymer templates, i.e., templates where the patterning surface of the template is itself formed of a polymeric material (e.g. via lithography processes), can be fabricated more quickly and less expensively than glass templates, but they likewise have disadvantages. For example, such polymer templates generally do not have a patterning surface with high enough surface hardness and strength to achieve durability comparable to glass templates. The polymer template pattern features thus are prone to damage through successive imprinting cycles. When in use, polymer templates also typically require continued surface treatment for clean pattern separation from the cured, patterned polymeric material, as such polymer templates generally have high surface free energy and there is a tendency for polymer-to-polymer adherence which degrades template performance over successive imprint cycles. 
Provided herein are polymer templates with a thin metallic or oxide layer (or layers) on the patterned surface that provides multiple benefits over current glass or fused silica templates or other polymer based templates. Also provided are methods of fabricating such templates, and template fabrication systems that incorporate such methods. Referring to FIGS. 3A-3B, template 188 is formed of three layers: base template substrate or layer 12, patterned polymer layer 146, and thin metal or oxide layer 160 covering patterned layer 146. Base substrate 12 can be a Si or glass wafer, glass plate, or a flexible film, such as a plastic film. Patterned layer 146 can be formed by UV or thermal imprint, or any other lithography process. FIGS. 4A-4B depict, in turn, use of template 188 to imprint polymeric material 34 deposited on substrate 162 to yield patterned layer 196. The thickness of the metal or oxide layer can be in the range of 2-50 nm. In certain variations, the range can be 2-25 nm, or 2-20 nm, or 2-15 nm, or 2-10 nm. For metal layers, the types of metal deposited on patterned layer 146 to form layer 160 can be gold-palladium (AuPd), silver-palladium (AgPd), gold (Au), silver (Ag), platinum (Pt), an alloy of any of these metals, or multiple layers of any of them. For AuPd and AgPd alloys in particular, the ratio of Au or Ag to Pd can range from 20:80 to 80:20. Suitable metal deposition methods include, e.g., sputtering, evaporation, or atomic layer deposition (ALD). For oxide layers, the types of oxides deposited on patterned layer 146 can include, e.g., silicon dioxide (SiO2) or SiO2-like silicon oxide layers (SiOx). Suitable oxide deposition methods include, e.g., sputtering, chemical vapor deposition (CVD), or ALD. 
As used herein, chemical vapor deposition (CVD) includes plasma enhanced chemical vapor deposition (PECVD) and atmospheric pressure plasma CVD, including atmospheric pressure plasma jet (APP-Jet) and atmospheric pressure dielectric barrier discharge (AP-DBD) processes, such as those described in "Open Air Deposition of SiO2 Films by an Atmospheric Pressure Line-Shaped Plasma," Plasma Process. Polym. 2005, 2, 407-413, and "Plasma-Enhanced Chemical Vapor Deposition of SiO2 Thin Films at Atmospheric Pressure by Using HMDS/Ar/O2," J. Korean Physical Society, Vol. 53, No. 2, 2008, pp. 892-896, incorporated herein by reference. The advantages of the templates provided herein, i.e., templates having a thin layer of metal(s) or oxide(s) applied on top of a patterned polymeric layer, are manifold. First, such templates show much better fluid spreading as compared to glass or polymer templates. Without being bound by theory, it is believed that liquid imprint resist spreading and template pattern feature filling are enhanced by the hydrophilic properties of the thin metallic or oxide layer. Such enhanced resist spreading and filling properties are important in enabling high-speed imprinting processes. In particular, in UV imprinting processes using templates of the present invention, observed resist spreading was much faster than that seen using a fused silica template. Further still, many UV imprint processes at the nanoscale use a helium atmosphere when imprinting in order to achieve acceptably high throughput speeds. A helium atmosphere minimizes gas trapping that would otherwise occur in ambient air, allowing for both faster (and more faithful) feature filling than the same process performed in ambient air. That is, helium purging may be necessary for fast and faithful template pattern filling at the nanoscale imprinting level. However, helium is comparatively expensive and not always readily available in general cleanroom facilities. 
However, given the enhanced resist spreading driven by the hydrophilic surface properties of the present inventive templates, helium-free UV imprinting is possible even at fine-feature (i.e., sub-100 nm) patterning levels. Second, during template separation the metallic or oxide layer contributes to good release performance by, among other things, blocking the liquid imprint resist from otherwise adhering to or bonding with the precured polymer pattern underneath prior to or during curing. Such polymer-polymer interactions are a disadvantage of polymer templates for the reasons previously identified. Also, the thin metal or oxide layer can prolong the template's useful lifetime by protecting the polymer features underneath. This has been demonstrated by pattern longevity testing using a sub-100 nm linewidth grating pattern. There is an optimal metal or oxide coating thickness that yields the lowest separation force while still blocking the liquid resist from penetrating through. When the coating is too thin, separation force will be high, since the imprinting layer can intermix with the underlying patterned polymer layer of the template. As the thickness increases toward the optimal condition, separation force drops. As the thickness exceeds the optimal condition, separation force increases again, since the template features become stiffened and/or distorted; that is, the separation force increases due to surface stiffening, roughening, and a pattern interlocking effect arising from a mushroom-like deposition profile, which likewise causes pattern profile distortion. Therefore, for a given material and pattern, a separation force test can be used to determine the optimal coating thickness for the resultant template, as further described herein. Third, in addition to the enhanced spreading and separation performance and increased template longevity, the conductive thin metal coating is helpful in reducing or removing static charges on the template. 
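The coating-thickness optimization described above (separation force high when the coating is too thin, minimal at an optimum, rising again when too thick) amounts to finding the minimum of a measured force-versus-thickness curve. A minimal sketch; the thickness/force values below are purely illustrative placeholders, not data from this specification:

```python
# Hypothetical separation-force measurements: coating thickness (nm) -> force (lbf).
# These values are illustrative only, chosen to mimic the too-thin / optimal /
# too-thick behavior described in the text.
forces = {0: 5.2, 2: 4.6, 5: 4.1, 9: 3.4, 12: 3.0, 18: 3.8, 26: 4.9}

def optimal_coating_thickness(force_by_thickness):
    """Pick the coating thickness whose measured separation force is lowest."""
    return min(force_by_thickness, key=force_by_thickness.get)

print(optimal_coating_thickness(forces))  # 12
```

In practice the sweep would be repeated per material and per pattern, as the text notes, since the optimum depends on both.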
Reduction or removal of static charges, in turn, reduces the chances of charged airborne contaminant particles being attracted to and collecting on the template surface. The presence of such particles on the template surface can otherwise cause patterning defects and/or template damage. Fourth, the fabrication of the polymer template with a metallic or oxide coating provides self-linewidth-modulation and/or linewidth-reduction properties. For example, if the original polymeric patterns on the template consist of 50/50 nm line/space, a ˜7.5 nm uniform coating of the metallic or oxide layer can change the line/space into 65/35 nm. Imprinted features using this type of template will have a reversed fill factor, i.e., 35/65 nm for line/space. Fifth, fabrication methods provided herein can reduce template replication cost and processing time. For example, the fabrication of a polymer template with a metallic or oxide coating according to the invention can be performed in two simple steps: (1) imprinting (to form pattern features) and (2) deposition of the metallic or oxide layer coating material. In particular, for a polymer template with, e.g., a SiO2-like (SiOx) coating, an inline atmospheric pressure plasma CVD system or other inline atmospheric pressure deposition processes can be utilized. Such processes, which are performed under atmospheric pressure conditions, can significantly lower processing cost and greatly enhance throughput, as compared to other deposition processes which require vacuum conditions and higher temperatures. Examples of such processes are depicted in FIGS. 5A-5B. FIG. 5A shows an atmospheric pressure plasma jet CVD approach for depositing oxide layer 226 onto previously formed polymer pattern 224 on base layer 222. Base layer 222 is secured to motion stage 220 and translates relative to atmospheric pressure plasma jet (APPJ) system 200, which itself is oriented perpendicular to motion stage 220. 
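The linewidth-modulation arithmetic above follows from a conformal coating adding one coating thickness to each sidewall of a line, so the line grows (and the space shrinks) by twice the coating thickness. A minimal sketch; the function name is ours, not the patent's:

```python
def modulated_line_space(line_nm, space_nm, coating_nm):
    """A conformal coating adds one coating thickness to each sidewall of a
    line, widening the line and narrowing the space by 2 * coating_nm."""
    delta = 2 * coating_nm
    return line_nm + delta, space_nm - delta

# The 50/50 nm line/space example with a ~7.5 nm uniform coating:
print(modulated_line_space(50, 50, 7.5))  # (65.0, 35.0)
```

The imprinted replica then carries the reversed fill factor (35/65 nm line/space) noted in the text.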
APPJ system 200 consists of first and second plates or bodies 204 and 206 having first and second outer electrodes 212 and 210 disposed thereon, respectively. Inner electrode 208 is disposed between first and second electrodes 212 and 210. The electrodes are connected to voltage supply 203. Plasma gas (typically an O2/Ar or He mixture) is provided through the top of the system at input 240. Precursor and carrier gas is provided near the bottom of inner electrode 208 via supply 202. In operation, plasma/precursor mixture 230 is generated and directed downward toward patterned layer 224, where it forms oxide layer 226 on patterned layer 224 as the layer is translated relative to system 200. FIG. 5B shows a similar approach using atmospheric pressure plasma dielectric barrier discharge (DBD) system 300. Here, first and second electrodes 310 and 312 are connected to voltage supply 304 and otherwise disposed in parallel with base layer 322 and patterned polymer layer 324, with first electrode 310 positioned above patterned polymer layer 324 and second electrode 312 positioned between motion stage 320 and base layer 322. Plasma gas (O2/Ar or He mixture) and precursor and carrier gases are provided through input 340 and supply 302, respectively. In this approach, the template may remain static during formation of oxide layer 326 from generated plasma/precursor mixture 330. In addition, continuous roll-to-roll methods for translating a flexible plastic substrate having prepatterned features can also be used for forming templates according to the invention, leading to additional cost savings. Examples of such processes are depicted in FIGS. 6A and 6B, which illustrate the atmospheric pressure plasma jet CVD system and atmospheric pressure plasma dielectric barrier discharge (DBD) system of FIGS. 5A-5B, respectively, adapted for oxide deposition onto a polymer template formed on a flexible film substrate, such as a polycarbonate (PC) film. 
The flexible substrates can be retained and translated relative to the APPJ and APP-DBD systems using, e.g., roll-to-roll systems such as described in U.S. Patent Publication No. 2013-0214452, incorporated herein by reference in its entirety. Turning to FIG. 6A, flexible film substrate (or base layer) 422 is supported by rollers 450 and 452, which operate under tension to retain the substrate in a flat configuration and which, when rotated, can translate substrate 422 relative to atmospheric pressure plasma jet (APPJ) system 400. APPJ system 400 is configured as otherwise described above with respect to FIG. 5A (i.e., with voltage source 403 connected to outer electrodes 410 and 412 positioned on opposing plates 406 and 404 and with inner electrode 408 disposed in between, together with plasma gas input 440 and precursor and carrier gas supply 402, and operating such that generated plasma/precursor mixture 430 is directed downward and deposited onto patterned polymer layer 424, thereby forming oxide layer 426). With reference to FIG. 6B, rollers 550 and 552 likewise support and retain flexible film substrate (or base layer) 522 and operate as described above with respect to the roller system of FIG. 6A such that substrate 522 is translated relative to APP-DBD system 500. APP-DBD system 500 is configured as otherwise described above with respect to FIG. 5B (i.e., with voltage source 504 connected to opposing parallel electrodes 510 and 512, with electrode 510 positioned above patterned polymer layer 524 and electrode 512 positioned below substrate 522, together with plasma gas input 540 and precursor and carrier gas supply 502, and operating such that generated plasma/precursor mixture 530 deposits onto patterned polymer layer 524, thereby forming oxide layer 526). Further still, such atmospheric pressure processes as described with respect to FIGS. 
6A and 6B can be combined with imprint lithography patterning techniques (also done at atmospheric pressure) such that patterning of the precursor substrate (e.g., glass or plastic film) can be directly followed by oxide coating of the patterned layer, thereby providing for continuous, in-line processes for fabricating such templates. An example of such a process is depicted in FIG. 7, which depicts imprinting of a flexible substrate to form a polymeric template immediately followed by metal or oxide layer deposition via an atmospheric pressure plasma jet (APPJ) system similar to that of FIG. 6A. More particularly, system 600 includes rollers 650 and 652 under tension, with additional support rollers 654, 656, 658, and 660, which collectively operate to support and retain flexible film substrate 622 in a flat configuration and translate the substrate across a series of positions. In the imprinting step, substrate 622 is translated from a first position to a second position, during which fluid dispense system 32 deposits droplets of polymerizable material 34 onto substrate 622 while master template 612 (connected to a template chuck on a motion stage, not shown) is moved into superimposition with and co-translates with substrate 622 such that the polymerizable material fills the relief pattern of master template 612. Energy source 606 provides actinic energy to cure polymerizable material 34 and form patterned polymer layer 624 during such co-translation. Master template 612 is then separated from formed layer 624 and returned to its initial position. Substrate 622 is then fed from the second position to a third position. Roller belt system 640, comprising rollers 642 and 644, is provided to maintain tension on substrate 622 during such movement, with belt system 640 further having protective film 646 that protects features of patterned polymer layer 624 from damage as it translates around roller 642. 
Substrate 622 containing patterned polymer layer 624 is then translated from the third to the fourth position and in the process passes beneath APPJ system 670, which essentially operates as system 400 described above with respect to FIG. 6A to deposit oxide layer 626 onto patterned polymer layer 624 (i.e., voltage source 603 is connected to outer electrodes 610 and 611 positioned on opposing plates 605 and 604 and with inner electrode 608 disposed in between, together with plasma gas input 642 and precursor and carrier gas supply 602, and operating such that generated plasma/precursor mixture 630 is directed downward and deposited onto patterned polymer layer 624, thereby forming oxide layer 626 on patterned polymer layer 624 as substrate 622 is translated past APPJ system 670). EXAMPLES Metal-Coated Polymer Templates Example 1: Metal Layer Thickness Determination Shear force testing was used to determine the optimal coating thickness from among various AuPd coating thicknesses. Silicon wafer substrates were coated with an adhesion layer, with a UV curable imprint resist fluid (MonoMat™, Molecular Imprints, Austin, Tex.) in turn deposited as small droplets onto the adhesion layer and then imprinted with a blank imprint template and cured to form a flat polymeric layer. AuPd (60%/40%) was sputtered onto the polymeric layer using an Edwards S150B Sputter Coater (Edwards Ltd., West Sussex, UK) at various sputtering times ranging from 0-180 seconds, resulting in corresponding AuPd layer thicknesses as follows: 0 sec (0 nm); 10 sec (2 nm); 30 sec (5 nm); 60 sec (9 nm); 90 sec (12 nm); 120 sec (18 nm); 180 sec (26 nm). Shear force tests were performed with each AuPd coated sample using an Instron Model 5524 force tester (Instron, Norwood, Mass.). 
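The sputter-time-to-thickness calibration listed in Example 1 can be captured with simple linear interpolation between the listed points. A sketch under the assumption that thickness grows linearly between calibration points (the function name is ours):

```python
# Calibration points from Example 1: (sputter time in seconds, AuPd thickness in nm).
CALIBRATION = [(0, 0), (10, 2), (30, 5), (60, 9), (90, 12), (120, 18), (180, 26)]

def thickness_for_time(seconds):
    """Linearly interpolate AuPd layer thickness for a sputter time
    within the calibrated 0-180 s range."""
    for (t0, d0), (t1, d1) in zip(CALIBRATION, CALIBRATION[1:]):
        if t0 <= seconds <= t1:
            return d0 + (d1 - d0) * (seconds - t0) / (t1 - t0)
    raise ValueError("sputter time outside calibrated range")

print(thickness_for_time(90))  # 12.0
```

Intermediate times (e.g., 75 s) interpolate between the neighboring calibration points; times outside 0-180 s are rejected rather than extrapolated.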
The same UV curable imprint resist was deposited onto each AuPd coated sample, which was then placed in contact with the test specimen of the force tester (which itself was treated with the same adhesion layer as above), followed by curing of the imprint resist. Each sample was then subjected to shear force testing, with the results shown in the graph of FIG. 8. Glass test specimens (with and without an adhesion layer) were also tested as controls. As observed, shear force decreased at longer sputtering times up to a 90 second sputter time, which corresponds to a 10-15 nm AuPd layer. This sample had the lowest shear force (at 3.00 lbf), and corresponds to the lowest anticipated separation force in use. However, longer sputtering times beyond 90 seconds caused an increase in shear (and thus separation) force, likely due to increased sputtering time resulting in surface roughening of the AuPd layer, which can increase total surface contact area with the cured resist, thereby increasing in turn the adhesion force that must be overcome. Example 2: Template Formation Metal coated polymer templates with 130 nm pitch gratings (65 nm line width; 65 nm space width) were prepared as follows. A silicon master template having 130 nm pitch gratings as above was loaded onto a roll-to-roll imprinting tool (LithoFlex™ 100, Molecular Imprints, Austin, Tex.), and the pattern was then transferred to 170 μm thick polycarbonate film by drop deposition of the UV curable imprint resist fluid as in Example 1 above onto the polycarbonate film, followed by imprinting with the silicon master template to form patterned polymeric layers on the polycarbonate film having the same dimensions (i.e., 130 nm pitch gratings with 65 nm line width and 65 nm space width). 
These patterned polymer layers were then subjected to AuPd or AgPd sputtering as described in Example 1 above, each sputtered for approximately 90 seconds (12 nm target thickness) to form AuPd or AgPd coated polymer templates at the following ratios: AuPd (75:25), AgPd (60:40), and AgPd (30:70). Example 3: Patterning Performance The templates of Example 2 were subjected to imprint testing as follows. Imprint resist fluid as above was drop-dispensed onto adhesion layer treated silicon wafers and imprinted using the AuPd and AgPd polymer templates of Example 2. Imprinting was performed by hand rolling under atmospheric pressure conditions. Once cured, template separation was performed by a manual peel-off method. The resultant imprinted patterned layers on the silicon wafers were evaluated for visible defects, including global or local separation failure and/or feature shearing, breaking or distortion. Each template exhibited good pattern transfer without showing any local or global separation failure, or feature shearing, breakage or distortion. Example 4: Separation Force An AuPd (60%/40%) coated polymer template was prepared as in Example 2 above but with a 60 nm half-pitch (60 nm line width, 60 nm space width) concentric gratings pattern. The template was sputtered for approximately 90 seconds to form an approximately 12 nm layer. This template was subjected to multiple imprint testing, as described in Example 3, and the separation force observed was compared to the separation force observed when using a standard fused silica template of the same pattern dimensions. The results are depicted in FIG. 9 (with data from the standard fused silica template identified by reference letter “A”, and data from the AuPd polymer template identified by reference letter “B”). 
Both templates exhibited a lowering of separation force over successive imprints (from an initial separation force at about 20 N down to levels of 10 N or less after the 5th successive imprint), with the sample template similar in overall separation performance as compared to the fused silica template. Example 5: Fluid Filling The template of Example 4 above was subjected to machine imprint testing using an HD700 imprint lithography tool (Molecular Imprints, Austin, Tex.). Fluid spread and fill times were monitored during imprinting. For each template, images were obtained of fluid spreading and filling at 3 seconds, 5 seconds, and 10 seconds, and these images were compared against those obtained using the standard fused silica template of the same pattern dimensions under identical conditions. These images are depicted at FIG. 10, with column “A” images corresponding to the fused silica template and the column “B” images corresponding to the Example 4 template. As can be observed, the Example 4 template provides enhanced fluid spreading and filling as compared to the fused silica template. The Example 4 template showed complete spreading and filling within 5 seconds, whereas the fused silica template still had not completely spread and filled by 10 seconds. Example 6: Template Longevity The template of Example 3 was subjected to 100× continuous imprint testing according to the procedures described in Example 3. After the 100th imprint there was still no imprint pattern degradation or any indication of global or local separation failure. Oxide Coated Polymer Templates Example 7: Template Formation (Vacuum Deposition) and Patterning Performance Oxide-coated polymer templates were prepared as otherwise described above in Example 2 but with silicon dioxide (SiO2) substituted for AuPd or AgPd and deposited by PECVD. A PTI-790 deposition system (Plasma-Therm, St. Petersburg, Fla.) 
was used to deposit SiO2 in various thicknesses onto the pre-patterned film to form SiO2-coated polymer templates. Templates were formed having SiO2 layer thicknesses of 10 nm and 15 nm as measured along the top of the gratings (with sidewall SiO2 thicknesses correspondingly reduced to 2.5 nm and 5 nm, respectively). These SiO2-coated polymer templates were subjected to imprint testing as described in Example 3, and likewise each template exhibited good pattern transfer without showing any local or global separation failure, or feature shearing, breakage or distortion. Example 8: Template Bending The 15 nm SiO2 coated polymer template of Example 7 was subjected to repeated bending to replicate use conditions associated with roll-to-roll imprinting. Specifically, the template (80 mm by 80 mm) was bent into a curve having an approximately 5 mm radius, then allowed to return to its normal configuration. This process was repeated 20 times and the template was inspected by SEM. No surface cracking or other damage was observed. Example 9: UV Transmission SiO2 templates prepared according to Example 7 above were tested for UV and visible light transmission. These templates had SiO2 layer thicknesses of 10 nm, 16 nm, and 23 nm, respectively. Also tested for comparison were AuPd and AgPd templates formed according to Example 2 above, as well as bare polycarbonate film. Air was used as the reference. The 10 nm, 16 nm, and 23 nm SiO2 coatings showed essentially the same UV transmission (75-76%) as bare PC film at λ=365 nm. The AuPd and AgPd coated templates, by contrast, had transmission levels of 41% and 44% respectively, about a 45% loss relative to the SiO2 coated templates. Example 10: Template Formation (Atmospheric Pressure Plasma Jet Process, APPJ) SiO2-like material (SiOx) coated polymer templates were formed using atmospheric pressure plasma jet (APPJ) deposition as follows. The initial patterned film was formed as described in Example 2. 
These pre-patterned polycarbonate films were then subjected to an APPJ deposition system (Surfx Technologies, Redondo Beach, Calif.) to coat SiOx material at various thicknesses (5 nm, 10 nm, 23 nm, 33 nm and 43 nm). A tetramethylcyclotetrasiloxane (TMCTS) precursor mixed with helium dilution gas and oxygen reactant gas was used. The APPJ deposition head, fixed on an x-y stage, was moved over the pre-patterned film surface with a 10 mm gap under ambient conditions to form the SiOx-coated polymer templates. Example 11: Patterning Performance SiOx-coated polymer templates prepared according to Example 10 were subjected to imprint testing as described in Examples 3 and 7. Each template exhibited good pattern transfer without showing any local or global separation failure, or feature shearing, breakage or distortion. Further modifications and alternative embodiments of various aspects will be apparent to those skilled in the art in view of this description. Accordingly, this description is to be construed as illustrative only. It is to be understood that the forms shown and described herein are to be taken as examples of embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed, and certain features may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description. Changes may be made in the elements described herein without departing from the spirit and scope as described in the following claims. 14922953 canon nanotechnolgies, inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 06:01PM Mar 25th, 2022 06:01PM Technology Technology Hardware & Equipment
nyse:caj Canon Nov 3rd, 2009 12:00AM Jun 14th, 2005 12:00AM https://www.uspto.gov?id=US07613317-20091103 Image processing apparatus, image processing method, computer program and computer readable storage medium There is provided a message processing apparatus including an image inputting unit for inputting a document image, an extracting unit for extracting a character image from the input document image, and an embedding unit for embedding watermark message by correcting a density value of the extracted character image. 7613317 1. An image processing apparatus comprising: an inputting unit arranged to input a watermarked document image, wherein the watermarked document includes a first area where density value of pixel data of characters has been altered in order to embed predetermined information bits, and a second area different from the first area where density value of pixel data of characters has been altered in order to embed a watermark message; a generating unit arranged to generate, for each character image, a density histogram indicating a frequency of a density value of pixel data of each character image in the input document image; a first most frequent value obtaining unit arranged to obtain, for each character image in the first area, a most frequent value in the density histogram generated by the generating unit; a minimum value obtaining unit arranged to obtain the minimum most frequent value obtained by the first most frequent value obtaining unit for characters in which an information bit 1 is embedded; a maximum value obtaining unit arranged to obtain the maximum most frequent value obtained by the first most frequent value obtaining unit for characters in which an information bit 0 is embedded; a reference value obtaining unit arranged to obtain a reference frequency value by averaging the minimum frequency value obtained by the minimum value obtaining unit with the maximum frequency value obtained by the maximum value obtaining unit; a second 
most frequent value obtaining unit arranged to obtain, for each character in the second area, a most frequent value in the density histogram generated by the generating unit; a comparing unit arranged to compare, for each character image in the second area, the reference frequency value obtained by the reference value obtaining unit and the most frequent value obtained by the second most frequent value obtaining unit; and an extracting unit arranged to extract a watermark message embedded in each character image in the second area based on a result of the comparison by the comparing unit. 2. An image processing apparatus according to claim 1, wherein the extracting unit is arranged to assign a first predetermined value to the watermark message when the most frequent value is larger than the reference frequency value and a second predetermined value when the most frequent value is smaller than the reference frequency value. 3. An image processing method performed by an image processing apparatus comprising: an inputting step of inputting a watermarked document image, wherein the watermarked document includes a first area where density value of pixel data of characters has been altered in order to embed predetermined information bits, and a second area different from the first area where density value of pixel data of characters has been altered in order to embed a watermark message; a generating step of generating, for each character image, a density histogram indicating a frequency of a density value of pixel data of each character image in the input document image; a first most frequent value obtaining step of obtaining, for each character image in the first area, a most frequent value in the density histogram generated in the generating step; a minimum value obtaining step of obtaining the minimum most frequent value obtained in the first most frequent value obtaining step for characters in which an information bit 1 is embedded; a maximum value obtaining step of 
obtaining the maximum most frequent value obtained in the first most frequent value obtaining step for characters in which an information bit 0 is embedded; a reference value obtaining step of obtaining a reference frequency value by averaging the minimum frequency value obtained in the minimum value obtaining step with the maximum frequency value obtained in the maximum value obtaining step; a second most frequent value obtaining step of obtaining, for each character in the second area, a most frequent value in the density histogram generated in the generating step; a comparing step of comparing, for each character image in the second area, the reference frequency value obtained in the reference value obtaining step and the most frequent value obtained in the second most frequent value obtaining step; and an extracting step of extracting a watermark message embedded in each character image in the second area based on a result of the comparison in the comparing step. 4. A computer-readable medium having stored thereon a computer program for performing the image processing method according to claim 3. 5. An image processing method according to claim 3, wherein a first predetermined value is assigned to the watermark message when the most frequent value is larger than the reference frequency value and a second predetermined value is assigned to the watermark message when the most frequent value is smaller than the reference frequency value. 6. A computer-readable medium having stored thereon a computer program for performing the image processing method according to claim 5. 6 BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to the technology of embedding message on a document image, and extracting the embedded message. 2. 
Description of the Related Art In recent years, in digital image forming devices, such as printers and copying machines, the improvement in image quality has been remarkable, and high-definition printed matter can now easily be obtained. That is, anyone can obtain printed matter produced by image processing with a highly efficient scanner, printer, copying machine, and computer. Therefore, problems such as illegal copying and falsification of documents have occurred. In order to prevent or inhibit such illegal copies or falsification, access control message has been embedded as watermark message in the printed matter itself in recent years. The following processes have been proposed as general realization methods for such watermarks: (1) embedding message by controlling the quantity of space between words; (2) embedding message by rotating a character; (3) embedding message by scaling a character; and (4) embedding message by transforming a character. FIG. 1 shows printed matter of the type which embeds message by controlling the quantity of space between words, e.g., the quantity of space between English words. Here, p and s denote spaces. The spaces will be set to p1=(1+q)(p+s)/2 and s1=(1−q)(p+s)/2 if the embedded watermark message bit is “0”, and to p1=(1−q)(p+s)/2 and s1=(1+q)(p+s)/2 if the embedded watermark message bit is “1”. The range of q is 0<q<1. FIG. 2 illustrates a case in which watermark message is embedded by expanding or reducing a character size. For example, in cases where a character is expanded relative to its original size, “1” is embedded (A in FIG. 2), and “0” is embedded in cases where the character size is reduced (B in FIG. 2). The character that is the embedded object may be a continuous character, a character of a prescribed interval, or a character of a prescribed position. In FIG. 2, since the character “m” is expanded and the character “u” is reduced, the watermark message “10” is embedded. 
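The word-space formulas above lend themselves to a direct sketch. Here p, s, and q follow the definitions in the text (the combined space p + s is redistributed, with q setting the embedding strength); the function name is ours:

```python
def watermarked_spaces(p, s, q, bit):
    """Redistribute the combined space (p + s) between p1 and s1 per the
    watermark bit: bit 0 enlarges p1, bit 1 enlarges s1 (0 < q < 1)."""
    half = (p + s) / 2
    if bit == 0:
        return (1 + q) * half, (1 - q) * half  # p1, s1
    return (1 - q) * half, (1 + q) * half

# Example: p = s = 10 units, q = 0.5
print(watermarked_spaces(10, 10, 0.5, 0))  # (15.0, 5.0)
```

Note that p1 + s1 always equals p + s, so the line width is preserved; only the split between the two spaces carries the bit.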
FIG. 3 is a figure illustrating a case in which watermark message is embedded by rotating a character (i.e., changing the lean of the character). For example, in cases where the character is rotated clockwise, “1” is embedded (C in FIG. 3), and “0” is embedded in cases where the character is rotated counterclockwise (D in FIG. 3). The character that is the embedded object may be a continuous character, a character of a prescribed interval, or a character of a prescribed position. In FIG. 3, since the character “m” is rotated clockwise and the character “t” is rotated counterclockwise, the message “10” is embedded. However, in cases where watermark message was embedded using the above-described conventional methods, a sense of incongruity is produced by the differences in character size, character spacing, and character lean. SUMMARY OF THE INVENTION In view of the above problems in the conventional art, the present invention provides a message processing apparatus which can minimize degradation of a font, ensure embedding of at least a fixed amount of message, and perform embedding and extraction of watermark message with high noise resistance. In accordance with an aspect of the present invention, a message processing apparatus includes: an image inputting unit arranged to input a document image; an extracting unit arranged to extract a character image from the document image input by the image inputting unit; and an embedding unit arranged to embed watermark message by correcting a density value of the character image extracted by the extracting unit. 
In accordance with another aspect of the present invention, a message processing apparatus includes: an inputting unit arranged to input a watermarked document image; an analyzing unit arranged to obtain a most frequent value of the character images in the document image input by the inputting unit; and an extracting unit arranged to extract the watermark message by comparing the most frequent value of the character images with a predetermined value. Further features and advantages of the present invention will become apparent from the following description of exemplary embodiments (with reference to the accompanying drawings). BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 illustrates the electronic watermark embedding method using the interval of the character. FIG. 2 illustrates the electronic watermark embedding method using the size of the character. FIG. 3 illustrates the electronic watermark embedding method using the lean of a character. FIG. 4 is a block diagram of the digital watermark embedding apparatus of the present invention. FIG. 5 illustrates the system provided with the digital watermark embedding and extracting apparatus of the present invention. FIG. 6 is a flow chart illustrating the operation procedures of the digital watermark embedding apparatus in a first embodiment. FIG. 7 is a flow chart illustrating the circumscribed rectangle extraction and a setup of a reference value in the first embodiment. FIG. 8 is a flow chart illustrating the electronic watermark embedding method in the first embodiment. FIG. 9 illustrates a density histogram before and after the digital watermark embedding in the first embodiment. FIG. 10 is a block diagram of the digital watermark extracting apparatus of the present invention. FIG. 11 is a flow chart illustrating operation procedures of the digital watermark extraction in the first embodiment. FIG. 12 is a flow chart illustrating the digital watermark extraction method in the first embodiment. FIG. 
13 shows a density histogram before and after the digital watermark embedding in a first modification. FIG. 14 is a flow chart illustrating operation procedures of embedding a digital watermark of the first modification. FIG. 15 illustrates a density histogram before and after the digital watermark embedding in a second modification. FIG. 16 is a flow chart illustrating operation procedures of embedding a digital watermark of the second modification. FIG. 17 illustrates a density histogram before and after the digital watermark embedding in a second embodiment. FIG. 18 is a flow chart illustrating operation procedures for extracting a circumscribed rectangle and setting a reference value in the second embodiment. FIG. 19 is a flow chart illustrating operation procedures of embedding a digital watermark in the second embodiment. FIG. 20 is a flow chart illustrating operation procedures of extracting a digital watermark in the second embodiment. FIG. 21 is a figure illustrating characters before and after the digital watermark embedding in a third embodiment. FIG. 22 is a flow chart illustrating operation procedures of embedding a digital watermark in the third embodiment. FIG. 23 is a flow chart illustrating operation procedures of extracting the digital watermark in the third embodiment. FIG. 24 is a figure illustrating a fixed message bit and a digital watermark message bit in a fourth embodiment. FIG. 25 is a flow chart illustrating operation procedures of extracting the digital watermark in a fourth embodiment. FIG. 26 is a flow chart illustrating operation procedures of calculating a reference value in the fourth embodiment. DESCRIPTION OF THE EMBODIMENTS Exemplary embodiments of the watermark message embedding apparatus of the present invention are described below with reference to the accompanying drawings. First Embodiment FIG. 4 is a block diagram of the digital watermark embedding apparatus of the present invention. As shown in FIG. 
4, a document image 100 which is an object that is to have watermark message embedded therein is input into an image inputting unit 101. The image inputting unit 101 provides the document image 100 to a document analyzing unit 102. In the document analyzing unit 102, the spatial relationship of the characters in the document image 100 is analyzed. After analyzing the document image 100, the analyzed document image is forwarded from the document analyzing unit 102 to an embedding determination unit 103. In the embedding determination unit 103, it is determined whether a digital watermark can be embedded in the document image 100. If the embedding determination unit 103 determines that a digital watermark can be embedded in the document image 100, the document image 100 is forwarded to an embedding unit 106. Watermark message 104 is input via a watermark message inputting unit 105. The watermark message 104 is forwarded from the watermark message inputting unit 105 to the embedding unit 106. The embedding unit 106 embeds the watermark message 104 received from the watermark message inputting unit 105 into the document image 100 received from the embedding determination unit 103 to generate an output image that includes the embedded watermark. The image is forwarded from the embedding unit 106 to an image outputting unit 107, which outputs the watermarked image 108. FIG. 5 is a block diagram illustrating system components of the digital watermark embedding and extracting apparatus of the present invention. It is not necessary to use all of the components (functions) shown in FIG. 5 in realization of the digital watermark embedding and extracting apparatus. In FIG. 5, a computer 201 is a general-purpose message processor, such as a personal computer. The computer 201 can input the image 100 read by a scanner 217. The computer 201 can perform editing and storage of the image. The image 100 obtained by the scanner 217 can be printed by a printer 216. 
A user can perform various operations by inputting message to the computer 201 via an interface (I/F) 212 using a mouse 213 and/or a keyboard 214. In the computer 201, the various components are connected by a bus 207 which is used for transferring data among the various components. In FIG. 5, a central processing unit (CPU) 202 controls operation of the components in the computer 201. The CPU 202 can execute stored programs. The programs can be stored in a main storage device 203 which includes a random access memory (RAM). In addition to storing programs, the RAM is used to temporarily store image data of an object for the processing performed in the CPU 202. A hard disk drive (HDD) 204 is a device that can store the program and image data which are transmitted to the main storage device 203. The HDD can also be used to save other data, for example, the image data after processing. A scanner interface (I/F) 215 is connected with the scanner 217 which reads a manuscript, a film, etc. and generates image data. The scanner I/F 215 is an interface for inputting the image data obtained with the scanner 217 into the computer 201. A printer I/F 208 is an interface for transmitting the image data to the printer 216. The computer also includes drives for reading data from or writing data to an external storage medium. A compact disk (CD) drive 209 is a device for reading data stored on a CD (e.g., CD-R (CD-recordable) or CD-RW (CD-rewritable)) or writing data to the CD. An FDD drive 211 is a device for reading data stored on a floppy disk (FD) or writing data to the FD. A DVD drive 210 is a device for reading data stored on a DVD (digital versatile disk) or writing data to the DVD. In cases where the program for image editing or the printer driver is stored on a CD, FD, DVD, etc., these programs are installed on the HDD 204 and transmitted to the main storage device 203 if needed. 
An input device interface (I/F) 212 is an I/F connected to one or more input devices, such as a keyboard 214 and a mouse 213, in order to receive input from the input devices 213, 214. A monitor 206 is a display which can display the extraction result and the processing progress of watermark message. A video controller 205 is a device for transmitting display data to the monitor 206. Note that the present invention can be applied to an apparatus having a single device (for example, a copying machine, a fax, etc.) or to a system constituted by a plurality of devices (for example, a host computer, interface devices, a scanner, a printer, etc.). In the above-described arrangement, the computer 201 functions as the digital watermark embedding or extracting apparatus when the CPU 202 executes the program loaded into the main storage device 203 according to the input designation from the mouse 213 or keyboard 214. It is also possible to view an execution condition and its result on the monitor 206. Methods of embedding a digital watermark and extracting the embedded digital watermark are described below. FIG. 6 is a flow chart illustrating operation procedures of embedding a digital watermark using the digital watermark embedding apparatus according to the first embodiment. First, in step S301, the original document image 100 which is the embedding object of watermark message is input into the document analyzing unit 102 via the image inputting unit 101. In the watermark embedding apparatus shown in FIG. 5, the image inputting unit 101 for inputting the document image 100 is represented by the scanner 217. The document image data input into the document analyzing unit 102 may be bitmap data output by reading printed matter with the scanner 217, or electronic data created by using a text editing application program. 
The document image data can also be bitmap data output by converting electronic data, which is data of a particular form corresponding to the application program or text format, by using image processing software. The application program, text format and image processing software program may be stored on the hard disk 204, an external medium, such as a CD read using the CD drive 209, a DVD read using the DVD drive 210, an FD read using the FDD drive 211, or some combination thereof. In step S302, an extraction of a circumscribed rectangle (character area) and a setup of a reference value are performed by the document analyzing unit 102 according to the document image data input at step S301. The procedure of step S302 is explained in more detail below with reference to FIG. 7. FIG. 7 is a flow chart showing processing details of step S302 of FIG. 6 of extracting a circumscribed rectangle and setting up a reference value according to the first embodiment. Each step of the flowchart shown in FIG. 7 is performed for the characters of the whole document. The circumscribed rectangle of a character is a rectangle surrounding the character. In this embodiment, the circumscribed rectangle area shows the character area which is the object of the embedding of the digital watermark. A search is performed for the blank part (the portion which shows a density other than the density of the character) by projecting each pixel value of the document image data onto a vertical coordinate axis. The lines in which characters exist are distinguished using the search result. Then, a search is performed for the blank part by projecting each pixel value of the document image data onto a horizontal coordinate axis per line. The circumscribed rectangle area of each character is distinguished by using the search result. The above-described processing extracts the circumscribed rectangle of each character (step S302a). 
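As a concrete illustration of the projection-based extraction of step S302a, the sketch below finds character boxes in a small grayscale raster. It is not the patent's implementation; the function names (`find_runs`, `extract_boxes`) and the ink threshold of 128 are illustrative assumptions.

```python
# Sketch of circumscribed-rectangle (character-box) extraction by projection,
# assuming a grayscale image as a list of rows with 0=black and 255=white.
# Names and thresholds are illustrative, not from the patent.

def find_runs(profile, blank=0):
    """Return (start, end) index pairs where the projection is non-blank."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v > blank and start is None:
            start = i
        elif v <= blank and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs

def extract_boxes(img, ink_threshold=128):
    """Split the page into text lines by projecting ink onto the vertical
    axis, then split each line into characters by projecting onto the
    horizontal axis, as described for step S302a."""
    w = len(img[0])
    row_profile = [sum(1 for x in row if x < ink_threshold) for row in img]
    boxes = []
    for y0, y1 in find_runs(row_profile):              # text lines
        col_profile = [sum(1 for y in range(y0, y1) if img[y][x] < ink_threshold)
                       for x in range(w)]
        for x0, x1 in find_runs(col_profile):          # characters in the line
            boxes.append((x0, y0, x1, y1))
    return boxes
```

Projecting onto the vertical axis first isolates the text lines, so the per-line horizontal projection cleanly separates neighboring characters.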
Next, a density histogram of each of the extracted characters is calculated, and the most frequent value of each of the characters is determined (step S302b). Here, the density histogram is a plot of the frequency of the density values of all of the pixel data in the circumscribed rectangle area of each character. The frequency is the number of times the same density value appears in one character. In this embodiment, since the density histogram is created based on the pixel data of the circumscribed rectangle area of a character, the frequency of the density value of the portion which is not a character becomes the highest. The “most frequent value” in this embodiment is therefore taken as the density value with the highest frequency in the histogram, excluding the density values of the portion which is not a character. For example, in graph 1 of FIG. 9 (described later), the density values of the portion which is not the character are 200 to 255. In cases where the density histogram is created not based on the pixel data of the whole circumscribed rectangle area but based on the pixel data of only the character part, it is not necessary to perform processing for removing the pixel data of the portion which is not a character. In this embodiment, the “most frequent value” is a density value with the highest frequency in the character part (regardless of the method used for creating the density histogram). Based on the most frequent value of each character obtained in step S302b, the maximum and the minimum are extracted out of the most frequent values of all the characters in the document image. For example, the minimum of the most frequent values is assumed to be “a” and the maximum of the most frequent values is assumed to be “b”. The range from the minimum “a” to the maximum “b” is determined as a correction range (described later) (step S302c). The middle value between the minimum “a” and the maximum “b” is calculated. 
Let the middle value be reference value “t.” That is, t=(a+b)/2 (step S302d). The reference value is used as the criterion of determination when extracting watermark message. Since the reference value may be required for watermark extraction, it may be stored in a storage device. A user may store the reference value secretly as a key for watermark extraction. The reference value may also be embedded as watermark message in the document image data. Although the reference value was set to t=(a+b)/2 in the above, it is not limited to this value. The reference value should be a predetermined value in the correction range. Although the reference value was calculated from the most frequent values of all of the characters in the document image, it is not limited to this. For example, the reference value may be the most frequent value of the character which appears first, or the reference value may be calculated from the most frequent values of a plurality of characters. After the reference value is calculated (step S302d of FIG. 7), processing returns to FIG. 6. After a circumscribed rectangle is extracted and a reference value is set up in step S302, processing proceeds to step S303 of FIG. 6. In step S303, the watermark message 104 to embed is input from the watermark message inputting unit 105 by using the keyboard 214. The watermark message may also be selected from data stored in the storage device, e.g., the HDD 204. Next, one character is input in step S304. In step S305, the embedding determination unit 103 determines, based on the size of the circumscribed rectangle, whether the character input in step S304 is a character in which the watermark can be embedded. Characters that are too small are exempted from having a watermark embedded. If it is determined in step S305 that embedding is not possible for the character input at step S304 (no in step S305), processing returns to step S304 and the next character is input. 
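Steps S302b through S302d above can be sketched as follows, assuming 256 gray levels and that density values of 200 to 255 belong to the non-character (background) portion, as in graph 1 of FIG. 9; the helper names are illustrative, not from the patent.

```python
# Minimal sketch of steps S302b-S302d, assuming 256 gradations with 0=black,
# 255=white, and densities >= 200 treated as the non-character portion.

def most_frequent_value(pixels, background_from=200):
    """Histogram the densities of one circumscribed rectangle and return the
    most frequent density value of the character part only (step S302b)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    # Ignore the background range background_from..255 when taking the peak.
    return max(range(background_from), key=lambda v: hist[v])

def correction_range_and_reference(mfvs):
    """From the per-character most frequent values, derive the correction
    range [a, b] (step S302c) and the reference value t=(a+b)/2 (S302d)."""
    a, b = min(mfvs), max(mfvs)
    return a, b, (a + b) / 2
```

As the text notes, t=(a+b)/2 is only one choice; any predetermined value inside the correction range would serve as the extraction key.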
If it is determined in step S305 that embedding is possible for the character input at step S304 (yes in step S305), processing proceeds to step S306. In step S306, the embedding unit 106 embeds the digital watermark into the character input at step S304. A process performed by the embedding unit 106 for shifting the position of the most frequent value in the density histogram of a character to the right or left, according to one of the digital watermark embedding methods, is explained next. FIG. 9 is a figure illustrating the density histogram before and after the digital watermark embedding in the first embodiment. Graph 1 of FIG. 9 is a density histogram of a character 900. In this embodiment, the density value of a character is expressed by 256 gradations, a black density value is set to “0,” and a white density value is set to “255.” In graph 1 of FIG. 9, the left end of the graph shows density value “0” (black), and the right end shows density value “255” (white). The graph shows the minimum of the most frequent values (“a”) and the maximum of the most frequent values (“b”) of the document image calculated in step S302. Furthermore, “d” in the graph is the range from “a” to “b” and shows the correction range. The graph also shows the reference value “t” (t=(a+b)/2). Graph 2 in FIG. 9 is a graph in which the position of the most frequent value in the density histogram of the character is shifted to the left, and shows the case where “0” is embedded. Graph 3 in FIG. 9 is a graph in which the position of the most frequent value of the character is shifted to the right, and shows the case where “1” is embedded. In cases where the position of the most frequent value in the density histogram of the character is shifted to the left, the watermark is embedded as shown in graph 2 of FIG. 9, and the density values of the pixels contained in the correction range “d” are corrected to “a”. 
The density values of the pixels contained from “0” to “a” and the pixels contained from “b” to “255” are not corrected. Since the density values of the pixels from “a” to “b,” which include the reference value “t,” are changed into “a,” the most frequent value of the character is set to “a” (graph 2 in FIG. 9). In cases where the position of the most frequent value in the density histogram of the character is shifted to the right, the watermark is embedded as shown in graph 3 of FIG. 9, and the density values of the pixels contained in the correction range “d” are corrected to “b.” The density values of the pixels contained from “0” to “a” and the pixels contained from “b” to “255” are not corrected. Since the density values of the pixels from “a” to “b,” which include the reference value “t,” are changed into “b,” the most frequent value of the character is set to “b” (graph 3 in FIG. 9). The digital watermark message is embedded using the position of the most frequent value in the density histogram relative to the reference value. For example, in cases where “0” is embedded, the density values are corrected so that “most frequent value<t,” and in cases where “1” is embedded, the density values are corrected so that “most frequent value>t.” FIG. 8 is a flow chart illustrating processing of embedding an electronic watermark (step S306 of FIG. 6) in the first embodiment. First, in step S306a, the watermark message bit to embed is selected. For example, “1” is assigned to the first character in cases where the watermark message shown in the example in FIG. 24 is input. In step S306b, it is determined whether the watermark message bit to embed is “1.” If it is determined in step S306b that the watermark message bit is “1” (yes in step S306b), processing proceeds to step S306c. In step S306c, the density values from “a” to “b” are corrected to a value larger than “b” so that “most frequent value>t.” Processing then returns to FIG. 6. 
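The corrections shown in graphs 2 and 3 of FIG. 9 amount to clamping every pixel in the correction range to one end of it. A minimal sketch, with illustrative names and the boundary values “a” and “b” used as the targets:

```python
# Sketch of the graph 2 / graph 3 correction of the first embodiment: every
# pixel whose density falls in the correction range [a, b] is set to "a"
# (embedding "0", most frequent value < t) or to "b" (embedding "1", most
# frequent value > t). Pixels outside [a, b] are left untouched.

def embed_bit(pixels, bit, a, b):
    target = b if bit == 1 else a
    return [target if a <= p <= b else p for p in pixels]
```

After `embed_bit(..., 1, a, b)` the character's most frequent value equals “b,” which is larger than t=(a+b)/2, so the comparison against “t” at extraction time recovers “1.”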
If it is determined in step S306b that the watermark message bit is not “1” (no in step S306b), processing proceeds to step S306d. In step S306d, the density values from “a” to “b” are corrected to a value smaller than “a” to set “most frequent value<t.” The range from “a” to “b” is the correction range of the density histogram obtained in step S302c (FIG. 7), and “t” is the reference value (calculated in step S302d of FIG. 7) used when extracting watermark message. Processing then returns to FIG. 6. In step S307 of FIG. 6, it is determined whether the character input at step S304 is the last character in the document image. If it is determined that the character input at step S304 is the last character in the document image (yes at step S307), processing proceeds to step S308. In step S308, the watermarked image is output from the image outputting unit 107. The output may be to print the image with the embedded watermark message, to store the image with the embedded watermark message as image data, to transmit the image with the embedded watermark message to one or more other terminals, etc. On the other hand, if it is determined in step S307 that the character input at step S304 is not the last character in the document image (no at step S307), processing returns to step S304, and the next character is input. FIG. 10 is a block diagram of the digital watermark extracting apparatus of the present invention. As shown in FIG. 10, a document image 700 which is the extracting object of embedded watermark message is input into an image inputting unit 701 represented by the scanner 217 of FIG. 5. The spatial relationship of the characters is analyzed by a document analyzing unit 702. A determination of whether there is embedding is made in an embedding determination unit 703. A watermark message extracting unit 704 extracts a digital watermark, and outputs watermark message 705. FIG. 
11 is a flow chart illustrating operation procedures of digital watermark extraction in the first embodiment. First, in step S801, the watermarked image is input. In step S802, the circumscribed rectangle (character) is extracted from the image. The document image 700 used as the object for extraction is input into the document analyzing unit 702 via the image inputting unit 701 represented by the scanner 217 of FIG. 5. The document image data input into the document analyzing unit 702 may be bitmap data output by reading printed matter with the scanner 217, or electronic data created by using a text editing application program. The document image data may also be bitmap data output by converting electronic data, which is data of a particular form corresponding to the application program or text format stored on the hard disk 204 or on a storage medium connected to a drive, such as a CD read by the CD drive 209, a DVD read by the DVD drive 210 or an FD read by the FDD drive 211, by using image processing software. Next, one character is input in step S803. In step S804, it is determined by the embedding determination unit 703 whether the circumscribed rectangle area of the input character is the area of a character in which the digital watermark is embedded. The embedding determination unit 703 is similar to the embedding determination unit 103 of FIG. 4, so the characters in which the watermark is embedded can be determined correctly. If it is determined in step S804 that a digital watermark is not embedded (no in step S804), processing returns to step S803 and the next character is input. If it is determined in step S804 that a digital watermark is embedded (yes in step S804), processing proceeds to step S805. In step S805, the watermark message is extracted by the watermark message extracting unit 704. Details of step S805 of extracting the digital watermark are provided next with reference to FIG. 12. FIG. 
12 is a flow chart illustrating the digital watermark extraction method in the first embodiment. First, the most frequent value of the density histogram is calculated (step S805a). Next, in step S805b, it is determined whether the calculated most frequent value is larger than the reference value “t.” In cases where the most frequent value is larger than “t” (yes in step S805b), “1” is extracted as watermark message (step S805c). In cases where the most frequent value is not larger than “t” (no in step S805b), “0” is extracted as watermark message (step S805d). Here, “t” is the reference value used to extract watermark message, i.e., key message. A user may input the reference value “t” using a keyboard, or the reference value “t” may be stored beforehand in the storage device. After extracting the watermark message (in step S805c or step S805d), processing returns to FIG. 11. Next, at step S806 it is determined whether the character input in step S803 is the last character. In cases where it is not the last character, processing returns to step S803 to input the next character. In cases where it is the last character, watermark message is output (step S807) and processing ends. <First Modification> FIG. 13 illustrates the density histogram before and after the digital watermark embedding in the first modification. An arrangement and procedure for operation of the first modification are the same as those of the first embodiment (described above) except for step S306. FIG. 14 is a flow chart illustrating operation procedures of step S306 of the first modification. First, in step S306e, the watermark message bit to embed is selected. For example, “1” is assigned to the first character in cases where the watermark message (digital watermark message bit) shown in the example in FIG. 24 is input. 
In step S306f, it is determined whether the watermark message bit to embed is “1.” If it is determined that the bit to embed is “1” (yes in step S306f), processing proceeds to step S306g. In step S306g, the density values from “a” to “b” are corrected to “b,” and in step S306i, the density values from “a−x” to “a” are shifted into the density values from “b−x” to “b,” to set “most frequent value>t.” This can prevent the most frequent value from being generated in the area of the density values from “a−x” to “a” due to a shift of the color after a scan. Processing then returns to FIG. 6. If it is determined that the bit is not “1” (no in step S306f), processing proceeds to step S306h. In step S306h, the density values from “a” to “b” are corrected to “a,” and in step S306j, the density values from “b” to “b+x” are shifted into the density values from “a” to “a+x,” to set “most frequent value<t.” This can prevent the most frequent value from being generated in the area of the density values from “b” to “b+x” due to a shift of the color after a scan. Here, x satisfies 0≦x<a or 0≦x<255−b. “a” and “b” may be decided beforehand to satisfy the above expression. “a” and “b” may also be set by the user. The procedure of extracting the digital watermark is the same extracting processing as in the first embodiment. Thus, in the first modification, even if a shift of color arises at the time of a scan, the watermark message (most frequent value) can be extracted more correctly. <Second Modification> FIG. 15 is a density histogram before and after the digital watermark embedding in a second modification. An arrangement and procedure for operation are the same as those of the first embodiment (described above) except for step S306. FIG. 16 is a flow chart illustrating operation procedures of step S306 of the second modification. First, in step S306k, the watermark message bit to embed is selected. 
For example, “1” is assigned to the first character in cases where the watermark message (digital watermark message bit) shown in the example in FIG. 24 is input. In step S306l, it is determined whether the watermark message bit to embed is “1.” If it is determined that the watermark message bit to embed is “1” (yes in step S306l), processing proceeds to step S306m. In step S306m, the density values from “a” to “b” are corrected into “b−n” to “b” with the same frequency. That is, the density histogram after the change becomes like graph 3 of FIG. 15, and the most frequent value turns into a density value from “b−n” to “b.” Here, “n” is an integer which satisfies 0<n<(b−a)/2, so that the corrected band does not cross the reference value “t.” If it is determined that the watermark message bit to embed is not “1” (no in step S306l), processing proceeds to step S306n. In step S306n, the density values from “a” to “b” are corrected into “a” to “a+n” with the same frequency. In order to lessen degradation of the color, the data near “a” is changed to a value near “a,” and the data near “b” is changed to a value near “a+n.” The procedure of extracting the digital watermark is performed using the same extracting processing as the first embodiment. Thus, in the second modification, when embedding watermark message, degradation of the color can be lessened by changing the density into a density near that of the original image within ranges which have substantially the same width (here, from “a” to “a+n,” or from “b−n” to “b”). The range of the density after the change is not limited to this, and may be a range from “a−n” to “a+n,” or a range from “a−m” to “a” (where 0<m<t). When embedding the message bit “0,” the density values from “a” to “b” are changed into the density values from “a” to “a+n” so as to occur with the same frequency, but it is not necessary for them to occur with the same frequency. 
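One plausible reading of the second modification's correction (steps S306m and S306n) is an order-preserving remap of the correction range [a, b] onto a narrow band of width n near “b” (bit “1”) or near “a” (bit “0”), so that each pixel stays close to its original density. The linear remap below is an illustrative assumption; the patent only requires the band to be filled.

```python
# Illustrative sketch of the second modification: densities in [a, b] are
# remapped, order preserved, onto the band [b-n, b] (bit "1") or [a, a+n]
# (bit "0"); data near "a" stays near the low end of the band and data near
# "b" near the high end, limiting color degradation. Pixels outside [a, b]
# are untouched.

def spread_embed(pixels, bit, a, b, n):
    lo = (b - n) if bit == 1 else a          # low end of the target band
    span = (b - a) or 1
    return [lo + (p - a) * n // span if a <= p <= b else p for p in pixels]
```

With 0<n<(b−a)/2, the band [b−n, b] stays above t=(a+b)/2 and [a, a+n] stays below it, so the first embodiment's extraction still applies unchanged.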
Although the first embodiment, first modification, and second modification are described using the same width value for shifting the density histogram to the right and to the left, the width values for shifting to the right and to the left may be set separately. For example, “x” of the first modification and “n” of the second modification may take different values for the right shift and the left shift. Second Embodiment The first embodiment, first modification, and second modification described how to extract watermark message by extracting the position of the most frequent value. The second embodiment describes how to extract watermark message by determining whether the frequency in a certain range of the density histogram does not appear or is below a certain frequency. FIG. 17 is a density histogram before and after the digital watermark embedding in the second embodiment. Graph 1 of FIG. 17 is a density histogram of a character 1700. In this embodiment, the density value of a character is expressed by 256 gradations, a black density value is set to “0,” and a white density value is set to “255.” In graph 1 of FIG. 17, the left end of the graph shows density value “0” (black), and the right end shows density value “255” (white). The section from “a” to “b” in the graph is the section (expressed as [a, b]) of density values in which watermark message is embedded, with minimum “a” and maximum “b.” “a” and “b” may be decided beforehand in the range which fulfills the following condition expression: 0≦a<b≦200. “h” in the graph is the highest frequency in [a, b]. The values for “a” and “b” may be set by the user. Graph 2 of FIG. 17 expresses the case where “0” is embedded, by moving a part of the density values larger than “b” in the density histogram into [a, b], and correcting all of the frequencies of [a, b] to “k.” Graph 3 of FIG. 
17 expresses the case where “1” is embedded by making all the density values of [a, b] change into a certain value larger than “b,” and setting the frequency of [a, b] to “0.” The watermark message is embedded by using the size of the frequency in a certain density section of the density histogram. For example, in cases where “0” is embedded, the density histogram is corrected so that the frequency in the section becomes “k” or more. In cases where “1” is embedded, the density histogram is corrected so that the frequency in the section is set to “0.” An arrangement and procedure for operation in the second embodiment are the same as those of the first embodiment except for steps S302, S306 and S805. FIG. 18 is a flow chart illustrating the operation procedures of step S302 in the second embodiment. In step S302e, an extraction of a circumscribed rectangle (character area) is performed according to the input document image data. Next, the density histogram of each extracted character is calculated (step S302f). In step S302g, the highest frequency “h” in the density value section ([a, b]) from “a” to “b” is calculated from the calculated density histogram. “a” and “b” may be decided beforehand in the range which fulfills the following expression: 0≦a<b≦200. These values may be decided by a user. FIG. 19 is a flow chart illustrating operation procedures of step S306 in the second embodiment. First, in step S306o, the watermark message bit to embed is selected. For example, “1” is assigned to the first character in cases where the watermark message (digital watermark message bit) shown in the example in FIG. 24 is input. In step S306p, it is determined whether the watermark message bit to embed is “0.” If it is determined in step S306p that the bit is “0” (yes in step S306p), processing proceeds to step S306q. In step S306q, it is determined whether frequency “h” is smaller than reference value “k” defined beforehand. 
The frequency “h” is the highest frequency in [a, b], and is calculated in step S302g. The reference value “k” satisfies k>0; it is used in the embedding processing of the digital watermark, and is also used as key message for extraction of the digital watermark message. In cases where “h” is smaller than the reference value “k” (yes in step S306q), a part of the density values larger than “b” is shifted into [a, b], and all of the frequencies of [a, b] are set to “k” (step S306s). Processing then returns to FIG. 6. If it is determined that the bit is “1” (no in step S306p), processing proceeds to step S306r. In step S306r, it is determined whether frequency “h” is larger than “0.” In cases where the frequency “h” is larger than “0” (yes in step S306r), the density values of [a, b] are corrected to a value larger than “b” (step S306t). Processing then returns to FIG. 6. FIG. 20 is a flow chart illustrating operation procedures of step S805 in the second embodiment. First, the density histogram is calculated (step S805e). Based on the density histogram, the highest frequency “h” in [a, b] is calculated (step S805f). In step S805g, it is determined whether the calculated frequency “h” is less than “k×e.” In cases where “h” is less than “k×e” (yes in step S805g), “1” is extracted as the watermark message (step S805h). Processing then returns to FIG. 11. On the other hand, in cases where the frequency “h” is not less than “k×e” (no in step S805g), “0” is extracted as the watermark message (step S805i). Processing then returns to FIG. 11. “k” is the reference value for extracting watermark message, i.e., key message. “k” may be input by the user using a keyboard, or may be stored beforehand in the storage device. “e” is a constant which satisfies 0<e≦1. In step S805g, the watermark message may instead be extracted using the following formula: Σ(i=a to b) h(i) < k×e×(b−a+1), where Σ(i=a to b) h(i) is the sum of all of the frequencies of [a, b]. 
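The summed-frequency variant of step S805g can be sketched directly from the formula above; the function name is illustrative, not from the patent.

```python
# Sketch of the second embodiment's extraction using the summed-frequency
# test of step S805g: the frequencies h(i) of the section [a, b] are added
# and compared against k*e*(b-a+1). An (almost) emptied section means "1"
# was embedded; a section filled to frequency "k" or more means "0".

def extract_bit_by_section(pixels, a, b, k, e=1.0):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist[a:b + 1])               # sum of h(i) for i in [a, b]
    return 1 if total < k * e * (b - a + 1) else 0
```

The slack constant e (0<e≦1) tolerates a few stray pixels landing in [a, b] after printing and scanning without flipping an embedded “1” to “0.”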
Third Embodiment The density values of a document image become lighter or darker overall under the influence of the performance of the printer or scanner. Therefore, the reference value calculated at the time of embedding may no longer match at the time of extraction, and the correct watermark message may not be extracted. In the third and fourth embodiments, therefore, the watermark message is extracted without using the reference value calculated at the time of embedding, and watermark embedding and extracting methods that are not easily influenced by the performance of the printer or scanner are proposed. In the third embodiment, the watermark message is extracted using the difference between the most frequent values of two characters, without using the reference value. FIG. 21 illustrates characters before and after digital watermark embedding in the third embodiment. In this embodiment, the watermark message is embedded using two characters. That is, watermark message “0” is embedded using the two characters 2100 and 2101 of group 1 of FIG. 21, and watermark message “1” is embedded using the two characters 2102 and 2103 of group 2. In cases where watermark message “0” is embedded (e.g., as in group 1), the position of the most frequent value of the density histogram of the first character 2100 is shifted to the left, and the position of the most frequent value of the density histogram of the second character 2101 is shifted to the right. In cases where watermark message “1” is embedded (e.g., as in group 2), the position of the most frequent value of the density histogram of the first character 2102 is shifted to the right, and the position of the most frequent value of the density histogram of the second character 2103 is shifted to the left. The arrangement and procedure for operation in the third embodiment are the same as those of the first embodiment except for steps S306 and S805. FIG. 
22 is a flow chart illustrating operation procedures of Step S306 in the third embodiment. In the third embodiment, since the watermark message is embedded using the difference between the most frequent values of two characters, the input to step S306 in this embodiment is given in units of two characters. First, in step S306u, the watermark message bit to embed is selected. For example, “1” is assigned to the first character when the watermark message is input as shown (digital watermark message) in FIG. 24. In step S306v, it is determined whether the watermark message bit to embed is “1.” If it is determined that the bit is “1” (yes in step S306v), processing proceeds to step S306x. In step S306x, all density values in the correction range of the first character are corrected to “b” so that the most frequent value becomes “b.” Next, all density values in the correction range of the second character are corrected to “a” so that the most frequent value becomes “a” (step S306z). Even if the density values shift after a scan, the watermark message “1” can be extracted by determining that “difference (the most frequent value of the first character - the most frequent value of the second character) > 0.” Processing then returns to FIG. 6. If it is determined that the bit is not “1” (no in step S306v), processing proceeds to step S306y. In step S306y, all density values in the correction range of the first character are corrected to “a” so that the most frequent value becomes “a.” Next, all density values in the correction range of the second character are corrected to “b” so that the most frequent value becomes “b” (step S306aa). When extracting, if difference < 0 is satisfied, it is determined that watermark message “0” is embedded. Processing then returns to FIG. 6. FIG. 23 is a flow chart illustrating operation procedures of Step S805 in the third embodiment. 
First, the most frequent value of the selected character rectangle is calculated (step S805j). The most frequent value of the next character rectangle is calculated (step S805k). The difference between the most frequent values of the first character rectangle and the next (second) character rectangle is calculated by subtracting the most frequent value of the second character from that of the first character (step S805l). In step S805m, it is determined whether the difference calculated in step S805l is greater than “0.” In cases where the difference is greater than “0” (yes in step S805m), “1” is extracted as the watermark message (step S805n) and processing returns to FIG. 11. On the other hand, in cases where the difference is not greater than “0” (no in step S805m), “0” is extracted as the watermark message (step S805o) and processing returns to FIG. 11. In this embodiment, since the watermark message is extracted without using a reference value set up on the embedding apparatus side, the watermark message can be extracted correctly even when the performance of the printer or scanner alters the densities. In the third embodiment, one bit of message is embedded using the difference between the most frequent values of two characters, but one bit of information may also be embedded using the sum of the most frequent values of two characters. 
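Under the assumption that a character's “correction range” means its pixels whose density lies in [a, b], the pair-wise embedding (steps S306x–S306aa) and extraction (steps S805j–S805o) might be sketched as follows; all names are hypothetical.

```python
import numpy as np

# Sketch of the third embodiment's pair-wise scheme. The "correction
# range" is assumed here to be the pixels with density in [a, b].
def embed_pair_bit(first, second, bit, a=50, b=150):
    """Push the modes of the two characters apart according to the bit."""
    first, second = first.copy(), second.copy()
    sel_f = (first >= a) & (first <= b)
    sel_s = (second >= a) & (second <= b)
    if bit == 1:
        first[sel_f] = b    # most frequent value of first character -> b
        second[sel_s] = a   # most frequent value of second character -> a
    else:
        first[sel_f] = a
        second[sel_s] = b
    return first, second

def extract_pair_bit(first, second):
    """Extract the bit from the sign of the mode difference."""
    mode = lambda img: int(np.bincount(img.ravel(), minlength=256).argmax())
    return 1 if mode(first) - mode(second) > 0 else 0

f = np.full((4, 4), 100, dtype=np.uint8)
s = np.full((4, 4), 100, dtype=np.uint8)
f1, s1 = embed_pair_bit(f, s, 1)
bit = extract_pair_bit(f1, s1)   # round trip recovers 1
```

Because only the sign of the mode difference is tested, a uniform brightening or darkening of both characters by the printer or scanner leaves the extracted bit unchanged, which is the robustness this embodiment aims for.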
As a method of embedding the watermark message using the sum of the most frequent values of two characters, in cases where the watermark message is “1,” both the most frequent value of the first character and that of the second character are changed to “b.” In cases where the watermark message is “0,” both are changed to “a.” As a method of extracting the watermark message embedded by this method, the sum of the most frequent value of the first character and that of the second character is calculated. In cases where the sum is larger than 2t, watermark message “1” is extracted, and in cases where the sum is less than 2t, watermark message “0” is extracted. In cases where the sum of the most frequent values of two characters is used, the reference value is used for extraction. Thus, the watermark message can be extracted more correctly than in the first embodiment. If the difference and the sum are combined, a multiple-value message of m bits can be embedded in two characters, so that more message can be embedded. Fourth Embodiment In this embodiment, the watermark message is extracted using a reference value calculated on the extracting apparatus side, without using the reference value set up on the embedding apparatus side. The arrangement and procedure required for operation when embedding the watermark message in this embodiment are the same as those of the first embodiment. For example, any one of the first embodiment, the first modification, or the second modification is sufficient for embedding the watermark of this embodiment. However, when embedding the watermark message in step S306a of the first embodiment, the digital watermark message was embedded into the selected characters in order. In this embodiment, as shown in the example in FIG. 
24, fixed message bits are embedded into the characters of the first and second rows, and the digital watermark message bits are embedded from the third row. These fixed message bits are used to calculate the reference value “t” when extracting the watermark message. The fixed message bits need not be embedded at the head of the document; they may be embedded at the end of the document or at other places. Although the number of characters in which the fixed message bits are embedded is not restricted to two rows of a document, it is desirable to embed them in a sufficient number of characters for calculating the reference value. The arrangement and procedure for operation at the time of watermark message extraction in this embodiment are the same as those of the first embodiment except for step S805 of the first embodiment. However, since fixed message bits other than the watermark message are embedded, it is necessary to know the places where the fixed message bits were embedded in order to extract them. This information may be shared between the embedding apparatus and extracting apparatus sides beforehand, or may be received from the embedding apparatus side independently of the document image. FIG. 25 is a flow chart illustrating operation procedures of Step S805 in the fourth embodiment. Since the processing from step S805a to step S805d of FIG. 25 is the same as in FIG. 12 of the first embodiment, explanation thereof is not repeated here. In step S805p, the reference value “t” is calculated from the extracted fixed message bits. Step S805p is described in further detail next with reference to FIG. 26. First, the most frequent values of the characters that exist in the first two rows of the document are calculated (step S805p1). In cases where the watermark is embedded, the fixed message bits are embedded in the first two rows. The minimum “b1” of the most frequent values of the characters in which “1” was embedded is calculated (step S805p2). 
The maximum “a1” of the most frequent values of the characters in which “0” was embedded is calculated (step S805p3). Next, the reference value “t” is calculated using the formula t=(a1+b1)/2 (step S805p4). “b1” and “a1” may also be set using averages of the most frequent values. Processing then returns to FIG. 25. In this embodiment, instead of the reference value calculated on the embedding apparatus side, a reference value is newly calculated on the extracting apparatus side, and the watermark message is extracted using the new reference value. Therefore, the watermark message can be extracted correctly even when the performance of the printer or scanner alters the densities. In cases where the density of the characters is changed by a printing or scanning operation, the reference value calculated at the time of embedding shifts; the reference value of this embodiment corrects for that influence. Other Embodiments Note that the present invention can be applied to an apparatus comprising a single device or to a system including a plurality of devices. Furthermore, the invention can be implemented by supplying a software program, which implements the functions of the foregoing embodiments, directly or indirectly to a system or apparatus, reading the supplied program code with a computer of the system or apparatus, and then executing the program code. In this case, as long as the system or apparatus has the functions of the program, the mode of implementation need not rely upon a program. As long as the system or apparatus has the functions of the program, the program may be executed in any form, such as object code, a program executed by an interpreter, or script data supplied to an operating system. 
Examples of storage media that can be used for supplying the program are a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a CD-ROM (compact disk read-only memory), a CD-R (CD-recordable), a CD-RW (CD-rewritable), a magnetic tape, a non-volatile memory card, a ROM, and a digital versatile disk (e.g., DVD-ROM, DVD-R). As for the method of supplying the program, a client computer can be connected to a website on the Internet using a browser of the client computer, and the computer program of the present invention or an automatically-installable compressed file of the program can be downloaded to a recording medium such as a hard disk. Further, the program of the present invention can be supplied by dividing the program code constituting the program into a plurality of files and downloading the files from different websites. In other words, a WWW (World Wide Web) server may download, to multiple users, the program files that implement the functions of the present invention by computer. It is also possible to encrypt and store the program of the present invention on a storage medium such as a CD-ROM, distribute the storage medium to users, allow users who meet certain requirements to download decryption key message from a website via the Internet, and allow these users to decrypt the encrypted program by using the key message, such that the program is installed in the user computer. Besides the cases where the aforementioned functions according to the embodiments are implemented by executing the read program on a computer, an operating system or the like running on the computer may perform all or a part of the actual processing so that the functions of the foregoing embodiments can be implemented by this processing. After the program is read from the storage medium, it can be written to a function expansion board inserted into the computer or to a memory provided in a function expansion unit connected to the computer. 
A CPU or the like mounted on the function expansion board or function expansion unit then performs all or a part of the actual processing so that the functions of the foregoing embodiments can be implemented by this processing. As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims. This application claims priority from Japanese Patent Application Nos. 2004-193482 filed Jun. 30, 2004 and 2005-127902 filed Apr. 26, 2005, which are hereby incorporated by reference herein. 11151989 canon kabushiki kaisha USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open 382/100 Mar 25th, 2022 06:01PM Mar 25th, 2022 06:01PM Technology Technology Hardware & Equipment
nyse:caj Canon Dec 20th, 2016 12:00AM Oct 19th, 2012 12:00AM https://www.uspto.gov?id=US09525757-20161220 Information processing apparatus that controls connection of devices, method of controlling the apparatus, and device control system An information processing apparatus which is capable of performing device connection control for connection to more devices than the maximum number of connectable devices defined by the device interface standard or the SDK. A device server communication module generates communication threads for controlling data communication with device servers according to requests from higher-layer software. Each communication thread generates a device stack for controlling a device via an associated device server connected thereto in such a manner as if the device were directly connected to the apparatus. When a connection notification indicative of connection with the device is received, the communication thread attempts to detect a device stack in a non-data transmission and reception state from the device stacks, and connects to the device server via the detected device stack to perform data transmission and reception. 9525757 1. 
An information processing apparatus that is connected via a network to device control units having devices locally or internally connected thereto, comprising: a communication thread control unit configured to generate communication threads for controlling data communication with the device control units, according to requests from higher-layer software, the communication threads generating device stacks for controlling the devices via the device control units in such a manner as if the devices were directly connected to the information processing apparatus, wherein upon receipt of a connection notification indicative of connection with one of the devices, one of the communication threads performs detection of a device stack in a non-data transmission and reception state from the device stacks, and connects to an associated one of the device control units connected to the one of the devices via the detected device stack, to thereby perform data transmission and reception. 2. The information processing apparatus according to claim 1, wherein if no device stack in the non-data transmission and reception state exists, the one of the communication threads waits for connection to the device control unit until a device stack in the non-data transmission and reception state is detected. 3. The information processing apparatus according to claim 1, wherein if as many of the device stacks as defined in advance have been generated, the one of the communication threads does not generate a new device stack. 4. The information processing apparatus according to claim 1, wherein if the number of the device stacks is less than the number defined in advance, the one of the communication threads generates a new device stack. 5. 
The information processing apparatus according to claim 1, wherein if the number of the device stacks is less than the number defined in advance, the one of the communication threads attempts to detect a device stack in the non-data transmission and reception state, and generates a new device stack when no device stack in the non-data transmission and reception state is detected. 6. The information processing apparatus according to claim 1, wherein when terminating the communication thread, said communication thread control unit compares the number of the communication threads and the number of the device stacks, and if the number of the communication threads is not more than the number of the device stacks, said communication thread control unit terminates a device stack used for connection with the communication thread. 7. The information processing apparatus according to claim 1, wherein when terminating the communication thread, said communication thread control unit compares the number of the communication threads and the number of the device stacks, and if the number of the communication threads is not more than the number of the device stacks, said communication thread control unit does not terminate a device stack used for connection with the communication thread. 8. The information processing apparatus according to claim 1, wherein when terminating the communication thread, said communication thread control unit compares the number of the communication threads and the number of the device stacks, and if the number of the communication threads is more than the number of the device stacks, said communication thread control unit does not terminate a device stack used for connection with the communication thread. 9. 
A method of controlling an information processing apparatus that is connected via a network to device control units having devices locally or internally connected thereto, comprising: generating communication threads for controlling data communication with the device control units, according to requests from higher-layer software, causing the communication threads to generate device stacks for controlling the devices via the device control units in such a manner as if the devices were directly connected to the information processing apparatus; and causing, upon receipt of a connection notification indicative of connection with one of the devices, one of the communication threads to perform detection of a device stack in a non-data transmission and reception state from the device stacks, and connect to an associated one of the device control units connected to the one of the devices via the detected device stack in the non-data transmission and reception state, to thereby perform data transmission and reception. 10. The method according to claim 9, further comprising, if no device stack in the non-data transmission and reception state exists, causing the one of the communication threads to wait for connection to the device control unit until a device stack in the non-data transmission and reception state is detected. 11. The method according to claim 9, wherein if as many of the device stacks as defined in advance have been generated, the one of the communication threads does not generate a new device stack. 12. The method according to claim 9, wherein if the number of the device stacks is less than the number defined in advance, the one of the communication threads generates a new device stack. 13. 
The method according to claim 9, wherein if the number of the device stacks is less than the number defined in advance, the one of the communication threads attempts to detect a device stack in the non-data transmission and reception state, and generates a new device stack when no device stack in the non-data transmission and reception state is detected. 14. The method according to claim 9, further comprising, when terminating the communication thread, comparing the number of the communication threads and the number of the device stacks, and if the number of the communication threads is not more than the number of the device stacks, terminating a device stack used for connection with the communication thread. 15. The method according to claim 9, further comprising, when terminating the communication thread, comparing the number of the communication threads and the number of the device stacks, and if the number of the communication threads is not more than the number of the device stacks, not terminating a device stack used for connection with the communication thread. 16. The method according to claim 9, further comprising, when terminating the communication thread, comparing the number of the communication threads and the number of the device stacks, and if the number of the communication threads is more than the number of the device stacks, not terminating a device stack used for connection with the communication thread. 16 BACKGROUND OF THE INVENTION Field of the Invention The present invention relates to an information processing apparatus that communicates with various devices via a network, a method of controlling the apparatus, and a device control system. Description of the Related Art Conventionally, there has been known a system in which an information processing apparatus, such as a personal computer (hereinafter referred to as a PC), as a client, uses a device (peripheral device), such as a printer, a storage, or a scanner, via a network. 
In such a system, there has been proposed one in which a client virtually recognizes a device on a network as a device locally connected thereto, whereby it is made possible to access the device from the client on the network. The present assignee has proposed a client apparatus and a device control system in which device connection control is performed such that the client forms device stacks for controlling the devices via device servers, thereby making it possible to connect devices up to the maximum number of connectable devices defined by a device interface standard or by an SDK (Software Development Kit) provided by a manufacturer or a vendor as a program (function) necessary for device control (see Japanese Patent Laid-Open Publication No. 2011-129111). However, when a user desires to control many devices existing on the network, if the maximum number of connectable devices is limited by the device interface standard or the SDK, it is impossible to control more devices than that limit, in spite of a sufficient processing capacity (specifications) of the PC. For this reason, it is desired that the client apparatus and the device control system of Japanese Patent Laid-Open Publication No. 2011-129111, proposed by the present assignee, be further expanded and developed in function so as to make it possible to perform device connection control such that more devices than the maximum number of connectable devices defined by the device interface standard or the SDK can be connected. SUMMARY OF THE INVENTION The present invention provides an information processing apparatus which is capable of performing device connection control for connection to more devices than the maximum number of connectable devices defined by the device interface standard or the SDK, a method of controlling the information processing apparatus, and a device control system. 
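The connection control summarized above (reuse an idle device stack; generate a new one only while under a predefined limit; otherwise wait) might be sketched as follows. This is a simplified model of the logic described in the claims; class and function names are hypothetical.

```python
from typing import List, Optional


class DeviceStack:
    """Stand-in for a device stack (device driver, class driver,
    virtual bus driver) with a data transmission/reception flag."""
    def __init__(self) -> None:
        self.transmitting = False


def acquire_stack(stacks: List[DeviceStack], max_stacks: int) -> Optional[DeviceStack]:
    """On a connection notification, pick or create a device stack."""
    # Prefer an existing stack in the non-data-transmission state.
    for stack in stacks:
        if not stack.transmitting:
            stack.transmitting = True
            return stack
    # Generate a new stack only while under the predefined limit.
    if len(stacks) < max_stacks:
        stack = DeviceStack()
        stack.transmitting = True
        stacks.append(stack)
        return stack
    return None  # no stack available: the communication thread must wait


stacks: List[DeviceStack] = []
first = acquire_stack(stacks, max_stacks=2)    # generates stack 1
second = acquire_stack(stacks, max_stacks=2)   # generates stack 2
third = acquire_stack(stacks, max_stacks=2)    # limit reached
```

A real implementation would block the communication thread until an idle stack appears instead of returning None; the sketch only shows the selection and generation decision.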
In a first aspect of the present invention, there is provided an information processing apparatus that is connected via a network to device control units having devices locally or internally connected thereto, comprising a communication thread control unit configured to generate communication threads for controlling data communication with the device control units, according to requests from higher-layer software, the communication threads generating device stacks for controlling the devices via the device control units in such a manner as if the devices were directly connected to the information processing apparatus, wherein upon receipt of a connection notification indicative of connection with one of the devices, one of the communication threads performs detection of a device stack in a non-data transmission and reception state from the device stacks, and connects to an associated one of the device control units connected to the one of the devices via the detected device stack, to thereby perform data transmission and reception. 
In a second aspect of the present invention, there is provided a device control system comprising device control units that are connected to a network, devices that are locally or internally connected to the device control units, respectively, an information processing apparatus connected to the device control units via the network, including a communication thread control unit configured to generate communication threads for controlling data communication with the device control units, according to requests from higher-layer software, the communication threads generating device stacks for controlling the devices via the device control units in such a manner as if the devices were directly connected to the information processing apparatus, wherein upon receipt of a connection notification indicative of connection with one of the devices, one of the communication threads performs detection of a device stack in a non-data transmission and reception state from the device stacks, and connects to an associated one of the device control units connected to the one of the devices via the detected device stack in the non-data transmission and reception state, to thereby perform data transmission and reception, wherein the information processing apparatus performs data communication by controlling any one of the devices via an associated one of the device control units connected thereto. 
In a third aspect of the present invention, there is provided a method of controlling an information processing apparatus that is connected via a network to device control units having devices locally or internally connected thereto, comprising generating communication threads for controlling data communication with the device control units, according to requests from higher-layer software, causing the communication threads to generate device stacks for controlling the devices via the device control units in such a manner as if the devices were directly connected to the information processing apparatus, and causing, upon receipt of a connection notification indicative of connection with one of the devices, one of the communication threads to perform detection of a device stack in a non-data transmission and reception state from the device stacks, and connect to an associated one of the device control units connected to the one of the devices via the detected device stack in the non-data transmission and reception state, to thereby perform data transmission and reception. According to the present invention, when the information processing apparatus performs data communication with devices via device control units, it is possible to execute device connection control while enabling the information processing apparatus to exhibit effective performance without limiting the number of devices to the maximum number of connectable devices defined by the device interface standard or the SDK. Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram of a device control system using an example of an information processing apparatus according to a first embodiment of the present invention. FIG. 2 is a sequence diagram useful in explaining data transmission and reception in the device control system shown in FIG. 1. 
FIG. 3 is a flowchart of a communication process executed by a device server communication module of the information processing apparatus appearing in FIG. 1. FIG. 4 is a flowchart of a communication thread generation and start process in the communication process shown in FIG. 3. FIG. 5 is a flowchart of a connection process in the communication process shown in FIG. 3. FIG. 6 is a flowchart of a data transmission and reception process executed by a device control module appearing in FIG. 1. FIG. 7 is a flowchart of a termination process in the communication process shown in FIG. 3. FIG. 8 is a sequence diagram of a data transmission and reception process executed in a device control system according to a second embodiment of the present invention. FIG. 9 is a flowchart of a communication process executed by a device server communication module of the information processing apparatus according to the second embodiment. FIG. 10 is a block diagram of a device control system according to a third embodiment of the present invention. DESCRIPTION OF THE EMBODIMENTS The present invention will now be described in detail below with reference to the accompanying drawings showing embodiments thereof. In the following description, a personal computer (PC) will be described as an example of an information processing apparatus. FIG. 1 is a block diagram of a device control system using an information processing apparatus according to a first embodiment of the present invention. The information processing apparatus, denoted by reference numeral 11, is configured to be connectable to device servers 12-1 to 12-n (n=4 in the illustrated example) via a network 10, such as a LAN (local area network). The network 10 may be formed by either a wired communication line or a wireless communication link. 
Further, devices 13-1 to 13-4, each an input/output device having a general-purpose interface, are locally connected to the device servers 12-1 to 12-4 by connection cables 10a to 10d, respectively. Here, although an example will be described in which each device is connected to an associated one of the device servers via a USB (universal serial bus) interface, this is not limitative; for example, an interface compliant with another type of interface standard, such as HDMI or Thunderbolt, may be used. The devices 13-1 to 13-4 are each an input/output device having a general-purpose interface (a USB interface in the present embodiment). For example, each of the devices 13-1 to 13-4 may be an input device, such as a keyboard, a mouse, or a card reader, a display device (output device), a single-function peripheral device, such as a printer, or a multifunction peripheral device equipped with not only a print function but also a scan function, a copy function, a storage function, and so forth, or may be an input/output device other than the above-mentioned devices. The device servers 12-1 to 12-4 may be provided integrally, as a single unit, with the devices 13-1 to 13-4 to which the respective device servers are connected. Although in the example shown in FIG. 1 only one information processing apparatus is connected, the device control system may be configured such that a plurality of information processing apparatuses are connected to the network 10. Further, although one device is connected to each device server, the device control system may be configured such that a plurality of devices are connected to each device server. The information processing apparatus 11 shown in FIG. 
1 as an example of the information processing apparatus according to the present invention is an apparatus, such as a PC (personal computer), which comprises a CPU, an input section, a display section, a memory, a communication section, and an external storage section (none of which are shown), which are interconnected via an internal bus, and is capable of communicating with the device servers 12-1 to 12-4 via the network 10. The external storage section stores not only software components, such as an operating system (hereinafter referred to as the OS), not shown, an external application 11a, a resident module 11b, a device server communication module 11c, device control modules 11d-1 to 11d-3, device stacks 11f-1 to 11f-3, and a communication controller 11e, but also various kinds of data. Each of the software components and the various kinds of data is read into the memory under the control of the CPU and OS, whereby various control processes are executed. The external application 11a is a software component for issuing a request to one of the devices 13 (such as an activation request, a connection request, or a termination request) or a request responsive to a user's operation, via the resident module 11b, and obtaining a result of the request. The resident module 11b is a software component which is always on standby or in operation during operation of the OS, and activates the device server communication module 11c and the communication controller 11e to transmit and receive data to and from the device servers 12-1 to 12-4 existing on the network. Further, the resident module 11b detects the device servers 12 and the devices 13-1 to 13-4 locally connected to the device servers 12, respectively, and acquires device server information on the device servers 12 and device information on the devices 13. 
The device server communication module 11c is a software component activated by the resident module 11b, for independently controlling communication with the device servers 12-1 to 12-4 by a main thread 120 and by respective communication threads 121 to 124, and managing the number of communication threads, the number of device stacks (described hereinafter), and information on the communication state of each device stack (e.g. whether or not the device stack is in a data transmission and reception state). The main thread 120 (communication thread control unit) is a software component for generating and starting the communication threads 121 to 124, and controlling communication with the device servers 12 under the control of the external application 11a, the OS, and the resident module 11b, which are the higher-layer software. Each of the communication threads 121 to 124 is a program for independently executing control of communication with an associated one of the device servers 12-1 to 12-4 to which the respective devices 13 (13-1 to 13-4) are connected, and is generated for each device 13 (one of 13-1 to 13-4) to be controlled, using an activation request from the external application 11a as a trigger. Although in FIG. 1, four communication threads are generated, the number of communication threads to be generated is not limited to four, but more than four communication threads may be generated. Each generated communication thread uniquely identifies a device driver 110, a class driver 111, and a virtual bus driver 112 necessary for data transmission and reception to and from the associated device 13 (one of 13-1 to 13-4) based on the device information acquired by the resident module 11b, and sequentially and dynamically generates the identified driver software. These driver software components (the device driver 110, class driver 111, and virtual bus driver 112) are collectively referred to as the device stack. In FIG. 
1, the three device stacks 11f-1 to 11f-3 are generated. Note that the number of generable device stacks is capped: it is possible to generate only as many device stacks as the upper limit of the number of connectable devices (hereinafter referred to as the maximum connectable number) defined by the device interface standard or the SDK. For example, the maximum connectable number defined by the USB interface standard is equal to 127, and hence 127 device stacks can be generated according to the USB interface standard. In the present embodiment, the description will be given assuming that the maximum number of connectable devices is three, i.e. three device stacks can be generated, for the sake of convenience. Further, when the devices are different in model, device stacks compatible with the respective models are generated and activated. In this case as well, the number of generable device stacks is equal to the above-mentioned maximum connectable number. Further, when each of the communication threads 121 to 124 receives a connection notification indicative of a change in the operating state (status) of an associated one of the devices 13 via the external application 11a or an associated one of the device servers 12, the communication thread detects a device stack 11f in a state in which data transmission and reception with any of the device servers 12 is not being executed (hereinafter referred to as the non-data transmission and reception state). More specifically, whether or not a device stack is in the non-data transmission and reception state is determined based on information on the states of communication of the device stacks with the device servers, managed by the device server communication module 11c. 
Then, if a device stack 11f determined to be in the non-data transmission and reception state is detected, switching is notified to the virtual bus driver 112 of the determined device stack 11f, whereby the data communication path is switched so as to connect to the determined device stack 11f. Then, the communication thread starts an independent session with an associated device server 12 connected via the device stack 11f, and when the data transmission and reception is completed, instructs the associated device server 12 to terminate the session. The number of communication threads and the number of device stacks are not always equal to each other, and if the number of device stacks is equal to the maximum connectable number, none of the communication threads (121 to 124) generates a new device stack. The device control modules 11d-1 to 11d-3 are software components which are started by the device server communication module 11c in association with the device stacks 11f, respectively, such that, for example, the device control module 11d-1 is started in association with the device stack 11f-1, the device control module 11d-2 is started in association with the device stack 11f-2, and the device control module 11d-3 is started in association with the device stack 11f-3. Then, the device control modules 11d-1 to 11d-3 operate in combination with the device stacks 11f-1 to 11f-3, respectively, forming respective pairs, via the SDK, not shown, to thereby independently control the devices 13-1 to 13-4, respectively. 
When a session with one of the device servers 12 is started by one of the communication threads (121 to 124), a data transmission and reception start notification is sent to the communication thread to thereby start the control for data transmission and reception to and from the device server 12 (one of 12-1 to 12-4) via a connected one of the device stacks 11f (11f-1 to 11f-3), and when the data transmission and reception is terminated, a data transmission and reception completion notification is sent to the communication thread. Next, the software components forming each device stack 11f (11f-1 to 11f-3) will be described. The device driver 110 is a software component for generating a control command to a connected one of the devices 13 (13-1 to 13-4) according to an instruction from the OS, the external application 11a, or one of the device control modules 11d-1 to 11d-3, to thereby execute data transmission and reception. Further, the device driver 110 waits for a response to the control command (i.e. result of the data transmission and reception) from the connected one of the devices 13 (13-1 to 13-4), and notifies the connected one of the device control modules 11d (11d-1 to 11d-3) of the response. The class driver 111 (USB class driver in the present embodiment) includes a USB port, not shown, for transmitting and receiving a control command, converts the control command generated by an associated one of the device control modules 11d (11d-1 to 11d-3) or the device driver 110 to USB packets, and sends the USB packets to the virtual bus driver 112. Further, the class driver 111 converts USB packets to a control command, and passes the control command to the associated one of the device control modules 11d (11d-1 to 11d-3) or the device driver 110. 
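As a rough illustration of the class driver's conversion role described above, the following sketch splits a control command into fixed-size chunks and reassembles it. This is only a conceptual analogy: real USB packetization (endpoints, token/data/handshake packets, CRCs) is far more involved, and the helper names here are invented for illustration.

```python
def to_packets(command: bytes, packet_size: int = 8) -> list:
    """Split a control command into fixed-size chunks (illustrative only;
    a stand-in for the class driver's command-to-packet conversion)."""
    return [command[i:i + packet_size]
            for i in range(0, len(command), packet_size)]


def from_packets(packets: list) -> bytes:
    """Reassemble chunks back into the original control command
    (the reverse conversion performed when packets arrive)."""
    return b"".join(packets)
```

The round trip is lossless, mirroring how the class driver converts a command to packets on the way out and packets back to a command on the way in.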
The virtual bus driver 112 (USB virtual bus driver in the present embodiment) is a software component for controlling a connected one of the devices 13 connected to the device servers 12, respectively, via the communication controller 11e, in such a manner as if the device 13 were directly connected to the information processing apparatus 11 by local connection. The communication controller 11e is connected to the network 10 to control communication with the device servers 12. The device servers 12 (12-1 to 12-4) each, as an example of a device control unit, comprise a CPU, a memory, a communication section, a local interface (USB interface in the present embodiment), and an external storage section, none of which are shown, which are interconnected via an internal bus, and are capable of communicating with the information processing apparatus 11 via the network 10, and transmitting and receiving data to and from the locally connected devices 13-1 to 13-4 via the connection cables, respectively. The external storage section stores not only software components, such as an OS (not shown), a communication controller 12a, a virtualization controller 12b, a device controller 12c, and so forth, but also various kinds of data. Each of the software components and the various kinds of data is read into the memory under the control of the CPU, whereby various control processes are executed. The communication controller 12a is connected to the network 10 to control communication with the information processing apparatus 11. The virtualization controller 12b communicates with an associated one of the virtual bus drivers 112 of the information processing apparatus 11 via the communication controller 12a and controls the associated device controller 12c. The device controller 12c controls an associated one of the devices 13 (13-1 to 13-4) connected via the USB interface. 
In the device control system configured as above, the information processing apparatus 11 remotely controls the device controller 12c via the virtualization controller 12b of each device server 12 by the associated virtual bus driver 112, whereby the information processing apparatus 11 can control the devices 13 (13-1 to 13-4) in such a manner as if the devices 13 (13-1 to 13-4) were directly connected to the information processing apparatus 11 by local connection (virtualization control). Further, the device servers 12 (12-1 to 12-4) each acquire the device information (device identification information, device configuration, a type, and so forth) from a locally connected one of the devices 13 (13-1 to 13-4) by the device controller 12c, respectively, and transmit the acquired device information and the device server information (server identification information, server setting information, and so forth) which is information on itself to the information processing apparatus 11. The device information and the device server information may be acquired via each device server 12 according to a request from the information processing apparatus 11, or may be sent to the information processing apparatus 11 by transmitting the device information and the device server information from each device server 12 to the information processing apparatus 11 when the device server 12 is turned on or has its settings changed. FIG. 2 is a sequence diagram useful in explaining data transmission and reception in the device control system shown in FIG. 1. Note that it is assumed that the information processing apparatus 11 has already acquired the device server information and the device information. FIG. 3 is a flowchart of a communication process executed by the main thread 120 of the device server communication module 11c of the information processing apparatus 11 appearing in FIG. 1, which will be described hereafter with reference to the sequence diagram in FIG. 2. 
The resident module 11b activates the device server communication module 11c by a process start notification. When the device server communication module 11c is started, the main thread 120 is started, and the main thread 120 enters a wait state waiting for a request (such as an activation request, a connection request, or a termination request) from the resident module 11b (step S301). Then, upon receipt of a request from the resident module 11b during this wait state, the device server communication module 11c determines whether or not the request is a process termination notification from the resident module 11b (step S302). If it is determined that the request is a process termination notification (YES to the step S302), the process by the device server communication module 11c, i.e. the present communication process, is terminated. On the other hand, if the request is not a process termination notification (NO to the step S302), the main thread 120 receives a notification of a request (such as an activation request, a connection request, or a termination request) sent from the external application 11a to the device 13 (step S303). Hereinafter, the above notification of the request is referred to as the request notification. Next, a process for determining the received request notification is executed. First, the device server communication module 11c determines whether or not the request notification is an “activation notification” (step S304). If it is determined that the request notification is an “activation notification” (YES to the step S304), the main thread 120 generates one of the communication threads (121 to 124) which performs data communication control with the device servers 12 (12-1 to 12-4), and starts the generated communication thread (step S305). Moreover, the main thread 120 performs the data communication control independently for each communication thread. Then, the process returns to the step S301. FIG. 
4 is a flowchart of the communication thread generation and start process executed in the step S305 in FIG. 3. When the main thread 120 receives the activation notification sent from the resident module 11b according to the activation request from the external application 11a, the main thread 120 generates the communication thread, and the generated communication thread determines a communication protocol used for communication with an associated one of the device servers 12. For example, it is determined whether to use a protocol for communication via a LAN (local area network) or a protocol, e.g. HTTP (hypertext transfer protocol), for communication via an external network, such as the Internet (step S401). In the present embodiment, communication is assumed to be performed using the protocol for communication via the LAN, and hence the following description is given assuming that the communication thread is connected to the associated device server 12 via the LAN. Next, in the communication thread, it is determined whether or not the number of existing device stacks 11f is equal to the defined maximum connectable number (three in the present embodiment) (step S402). Then, if the number of existing device stacks 11f is less than the defined maximum connectable number (three) (NO to the step S402), the device stack 11f is newly generated based on the device information acquired by the resident module 11b to start the above-mentioned virtualization control (step S403). Note that before generating a device stack, detection of a device stack in the non-data transmission and reception state, described hereinafter, may be performed, and the device stack may be generated when such a device stack in the non-data transmission and reception state is not detected (does not exist). 
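The check of the steps S402 and S403 can be sketched as a simple guard against the maximum connectable number. The function name and the dict representation of a device stack are illustrative assumptions, not names taken from the embodiment:

```python
def maybe_generate_stack(stacks, device_info, max_connectable=3):
    """Generate a new device stack only while the maximum connectable
    number (three in this embodiment, 127 under the USB standard) has
    not been reached; return the new stack, or None when at the limit."""
    if len(stacks) >= max_connectable:   # S402: limit already reached
        return None                      # proceed directly to the session (S405)
    stack = {"device": device_info, "busy": False}  # S403: generate the stack
    stacks.append(stack)
    return stack
```

When None is returned, the caller does not fail; it simply skips generation and establishes the session over an existing stack, matching the YES branch of the step S402.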
Next, the communication thread (one of 121 to 124) starts one of the device control modules 11d in accordance with the device stack 11f generated in the step S403 (step S404), and when a notification of an activation completion event is received from the associated device control module 11d, the communication thread instructs the communication controller 11e to establish a session with the associated one of the device servers 12 (step S405). Then, when the session is established, the associated communication thread (one of 121 to 124) transmits device monitoring information (described hereinafter) to the device server 12 (step S406). Then, when the device monitoring information has been transmitted to the device server 12, the associated communication thread (one of 121 to 124) instructs the communication controller 11e to terminate the session (step S407), and when the session with the device server 12 is terminated, the associated communication thread (one of 121 to 124) disconnects from the device, and waits for an instruction from the resident module 11b or the main thread 120. In the step S402, if the number of existing device stacks 11f is equal to the maximum connectable number (three) (YES to the step S402), the process directly proceeds to the step S405. On the other hand, the device server 12 having received the device monitoring information from the information processing apparatus 11 in the step S406 monitors the operating state of the device 13 locally connected thereto based on the received device monitoring information. Then, if a change in the operating state of the device 13 (state change) is detected, the device server 12 notifies the information processing apparatus 11 of the detection of the state change (connection notification). Here, the state change is caused e.g. by performing an operation for reading a card (acquiring a user ID) on a card reader (device), or depressing an operation button of the device, but it is not limited to these. 
Note that the device monitoring information is intended to mean a monitoring program and monitoring information for monitoring each device 13, and differs from one device model to another. The device monitoring information may be introduced to the device servers 12 in advance, or only necessary data may be acquired, e.g. from the information processing apparatus 11. Referring again to FIG. 3, if it is determined in the step S304 that the request notification is not an “activation notification” (NO to the step S304), the main thread 120 determines whether or not the request notification is a “connection notification” (step S306). If the request notification is a “connection notification” (YES to the step S306), the main thread 120 causes the communication thread (121 to 124) to start a connection process (step S307), and returns to the step S301. FIG. 5 is a flowchart of the connection process executed in the step S307 in FIG. 3. After being generated and started by the main thread 120, the communication thread (the communication thread 124, for example) enters the wait state waiting for an instruction from the resident module 11b or the main thread 120, or a connection notification from one of the device servers 12-1 to 12-4 (step S501). Then, if a connection notification is received from the external application 11a via the resident module 11b, or from one of the device servers (12-1 to 12-4), the communication thread 124 detects whether there is a device stack in the non-data transmission and reception state among the device stacks 11f (step S502). At this time, if all of the device stacks 11f (11f-1 to 11f-3) are in the data transmission and reception state (NO to the step S502), the communication thread 124 enters the wait state until a device stack in the non-data transmission and reception state is detected (step S503), and the process returns to the step S502. 
On the other hand, if a device stack in the non-data transmission and reception state is detected (YES to the step S502), the communication thread 124 switches the data communication path so as to connect to the detected device stack in the non-data transmission and reception state (e.g. the device stack 11f-2) (step S504). Next, the communication thread 124 causes the device server (12-4, assumed here by way of example) connected via the device stack 11f-2 in the non-data transmission and reception state to start an independent session (step S505), and when the session is started, the device control module 11d-2 sends a data transmission and reception start notification to the communication thread 124 (step S506). Upon receipt of the data transmission and reception start notification, the device control module 11d-2 controls the communication thread 124 to transmit and receive data to and from the device 13-4 via the device server 12-4 through the device stack 11f-2. The communication thread 124 waits until the data transmission and reception controlled by the device stack 11f-2 is completed (step S507), and when a data transmission and reception completion notification is received from the device control module 11d-2, the communication thread 124 causes the communication controller 11e to terminate the session with the device server 12-4 via the device stack 11f-2 (step S508). FIG. 6 is a flowchart of the data transmission and reception control process executed by the device control module 11d in the steps S506 and S507 in FIG. 5. The device control module 11d (11d-2 in the present example) has been activated in association with the device stack 11f (11f-2 in the present example). 
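Under the assumption that each device stack is modeled as a dict with a busy flag (an illustrative representation, not from the embodiment), the steps S502 to S508 can be sketched as:

```python
def run_connection(stacks):
    """Sketch of the connection process. S502: find a device stack in the
    non-data transmission and reception state; S504: switch the data
    communication path to it; S505-S508: run a session over it. Returns
    the stack used, or None when all stacks are busy (the real thread
    would wait in the step S503 instead of returning)."""
    free = next((s for s in stacks if not s["busy"]), None)   # S502
    if free is None:
        return None                  # caller waits for a free stack (S503)
    free["busy"] = True              # S504: path switched to this stack
    # S505-S507: start the session and perform data transmission/reception
    free["busy"] = False             # S508: session terminated, stack freed
    return free
```

The key point this sketch preserves is that the stack returns to the non-data transmission and reception state when the session ends, so another waiting thread can reuse it.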
For example, the device control module 11d-2 opens a communication port for transmitting and receiving data according to an instruction from the communication thread 124 (step S601), waits until a data transmission and reception start notification is received from the communication thread 124 (step S602), and determines whether a notification from the communication thread 124 is an instruction for terminating the data transmission and reception wait state (step S603). When the device control module 11d-2 receives the data transmission and reception start notification rather than the instruction for terminating the data transmission and reception wait state (NO to the step S603), the device control module 11d-2 causes the device stack 11f-2 to start the data transmission and reception to and from the device 13-4 via the device server 12-4 (step S604), and performs data transmission and reception control such that the step S604 is repeatedly executed as long as the data transmission and reception is not completed (NO to the step S605). Then, when the data transmission and reception is completed (YES to the step S605), the device control module 11d-2 transmits a data transmission and reception completion notification to the communication thread 124 (step S606). Then, the process returns to the step S602. On the other hand, when the instruction for terminating the data transmission and reception wait state is received from the communication thread 124 (YES to the step S603), the device control module 11d-2 closes the communication port which has been opened (step S607), and sets the device stack 11f-2 to the non-data transmission and reception state. 
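The control loop of the steps S601 to S607 can be sketched as follows. The event tuples and the stack dict are assumptions made for illustration; the real module waits on notifications rather than iterating a list:

```python
def device_control_loop(events, stack):
    """Sketch of FIG. 6. ``events`` is an iterable of ("data", payload)
    or ("terminate", None) tuples standing in for notifications from the
    communication thread."""
    sent = []
    port_open = True                    # S601: open the communication port
    for kind, payload in events:        # S602: wait for a notification
        if kind == "terminate":         # S603: instruction to end the wait
            port_open = False           # S607: close the port
            stack["busy"] = False       # stack set to the non-data state
            break
        stack["busy"] = True            # S604: transmit and receive data
        sent.append(payload)
        stack["busy"] = False           # S605/S606: completion notified
    return sent, port_open
```

As in the flowchart, each completed transfer returns control to the wait state, and only the explicit terminate instruction closes the port.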
Here, the device stack 11f-2 is only shifted to the non-data transmission and reception state; it remains in a state allowing another communication thread to use it immediately, and is not terminated (deleted) until a disconnection request (described hereinafter) is received, e.g. as an instruction from the external application 11a. Referring again to FIG. 3, if it is determined in the step S306 that the request notification is not a “connection notification” (NO to the step S306), the main thread 120 of the device server communication module 11c determines whether or not the request notification is a “termination notification” (step S308). If the request notification is a “termination notification” (YES to the step S308), the device server communication module 11c executes a termination process, described hereinafter (step S309), and the process returns to the step S301. FIG. 7 is a flowchart of the termination process executed in the step S309 in FIG. 3. The termination process is executed according to the disconnection request from the external application 11a. The resident module 11b sends a communication thread termination notification (“termination notification”) to the main thread 120 according to the received disconnection request, and the main thread 120 instructs an associated communication thread (the communication thread 124, for example) to terminate. Upon receipt of the thread termination instruction, the communication thread 124 compares the number of communication threads being managed by the device server communication module 11c with the number of device stacks, and determines whether or not the number of communication threads is not more than the number of device stacks (step S701). If the number of communication threads is not more than the number of device stacks (YES to the step S701), the communication thread 124 sends a termination notification to the associated device control module 11d (11d-2 in the present example). 
The device control module 11d-2 executes a process for terminating the device stack 11f (11f-2 in the present example), and sends a termination process completion event notification to the communication thread 124 (step S702). Upon receipt of the termination process completion event notification, the communication thread 124 terminates the virtualization control by the device stack 11f-2 to thereby disconnect from the device 13, and terminates (deletes) the device stack 11f-2 (deletes the virtualization control) (step S703). After sending the termination notification, the resident module 11b sends a process termination notification to the main thread 120, and the main thread 120 having received the process termination notification terminates the communication thread 124. On the other hand, in the step S701, if the number of communication threads is more than the number of device stacks (NO to the step S701), only the communication thread connected with the device stack 11f-2 is terminated, without deleting the device stack 11f-2. As described above, in the first embodiment of the present invention, the information processing apparatus detects a device stack in the non-data transmission and reception state according to a connection notification, and communicates with a device server via the detected device stack in the non-data transmission and reception state. This makes it possible to control more devices than the maximum number of connectable devices, while enabling the information processing apparatus to exhibit effective performance without limiting the number of devices to the maximum number of connectable devices defined by the device interface standard or the SDK. 
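The decision of the step S701 reduces to a comparison of the two counts managed by the device server communication module 11c. A one-line sketch, with illustrative parameter names:

```python
def should_delete_stack(num_threads, num_stacks):
    """Step S701 sketch: the device stack is terminated (deleted) only
    when the number of communication threads is not more than the number
    of device stacks; otherwise only the thread is terminated and the
    stack is left for reuse."""
    return num_threads <= num_stacks
```

When more threads than stacks exist, other threads may still be waiting for a free stack, so deleting one would only force a regeneration later.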
Further, in the first embodiment, when data transmission and reception with a device is terminated, if the number of communication threads becomes not more than the number of device stacks, a device stack having been used for the data transmission and reception with the device is deleted, and hence it is possible to reduce consumption of resources of the information processing apparatus by a device stack in the non-data transmission and reception state. As described above, even when controlling more devices than the maximum number of connectable devices, it is possible to control and manage the devices with one information processing apparatus, and it is unnecessary to newly provide another information processing apparatus. Therefore, the devices are easy to manage, and costs can be reduced. Next, a description will be given of a device control system using an information processing apparatus according to a second embodiment of the present invention. The second embodiment differs from the first embodiment in the process executed when a communication thread is terminated, and is characterized in that a device stack is not terminated (deleted) when the communication thread is terminated. Note that the device control system according to the second embodiment has the same configuration as the device control system shown in FIG. 1, and only differs in part of the control in the sequence diagram shown in FIG. 8, surrounded by a thick frame denoted by reference numeral 801, and hence detailed description of the system configuration and the functions of the devices is omitted. FIG. 8 is a sequence diagram useful in explaining data transmission and reception in the device control system according to the second embodiment. Further, FIG. 9 is a flowchart of a communication process executed by a device server communication module of the information processing apparatus according to the second embodiment of the present invention. The communication process in FIG. 
9, described hereinafter with reference to FIG. 8, is executed by the main thread 120 of the device server communication module 11c of the information processing apparatus 11 appearing in FIG. 1. The resident module 11b activates the device server communication module 11c by a process start notification. When the device server communication module 11c is started, the main thread 120 is started, and the main thread 120 enters the wait state waiting for a request (such as an activation request, a connection request, or a termination request) from the resident module 11b (step S901). Then, if a request from the resident module 11b is received during the wait state, the main thread 120 determines whether or not the request is a process termination notification from the resident module 11b (step S902). If the request is a process termination notification, the process proceeds to a step S910, described hereinafter, whereas if the request is not a process termination notification (NO to the step S902), the main thread 120 receives a notification of a request (such as an activation request, a connection request, or a termination request) sent from the external application 11a to the device 13 (step S903). Hereinafter, the above notification of the request is referred to as the request notification. Next, a process for determining the received request notification is executed. First, the device server communication module 11c determines whether or not the received request notification is an “activation notification” (step S904). If it is determined that the request notification is an “activation notification” (YES to the step S904), the main thread 120 executes the same communication thread generation and start process as described with reference to FIG. 4 to generate and start one of the communication threads (121 to 124) which performs independent communication control for each associated one of the device servers (12-1 to 12-4) (step S905). 
Then, the process returns to the step S901. If it is determined that the request notification is not an “activation notification” (NO to the step S904), it is determined whether or not the request notification is a “connection notification” (step S906). If the request notification is a “connection notification” (YES to the step S906), the main thread 120 executes the same connection process as described with reference to FIGS. 5 and 6 to cause the associated communication thread to start the connection process (step S907), and the process returns to the step S901. If the request notification is not a “connection notification” (NO to the step S906), the main thread 120 of the device server communication module 11c determines whether or not the request notification is a “termination notification” (step S908). If the request notification is a “termination notification” (YES to the step S908), the main thread 120 terminates the corresponding communication thread (step S909). At this time, the main thread 120 does not terminate (delete) the associated device stack 11f, and returns to the step S901. That is, the main thread 120 terminates only the communication thread having been used, without terminating the associated device stack 11f. The termination (deletion) of the device stack 11f is executed when the main thread 120 of the device server communication module 11c receives a process termination notification from the resident module 11b (YES to the step S902). 
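The request-dispatch loop of the steps S901 to S909 (and its first-embodiment counterpart, the steps S301 to S309) can be sketched as follows. The string request names and the handler table are assumptions made for illustration:

```python
def main_thread_loop(requests, handlers):
    """Wait for requests (S901) and dispatch them until a process
    termination notification arrives (S902). ``requests`` stands in
    for notifications from the resident module 11b."""
    for request in requests:              # S901: wait state
        if request == "process_termination":
            return                        # S902 YES: leave the loop (S910 follows)
        handler = handlers.get(request)   # S904/S906/S908: classify the request
        if handler is not None:
            handler()                     # S905/S907/S909: dispatch
```

Each handler corresponds to one branch of the flowchart: activation generates and starts a communication thread, connection runs the connection process, and termination ends the corresponding thread.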
When the process termination notification is received from the resident module 11b, the main thread 120 of the device server communication module 11c sends a termination notification to the associated device control module 11d (assumed to be the device control module 11d-2 in the present example), and the device control module 11d-2 executes the process for terminating the device stack 11f (assumed to be the device stack 11f-2 in the present example), and sends a termination process completion event notification to the communication thread (step S910). Upon receipt of the termination process completion event notification, the communication thread (assumed to be the communication thread 124 in the present example) terminates the virtualization control by the device stack 11f-2 to thereby disconnect from the device 13, and terminates (deletes) the device stack 11f-2 (deletes the virtualization control) (step S911). As described above, also in the second embodiment, the information processing apparatus detects a device stack in the non-data transmission and reception state according to a connection notification, and communicates with a device server via the detected device stack in the non-data transmission and reception state. This makes it possible to control more devices than the maximum number of connectable devices defined by the device interface standard or the SDK, while enabling the information processing apparatus to exhibit effective performance. Further, in the second embodiment, when data transmission and reception with a device is terminated, the device stack associated with the data transmission and reception is not terminated (deleted) but is retained as a device stack in the non-data transmission and reception state. 
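The request dispatching of steps S901 to S911 can be sketched as follows, with the communication threads and device stacks reduced to dictionary entries. All class, method, and constant names are illustrative assumptions, not identifiers from the patent; note how a termination notification removes only the communication thread (step S909), while a process termination notification also deletes the device stacks (steps S910 and S911).

```python
# Illustrative request kinds; the names are assumptions, not from the patent.
ACTIVATION, CONNECTION, TERMINATION, PROCESS_TERMINATION = range(4)

class MainThread:
    """Sketch of the device server communication module's main thread."""

    def __init__(self):
        self.comm_threads = {}   # device server id -> communication thread (stub)
        self.device_stacks = {}  # device server id -> device stack (stub)
        self.running = True

    def handle(self, kind, server_id=None):
        if kind == PROCESS_TERMINATION:             # S902 -> S910/S911
            self.comm_threads.clear()
            self.device_stacks.clear()              # stacks are deleted only here
            self.running = False
        elif kind == ACTIVATION:                    # S904 -> S905
            self.comm_threads[server_id] = "comm-thread"
            self.device_stacks.setdefault(server_id, "device-stack")
        elif kind == CONNECTION:                    # S906 -> S907
            pass  # hand the connection process to the associated thread
        elif kind == TERMINATION:                   # S908 -> S909
            self.comm_threads.pop(server_id, None)  # thread ends, stack is kept
```

Keeping the stack alive when only the thread terminates is what lets a later connection reuse a device stack in the non-data transmission and reception state.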
Therefore, for example, if a communication thread which is waiting for connection exists, it is possible to execute data transmission and reception using the device stack in the non-data transmission and reception state without waiting for a new device stack to be generated. The number of device stacks in the non-data transmission and reception state may be limited by deleting a generated device stack after a predetermined time period has elapsed, thereby reducing consumption of resources of the information processing apparatus. Next, a description will be given of a device control system using an information processing apparatus according to a third embodiment of the present invention. FIG. 10 is a block diagram of the device control system according to the third embodiment of the present invention, which shows a case where the network appearing in FIG. 1 is an external network, such as the Internet. Note that the information processing apparatus 11 of the device control system in the third embodiment has the same configuration as the information processing apparatus appearing in FIG. 1. In the device control system shown in FIG. 10, the same component elements as those of the device control system shown in FIG. 1 are denoted by the same reference numerals. In FIG. 10, the information processing apparatus 11 is connected to a proxy server 1001 via a network 1000, such as a LAN. Further, the proxy server 1001 is connected to the external network, such as the Internet (hereinafter referred to as the Internet 1002). Further, in the illustrated example, proxy servers 1003 and 1004 are connected to the Internet 1002, and the device servers 12-1 and 12-2 are connected to the proxy server 1003 via a network 1005, such as a LAN. Further, the device servers 12-3 and 12-4 are connected to the proxy server 1004 via a network 1006, such as a LAN. 
In the present embodiment, the device server communication module 11c included in the information processing apparatus 11 performs control of communication via the Internet connection. The device server communication module 11c communicates with a device server by entering USB packet data generated by a device stack and network packet data for communication with a device server in a data section defined by a protocol, such as HTTP (Hypertext Transfer Protocol). This causes the device server communication module 11c to connect to the device server, which is an external server, through a proxy server and a firewall (FW) using the same method as a browser uses to connect to a Web server. Note that in a system which connects to an external network, such as the Internet, as described above, the system may be configured using cloud computing techniques (hereinafter referred to as the cloud), e.g. a server computer hosting service. By using the cloud, it is possible to provide the device servers and devices of the present embodiment on the external network, as desired, and a user can use the devices via a network, such as the Internet, without being conscious of the locations of the device servers and devices. As described above, the use of the cloud makes it possible to flexibly configure (change or expand) the system. As described above, in the present embodiment, even when the information processing apparatus is connected via an external network, such as the Internet, and cannot check the number of devices (and device servers), the information processing apparatus can control, similarly to the first and second embodiments, more devices than the maximum number of connectable devices defined by the interface standard or the SDK. This makes it possible to control the devices without being conscious of the number of devices (and device servers) connected on the external network. As is clear from the above description, in the example shown in FIG. 
1, the device server communication module 11c (main thread) functions as a communication thread control unit. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiments, and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiments. For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (e.g., computer-readable medium). This application claims the benefit of Japanese Patent Application No. 2011-232230, filed Oct. 21, 2011, which is hereby incorporated by reference herein in its entirety. 13655564 canon imaging systems inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 06:01PM Mar 25th, 2022 06:01PM Technology Technology Hardware & Equipment
nyse:caj Canon Jun 26th, 2018 12:00AM Oct 6th, 2016 12:00AM https://www.uspto.gov?id=US10007846-20180626 Image processing method An image processing method for a picture of a participant, photographed in an event, such as a marathon race, increases the accuracy of recognition of a race bib number by performing image processing on a detected race bib area, and associates the recognized race bib number with a person included in the picture. This image processing method detects a person from an input image, estimates an area in which a race bib exists based on a face position of the detected person, detects an area including a race bib number from the estimated area, performs image processing on the detected area to thereby perform character recognition of the race bib number from an image subjected to image processing, and associates the result of character recognition with the input image. 10007846 1. An image processing method, comprising: an object detection step of detecting one or a plurality of specific objects from an input image; a first area estimation step of estimating a first area in which identification information for identifying the object exists, from a position of the object detected in said object detection step; a second area detection step of detecting a second area including the identification information, within the first area estimated in said first area estimation step; an image processing step of performing image processing with respect to the second area detected in said second area detection step; and an identification information recognition step of performing character recognition processing of the identification information with respect to a processing result in said image processing step, and associating a result of the character recognition processing with the input image, wherein said object detection step detects an object by detecting a face position of the object, wherein said object detection step is capable of detecting not only 
the face position of the object, but also an orientation of the face, and wherein said image processing step controls, on an image of the second area, execution of image processing in which an interval of characters assumed to be arranged based on the orientation of the face detected in said object detection step is extended and contracted in a predetermined direction. 2. The image processing method according to claim 1, wherein said object detection step performs processing, using still images cut out from a moving image at predetermined intervals, as the input image. 3. The image processing method according to claim 2, wherein said identification information recognition step further performs association of the result of the character recognition processing with a reproduction time, and the image processing method further includes a moving image reproduction step of reproducing, based on the identification information selected by a predetermined operation of a user or an external input, the moving image from the reproduction time associated with the identification information. 4. The image processing method according to claim 1, further comprising a third area detection step of detecting a third area based on information indicative of a size or an area of the identification information within the first area; and an information count estimation step of estimating the number of information items in the identification information from the third area detected in said third area detection step, wherein said image processing step performs image processing with respect to the second area detected in said second area detection step or each of areas in the third area, which correspond to the number of information items detected in said information count estimation step. 5. The image processing method according to claim 4, wherein said information count estimation step estimates the number of information items based on a width or a height of the third area. 6. 
The image processing method according to claim 4, wherein in a case where there is an area among areas within the third area, which is different in width or height from other areas, said information count estimation step applies provisional information to the area. 7. The image processing method according to claim 1, wherein said object detection step detects an object by detecting a shape of a head to shoulders of the object. 8. The image processing method according to claim 1, wherein said object detection step detects an object by detecting a skin area of the object. 9. The image processing method according to claim 1, wherein said image processing step performs deformation correction. 10. The image processing method according to claim 1, wherein said image processing step performs inclination correction in which an image of the second area is mapped in a predetermined direction based on an inclination angle with respect to a reference line of the input image, and an interval of characters is adjusted. 11. 
An image processing method, comprising: an object detection step of detecting one or a plurality of specific objects from an input image; a first area estimation step of estimating a first area in which identification information for identifying the object exists, from a position of the object detected in said object detection step; a third area detection step of detecting a third area based on information indicative of a size or an area of the identification information, within the first area; an information count estimation step of estimating the number of information items in the identification information from the third area detected in said third area detection step; an image processing step of performing image processing with respect to the third area detected in said third area detection step; and an identification information recognition step of performing character recognition processing of the identification information with respect to a processing result in said image processing step, and associating a result of the character recognition processing with the input image, wherein, in a case where there is an area among areas within the third area, which is different in width or height from other areas, said information count estimation step applies provisional information to the area. 12. The image processing method according to claim 11, wherein said object detection step performs processing, using still images cut out from a moving image at predetermined intervals, as the input image. 13. 
The image processing method according to claim 12, wherein said identification information recognition step further performs association of the result of the character recognition processing with a reproduction time, and the image processing method further comprises a moving image reproduction step of reproducing, based on the identification information selected by a predetermined operation of a user or an external input, the moving image from the reproduction time associated with the identification information. 14. The image processing method according to claim 11, wherein said information count estimation step estimates the number of information items based on a width or a height of the third area. 14 BACKGROUND OF THE INVENTION Field of the Invention The present invention relates to an image processing method for a picture photographed in an event, such as a marathon race. Description of the Related Art Conventionally, there has been known a technique for estimating a position of a race bib based on a detected position of a runner's face, and reading a race bib number using an OCR (Optical Character Reader) (see “Racing Bib Number Recognition” written by Idan Ben-Ami, Tali Basha, and Shai Avidan, http://www.eng.tau.ac.il/˜avidan/papers/RBNR.pdf). However, the technique described in Non-PTL 1 has a problem that when a race bib number of a person is read from a photographed image, characters on e.g. a billboard or a road sign in the background within the image are erroneously detected as the race bib number. Further, in a case where a face of a person cannot be detected from an image, and in a case where a race bib is largely deformed, causing deformation of the shape of the race bib number, it is impossible to correctly read the race bib number by character recognition performed using the OCR. 
Further, a race bib attached to the body of a runner has a characteristic that, in a case where the image is photographed from a lateral direction, the race bib is deformed more strongly toward the far end in the depth direction, and the character interval changes. The technique described in Non-PTL 1 assumes a case where a runner is photographed from the front, and hence it is impossible to correctly read the race bib number using the OCR in the above-described case. Further, there is a problem that if a person overlaps the runner or if a hand of the runner is positioned in front of the race bib, part of the race bib is hidden, and this prevents the race bib number from being correctly recognized. If only part of the race bib number is detected in such a case, it is also difficult to determine whether or not the race bib number is correctly detected. SUMMARY OF THE INVENTION The present invention has been made in view of these problems, and provides an image processing method for performing image processing on a race bib area detected in an image of a participant, photographed in an event, to thereby enhance the recognition accuracy of a race bib number and associate the recognized race bib number with the person within the image. 
To solve the above-described problems, an image processing method as recited in claim 1 is characterized by comprising an object detection step of detecting one or a plurality of specific objects from an input image, a first area estimation step of estimating a first area in which identification information for identifying the object exists, from a position of the object detected in the object detection step, a second area detection step of detecting a second area including the identification information, within the first area estimated in the first area estimation step, an image processing step of performing image processing with respect to the second area detected in the second area detection step, and an identification information recognition step of performing recognition processing of the identification information with respect to a processing result in the image processing step, and associating a result of the recognition processing with the input image. According to the present invention, a race bib area is efficiently detected in a photographed image, and image processing is performed with respect to the race bib area, whereby it is possible to enhance the recognition accuracy of a race bib number, and associate the recognized race bib number and the person image with each other. Further features of the present invention will become apparent from the following description of an exemplary embodiment with reference to the attached drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram of an example of an image processing apparatus 100 according to a first embodiment of the present invention. FIG. 2 is a flowchart useful in explaining a process performed by the image processing apparatus 100 from reading of a photographed image to associating of a race bib number with a person image. FIG. 3 is a view useful in explaining areas processed by an object detection section 102. FIGS. 
4A to 4C are views useful in explaining inclination correction performed on a race bib character area 304 by an image processing section 105. FIGS. 5A to 5C are views useful in explaining depth correction performed on a race bib character area having a depth by the image processing section 105. FIG. 6 is a block diagram of an example of an image processing apparatus 110 according to a second embodiment of the present invention. FIG. 7 is a flowchart useful in explaining a process performed by the image processing apparatus 110 from reading of a photographed image to associating of a race bib number with a person image. FIGS. 8A to 8C are views useful in explaining detection of a frame and a character area of a race bib. FIG. 9 is a block diagram of an example of an image processing apparatus 900 according to a third embodiment of the present invention. FIG. 10 is a flowchart useful in explaining a process performed by the image processing apparatus 900 from synchronization of moving image reproduction to reproduction of a moving image of a runner having a race bib number selected by a user. DESCRIPTION OF THE EMBODIMENTS The present invention will now be described in detail below with reference to the drawings showing an embodiment thereof. First Embodiment FIG. 1 is a block diagram of an example of an image processing apparatus 100 according to a first embodiment of the present invention. Configuration of Image Processing Apparatus 100 The illustrated image processing apparatus 100 is an apparatus, such as a personal computer (PC). The image processing apparatus 100 may be an apparatus, such as a mobile phone, a PDA, a smartphone, and a tablet terminal. The image processing apparatus 100 includes a CPU, a memory, a communication section, and a storage section (none of which are shown) as the hardware configuration. The CPU controls the overall operation of the image processing apparatus 100. The memory is a RAM, a ROM, and the like. 
The communication section is an interface for connecting to e.g. a LAN, a wireless communication channel, or a serial interface, and is a function section for transmitting and receiving data to and from an image pickup apparatus which transmits a photographed image to the image processing apparatus. The storage section stores, as software, an operating system (hereinafter referred to as the OS: not shown), an image reading section 101, an object detection section 102, a race bib area estimation section 103, a race bib character area detection section 104, an image processing section 105, and a character recognition section 106, and stores software associated with other functions. Note that these software items are loaded into the memory, and operate under the control of the CPU. The image reading section 101 reads a photographed image, a display drawing image, and so on, from the memory, and loads the read image into the memory of the image processing apparatus 100. More specifically, the image reading section 101 decompresses a compressed image file, such as a JPEG file, converts the image file to a raster image in an array of RGB values on a pixel-by-pixel basis, and loads the raster image into the memory of the PC. At this time, in a case where the number of pixels of the read photographed image is not large enough, pixel interpolation may be performed to increase the number of pixels so as to maintain sufficient accuracy for detection of a person area by the object detection section 102, and for recognition by the image processing section 105 and the character recognition section 106. Further, in a case where the number of pixels is larger than necessary, the number of pixels may be reduced by thinning the pixels so as to increase the speed of processing. Further, to correct a width and height relation of a photographed image, the photographed image may be rotated as required. 
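The decision between pixel interpolation (upscaling) and pixel thinning (downscaling) described above can be sketched as a single scale-factor chooser. The pixel-count thresholds here are illustrative assumptions; the patent does not specify concrete values.

```python
def choose_scale(pixels: int, min_pixels: int, max_pixels: int) -> float:
    """Return a linear scale factor for a decoded raster image:
    > 1.0 (interpolate up) when the image is too small for reliable
    detection and recognition, < 1.0 (thin pixels) when it is larger
    than necessary, and 1.0 when it is already in range."""
    if pixels < min_pixels:
        return (min_pixels / pixels) ** 0.5   # sqrt: pixel count scales with area
    if pixels > max_pixels:
        return (max_pixels / pixels) ** 0.5
    return 1.0
```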
The object detection section 102 detects a person area within a photographed image. A method of detecting a person includes a method of detection based on features of a face of a person and features of organs, such as a mouth and eyes, a method of detection based on an Ω-like shape of a head to shoulders, and a method of detection based on a hue of a skin area or the like of a person, but is not limited to these, and a combination of a plurality of detection methods may be used. The race bib area estimation section 103 estimates, based on the position of a face and a shoulder width, from a person area detected by the object detection section 102 in the photographed image, that a race bib character area exists in a torso in a downward direction from the face. Note that the object of which the existence is to be estimated is not limited to the race bib, but may be a uniform number, or identification information directly written on part of an object. Further, the estimation is not to be performed limitedly in the downward direction, but the direction can be changed according to a posture of a person or composition of a photographed image, on an as-needed basis. The race bib character area detection section 104 detects, within each area estimated by the race bib area estimation section 103, a race bib character area, i.e. an area which can contain characters. Here, the characters refer to an identifier which makes it possible to uniquely identify an object, such as numbers, alphabets, hiragana, katakana, Chinese characters, and a pattern of numbers, codes, and barcodes. The image processing section 105 performs image processing with respect to each area detected by the race bib character area detection section 104 as pre-processing for character recognition. 
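The estimation of the bib area from the face position and shoulder width can be sketched as a simple rectangle placed below the detected face. The proportion constants (torso height as a multiple of face height, width equal to the shoulder width) are illustrative assumptions, not values from the patent.

```python
def estimate_bib_area(face_x, face_y, face_w, face_h, shoulder_w, image_h):
    """Return (x, y, w, h) of the area to scan for the race bib, placed
    on the torso in the downward direction from the face."""
    x = face_x + face_w / 2 - shoulder_w / 2   # centered under the face
    y = face_y + face_h                        # start just below the chin
    w = shoulder_w                             # torso width ~ shoulder width
    h = min(image_h - y, 3 * face_h)           # torso extent, clipped to image
    return x, y, w, h
```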
The character recognition section 106 recognizes characters with respect to the image processed by the image processing section 105 based on a dictionary database in which image features of candidate characters are described, and associates the recognition result with a person image. The person image refers to part including a person in a photographed image. Processing Flow Performed by Image Processing Apparatus 100 FIG. 2 is a flowchart useful in explaining a process performed by the image processing apparatus 100, shown in FIG. 1, from reading of a photographed image to associating of a race bib number with a person image. Referring to FIG. 2, when a photographed image is designated, the process is started, and the image reading section 101 reads the photographed image as an input image (step S201). Next, the object detection section 102 scans the whole raster image of the read input image, and detects an image area having a possibility of a person (step S202). The object detection section 102 determines whether or not there is an image area having a possibility of a person in the input image, i.e. whether or not a person exists in the input image (step S203), and if a person exists, the process proceeds to a step S204, whereas if no person exists, the process proceeds to a step S205. If it is determined in the step S203 that one or more persons exist, the race bib area estimation section 103 estimates that a race bib character area is included for each person, and determines an area to be scanned (step S204). The area to be scanned is determined based on a size in the vertical direction of the input image and a width of the person area, and is set to an area in the downward direction from the face of the person. In the present example, the size in the vertical direction and the width of the area to be scanned may be changed according to the detection method used by the object detection section 102. 
If it is determined in the step S203 that no person exists, the race bib area estimation section 103 determines the whole input image as the area to be scanned (step S205). The race bib character area detection section 104 detects a race bib character area from the area to be scanned, which is determined for each person (step S206). As a candidate of the race bib character area, the race bib character area detection section 104 detects an image area which can be expected to be a race bib number, such as numerals and characters, and detects an image area including one or a plurality of characters. Here, although the expression of the race bib number is used, the race bib number is not limited to numbers. The race bib character area detection section 104 determines whether or not race bib character area detection has been performed with respect to all persons included in the input image (step S207), and if there is a person on which race bib character area detection has not been performed yet (NO to the step S207), the process returns to the step S204 so as to perform race bib character area detection with respect to all persons. The areas described in the steps S201 to S207 will be described in detail hereinafter with reference to FIG. 3. When race bib character area detection with respect to all persons is completed (YES to the step S207, including a case where an image area having a possibility of a person is not found in the step S203), the image processing section 105 performs image processing on each detected race bib character area as pre-processing for performing character recognition (step S208). Here, the image processing refers to deformation correction, inclination correction, depth correction, and so forth. Inclination correction and depth correction will be described in detail hereinafter with reference to FIGS. 4A to 4C, and FIGS. 5A to 5C. 
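The overall flow of FIG. 2 can be sketched with the detection, estimation, pre-processing, and recognition stages passed in as callables, so the control flow (including the whole-image fallback of step S205) is visible on its own. The function and parameter names are illustrative, not from the patent.

```python
def process_image(image, detect_people, estimate_area, detect_chars,
                  preprocess, recognize):
    """Sketch of steps S201 to S210: detect persons, determine scan areas,
    detect race bib character areas, pre-process, and recognize."""
    people = detect_people(image)                              # S202
    if people:                                                 # S203: YES
        scan_areas = [estimate_area(image, p) for p in people] # S204
    else:                                                      # S203: NO
        scan_areas = [image]                                   # S205: whole image
    results = []
    for area in scan_areas:                                    # S207: all persons
        for char_area in detect_chars(area):                   # S206
            corrected = preprocess(char_area)                  # S208
            results.append(recognize(corrected))               # S209/S210
    return results
```

With stub callables this runs end to end, which makes the branch structure easy to check before plugging in real detectors.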
As for deformation correction, various well-known techniques can be applied, and hence description thereof is omitted. When the image processing has been performed on all of the detected race bib character areas, the character recognition section 106 performs character recognition with respect to each race bib character area (step S209). The character recognition section 106 associates a result of character recognition with the person image (step S210). When character recognition has been performed with respect to all race bib character areas, the process for associating a race bib number with a person image is terminated. As to Detected Areas FIG. 3 is a view useful in explaining areas in which the object detection section 102, the race bib area estimation section 103, and the race bib character area detection section 104 perform processing, on each person within the input image in FIG. 2. An image frame 301 is a frame of a photographed image, and the image reading section 101 loads image data into the memory. A person area 302 is a person area detected by the object detection section 102. A race bib estimated area 303 is a race bib estimated area estimated by the race bib area estimation section 103 with respect to the person area 302. Although the race bib estimated area 303 is shown here as a rectangular shape, by way of example, this is not limitative, but the race bib estimated area 303 may have a sector shape with the person area 302 in the center. A race bib character area 304 is a race bib character area detected by the race bib character area detection section 104 with respect to the race bib estimated area 303. As to Inclination Correction FIGS. 4A to 4C are views useful in explaining inclination correction performed by the image processing section 105 with respect to the race bib character area 304. Referring to FIG. 4A, an image 401 is the race bib character area 304, and is an image including one or a plurality of characters. 
The race bib number of the image 401 is attached to the clothing of a runner, and hence the image 401 is an image which has each character deformed and is inclined from horizontal as a whole. Therefore, each character cannot be properly extracted directly from the image 401, and hence it is difficult for the character recognition section 106 to perform character recognition. Referring to FIG. 4B, an intermediate image 402, an intermediate image 403, and an intermediate image 404 are intermediate images corrected by the image processing section 105, and are obtained by mapping the image 401 in the horizontal direction using Affine transformation based on an angle inclined from a reference line (horizontal direction), which is calculated from the race bib character area 304. Note that the reference line mentioned here is a reference line based on an X-axis (horizontal direction) or a Y-axis (vertical direction) of the photographed image. The X-axis is used as the reference line for a character string in horizontal writing, the Y-axis is used as the reference line for a character string in vertical writing, and correction processing is performed based on an angle inclined from the reference line. Referring to the intermediate image 402, the intermediate image 403, and the intermediate image 404, in FIG. 4B, characters shown therein are each deformed and have different inclinations. Therefore, there can be a case, as in the case of the intermediate image 404, in which there is little spacing in the vertical direction and characters are very close to each other. Although in the intermediate images 402 and 403, each character can be recognized as one character, the intermediate image 404 is an image in which the plurality of characters are recognized as one character due to respective different inclinations of the characters. Therefore, in such a case of the intermediate image 404, the character recognition section 106 cannot correctly recognize each character. 
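The mapping of FIG. 4B can be sketched as rotating each point of the character string through the measured inclination angle, back onto the horizontal reference line. This is a minimal per-point version of the affine correction; the angle-measurement step itself is assumed to have been done already.

```python
import math

def deskew_point(x, y, angle_deg):
    """Rotate a point of an inclined character string by -angle_deg so the
    string lies along the X-axis (the reference line for horizontal writing).
    angle_deg is the inclination measured from the reference line."""
    a = math.radians(-angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```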
Referring to images 405 to 409, in FIG. 4C, the images are obtained by further correcting the intermediate image 404 in the image processing section 105. The image processing section 105 detects the outline and the position of each character from the intermediate image 402, the intermediate image 403, and the intermediate image 404, respectively. The image processing section 105 adjusts the position of each character in the horizontal direction based on a width of the detected outline of each character such that a spacing in the vertical direction is generated for each character to thereby generate the images 405 to 409. By separating the characters as above, the character recognition section 106 can correctly recognize each character. Depth Correction FIGS. 5A to 5C are views useful in explaining depth correction performed by the image processing section 105 with respect to a race bib character area having a depth. Referring to FIG. 5A, an image frame 501 is a frame of a photographed image. A person area 502 is detected by the object detection section 102. A race bib estimated area 503 is estimated by the race bib area estimation section 103 with respect to the person area 502. A race bib character area 504 is detected by the race bib character area detection section 104 with respect to the race bib estimated area 503. As shown in the race bib character area 504, a race bib of a person who faces in a lateral direction has a depth generated in the image of the race bib character area, and a character width and a character interval become narrower from the near side toward the far side. Thus, in such an image as shown in the race bib character area 504, the character recognition section 106 recognizes the characters as one character due to the influence of different character widths or the combining of characters adjacent to each other, and hence the character recognition section 106 cannot perform correct character recognition. 
To solve this problem, in a case where the organs on the face of a person, such as a mouth and eyes, exist in the person area 502, not in a front direction, but in a manner unevenly distributed in a right or left direction, the object detection section 102 judges that the person is oriented in a lateral direction. Then, an orientation angle of the face is calculated based on a degree of the uneven distribution. The image processing section 105 corrects the image based on the calculated orientation angle of the face. Referring to FIG. 5B, a torso 505 is a schematic representation of the torso of the person from an upper part. Here, the torso 505 has an elliptical shape, and the orientation of the torso 505 is estimated to be equivalent to the orientation angle of the face, and the angle is indicated by an orientation angle 506. This makes it possible to approximate an image 509 of the race bib character area 504 to the image attached to the front side of the elliptical shape. It is assumed that the race bib draws a curve based on the orientation angle 506 with respect to the torso 505. An interval of characters is calculated with respect to a horizontal axis 507, and this is defined as an assumed density, assuming that the image 509 has been photographed at the calculated interval (ratio). Referring to FIG. 5C, a curve 510 is a curve generated by the orientation angle 506. An inclination of the characters of the image 509 is calculated, and the reciprocal of the assumed density is calculated in a horizontal direction of a center line 508 of the image 509 (the same direction as the horizontal axis 507). A curve for correcting a pixel interval of the image 509 using the reciprocal of the calculated assumed density is the curve 510. The width of a local line segment 511 in the lateral direction for each unit angle of the curve 510 becomes narrower from the near side toward the far side in the image 509. 
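The relationship between the orientation angle 506 and the assumed density can be sketched as follows, under the simplifying assumptions that the torso is a circle rather than a general ellipse and that the race bib occupies a fixed arc of the torso; both assumptions, and all names, are illustrative rather than part of the embodiment.

```python
import math

def column_scales(n_columns, orientation_deg, arc_deg=60.0):
    """Approximate the correction factors derived from the curve 510.

    The race bib is assumed to occupy an arc of arc_deg centred on the
    torso front, with the whole torso rotated by orientation_deg (the
    orientation angle of the face).  The apparent width of each image
    column is proportional to the cosine of its angle to the camera axis
    (the assumed density); the correction factor for the column is the
    reciprocal of this density.  Requires n_columns >= 2.
    """
    scales = []
    for i in range(n_columns):
        # angle of this column on the torso, shifted by the orientation angle
        phi = math.radians(orientation_deg + arc_deg * (i / (n_columns - 1) - 0.5))
        density = math.cos(phi)          # apparent (foreshortened) column width
        scales.append(1.0 / density)     # stretch factor to restore the width
    return scales
```

The factors grow from the near side toward the far side, mirroring the narrowing of the local line segment 511.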
The pixel positions of each character of the image 509 are calculated based on the curve 510 while performing inclination correction, and the character width is corrected by extending or contracting the horizontal direction of each pixel to thereby generate a corrected image 512. Note that in the extension or contraction, a multiple of the reciprocal of the assumed density of each pixel value of the image 509 may be directly transferred, or values each calculated from pixel values at neighboring locations may be used to make the image smooth. The pixel value is a value representing the type of color and brightness of the pixel. The image processing section 105 performs the above-described image processing, whereby it is possible to correct the character width and the character interval even with respect to the race bib character area having a depth, and it is possible for the character recognition section 106 to correctly recognize the characters. Further, although the inclination correction processing in FIGS. 4A to 4C and the depth correction processing in FIGS. 5A to 5C are separately described, inclination correction and depth correction may be performed as one image processing. As described heretofore, according to the first embodiment of the present invention, it is possible to read a race bib number by detecting a race bib of a person from within a photographed image, and performing image correction such as inclination correction and depth correction, and associate the race bib number and the person image with each other. Second Embodiment Next, a description will be given of a second embodiment of the present invention. 
The second embodiment is characterized in that a hidden race bib number is estimated from a detected race bib character area, to solve the problem that part of a race bib is hidden by another person overlapping it or by a hand of the runner himself/herself positioned in front of the race bib, which prevents the race bib number from being correctly recognized. An example of an image processing apparatus 110 according to the second embodiment will be described. In the present embodiment, a frame area detection section 107 and a character count estimation section 108 are added to the configuration of the image processing apparatus 100 described in the first embodiment. FIG. 6 is a block diagram of an example of the image processing apparatus 110 according to the second embodiment of the present invention. Note that the same component elements as those of the image processing apparatus 100 shown in FIG. 1 are denoted by the same reference numerals, and description thereof is omitted. The frame area detection section 107 detects a frame area which can be a frame of a race bib with respect to each race bib estimated area calculated by the race bib area estimation section 103. The character count estimation section 108 estimates position coordinates of the respective digits, which are equally arranged, based on a frame width of the frame area detected by the frame area detection section 107, and calculates the number of digits. Note that the frame width mentioned here refers to the width in the direction in which the characters of the race bib are arranged (the long side direction). Further, the frame width is not limitative; the same processing can also be applied to the frame height. Process Flow of Image Processing Apparatus 110 FIG. 7 is a flowchart useful in explaining a process performed by the image processing apparatus 110 shown in FIG. 6, from reading of a photographed image to associating of a race bib number with a person image. Referring to FIG. 
7, when a photographed image is designated, the process is started, and the image reading section 101 reads the photographed image as an input image (step S701). Next, the object detection section 102 scans the whole raster image of the read input image, and detects an image area having a possibility of a person (step S702). The object detection section 102 determines whether or not there is an image area having a possibility of a person in the input image, i.e. whether or not a person exists in the input image (step S703), and if a person exists, the process proceeds to a step S704, whereas if no person exists, the process proceeds to a step S705. If it is determined in the step S703 that one or more persons exist, the race bib area estimation section 103 estimates, on a person-by-person basis, an area in which a race bib character area is expected to be included, and determines an area to be scanned (step S704). The area to be scanned is determined based on a size in the vertical direction of the input image and a width of the person area, and is set to an area in a downward direction from the face of the person. Here, the size in the vertical direction and the width of the area to be scanned may be changed depending on the detection method used by the object detection section 102. If it is determined in the step S703 that no person exists, the race bib area estimation section 103 determines the whole input image as the area to be scanned (step S705). A step S706 and steps S707 to S709, described hereafter, are executed in parallel. The race bib character area detection section 104 detects a race bib character area from the area to be scanned, which is determined on a person-by-person basis (step S706). As a candidate of the race bib character area, the race bib character area detection section 104 detects an image area which can be expected to be a race bib number, such as numerals and characters, and detects an image area including one or a plurality of characters. 
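The determination of the area to be scanned in the step S704 can be sketched as follows; the proportion assumed for the face region is illustrative, since the embodiment leaves the exact size and width to the detection method used.

```python
def scan_area_below_face(person_box, image_height):
    """Determine the area to be scanned for a race bib (cf. step S704).

    person_box is (x, y, w, h) of the detected person area, with the face
    at the top.  The scanned area starts below the top quarter (an assumed
    face region) and extends downward, clipped to the input image height.
    Returns (x, y, w, h) of the area to be scanned.
    """
    x, y, w, h = person_box
    face_bottom = y + h // 4           # assumption: face occupies the top quarter
    bottom = min(y + h, image_height)  # clip to the input image
    return (x, face_bottom, w, bottom - face_bottom)
```

When no person is detected, the whole input image would be used instead, as in the step S705.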
In each area to be scanned, the frame area detection section 107 detects edge lines in the vertical direction and the horizontal direction, and detects a frame area of the race bib based on the positional relationship between the detected edge lines (step S707). If one or more bib frame areas are detected (YES to the step S708), the character count estimation section 108 calculates an area of the position coordinates of each character (digit) within the frame area, e.g. based on the frame width of the frame area detected in the step S707 (step S709). If no bib frame area is detected (NO to the step S708), the process proceeds to a step S710 without executing the step S709. The race bib character area detection section 104 determines whether or not race bib character area detection has been performed with respect to all persons within the input image (step S710), and if there is a person on which race bib character area detection has not been performed yet (NO to the step S710), the process returns to the step S704, so as to perform race bib character area detection with respect to all persons. If race bib character area detection with respect to all persons is completed (YES to the step S710; note that this includes a case where an image area having a possibility of a person is not found in the step S703), the image processing section 105 performs image processing for performing character recognition with respect to each detected race bib character area and frame area (step S711). Note that if the race bib character area detected in the step S706 and the area calculated in the step S709 are equivalent to each other, the race bib character area and the area indicated by the position coordinates of each character (digit) may be combined to handle these areas as one area. When image processing with respect to all race bib character areas is completed, the character recognition section 106 performs character recognition with respect to each race bib character area (step S712). 
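The calculation of equally arranged digit areas in the step S709 can be sketched as follows; the margin ratio is an assumption, since the embodiment states only that the digits are equally arranged across the frame width.

```python
def digit_areas(frame_box, n_digits, margin_ratio=0.1):
    """Estimate the position coordinates of each digit (cf. step S709).

    frame_box is (x, y, w, h) of the detected bib frame area.  n_digits
    digit areas of equal width are laid out across the frame width after
    trimming an assumed margin on each side.  Returns a list of
    (x, y, w, h) boxes, one per digit.
    """
    x, y, w, h = frame_box
    margin = int(w * margin_ratio)   # assumed inner margin of the bib frame
    inner_w = w - 2 * margin
    digit_w = inner_w // n_digits    # equal width for each digit
    return [(x + margin + i * digit_w, y, digit_w, h) for i in range(n_digits)]
```

Comparing these estimated boxes with the areas actually detected in the step S706 is what allows the two to be combined when they are equivalent.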
The character recognition section 106 associates the result of character recognition with the person image (step S713). When character recognition with respect to all race bib character areas is completed, the process for associating the race bib number and the person image is terminated. As to Estimation of Hidden Characters FIGS. 8A to 8C are views useful in explaining detection of a frame and a character area of a race bib. In a photographed image shown in FIG. 8A, an image frame 801 is a frame of the photographed image, and the image reading section 101 loads image data into the memory. A person area 802 is detected by the object detection section 102. A race bib estimated area 803 is estimated with respect to the person area 802 by the race bib area estimation section 103. A race bib character area 804 is detected with respect to the race bib estimated area 803 by the race bib character area detection section 104. In the present example, part of the race bib character area 804 is hidden by overlapping of a person in front, so that part of the characters cannot be read by the character recognition section 106. An image 805 in FIG. 8B is an example of the image of the race bib part of which is hidden. The frame area detection section 107 detects neighboring pixel values continuous in the vertical direction and the horizontal direction within the race bib estimated area 803, and pixels (edge pixels) which form edges of pixel values, each having an amount of change not less than a threshold value. Approximate straight lines which form a frame of the race bib are generated based on the positions of edge pixels in the vertical direction and the horizontal direction and the numbers of continuous pixel values. A bib frame area 806 shown in FIG. 8C is a bib frame area formed by the approximate straight lines generated by the frame area detection section 107. 
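The edge-pixel test described above can be sketched for a single scan line as follows; this is a minimal illustration, and the threshold value and function name are assumptions.

```python
def edge_pixels(row, threshold):
    """Find edge pixels in one scan line (cf. the frame area detection).

    row is a sequence of pixel values along one line of the race bib
    estimated area; an index is an edge pixel when the amount of change
    between neighbouring pixel values is not less than threshold.
    """
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) >= threshold]
```

Running this over rows and columns yields the vertical and horizontal edge positions from which the approximate straight lines of the bib frame are generated.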
The character count estimation section 108 retrieves an area in the vicinity of the intermediate portion in the vertical direction of the detected bib frame area 806, and sets the area as the character area. A character area 807, a character area 808, a character area 809, a character area 810, and a character area 811 are character areas detected by the character count estimation section 108. Here, the character area 807 is a character area corresponding to a hidden character as in the race bib character area 804, and the character area cannot be correctly detected. On the other hand, in the character areas 808 to 811, where the character areas are correctly detected, the character width and the position in the vertical direction of each digit are equally detected. The character count estimation section 108 can determine that the image is an image having one-digit arbitrary character and four-digit fixed characters based on a relationship between the respective digits of the character areas 808 to 811 which are equal in character width and the character area 807 which is different in character width. The four-digit fixed characters are recognized by the image processing section 105 and the character recognition section 106. As for the one-digit arbitrary character, the character in the bib frame is provisionally generated by applying a character which can be assumed. Here, as the character to be applied, a character, such as numbers 0 to 9, may be applied, or a character may be applied with reference to a character list of all race bib numbers for the event, set in advance. Further, by making use of the fact that persons having the same character string do not exist within the same image, it is also possible to exclusively generate a character. Although the case where hidden characters are estimated based on the character width is described with reference to FIGS. 
6 to 8, by way of example, this is not limitative, but hidden characters may be estimated based on a height of characters, which is in a direction orthogonal to the direction indicated as the example. Further, in a case where a specific color is used for each digit of the race bib number, it is possible to determine the characters based on whether or not the specific color is included in a detected character area. As described above, according to the second embodiment of the present invention, it is possible to efficiently detect a race bib of a person from within a photographed image, and estimate the hidden race bib number based on the character width of the character area or the like. Further, although in the second embodiment of the present invention, the processing performed by the race bib character area detection section 104 and the processing operation performed by the frame area detection section 107 and the character count estimation section 108 are performed in parallel, this is not limitative, but the processing operations may be performed in series, or one of the processing operations may be performed. Third Embodiment Next, a description will be given of a third embodiment of the present invention. In the present embodiment, there is shown an example of application to a moving image in which a race bib number of a person appearing in the moving image is caused to be recognized for each reproduction time of the moving image, and the reproduction time of the moving image and the race bib number are associated with each other. In the third embodiment, the image processing apparatus monitors a moving image reproduction application (not shown) which is reproducing a moving image, and sequentially cuts out the moving image as a still image for character recognition. Next, a reproduction time of the cut-out still image during reproduction of the moving image and the recognized characters are recorded. 
This makes it possible to start reproduction of a moving image from a reproduction time at which a person having a specific race bib number designated by the user appears. FIG. 9 is a block diagram of an example of an image processing apparatus 900 according to the third embodiment of the present invention. A moving image reproduction section 901 is added to the configuration of the image processing apparatus 100 (FIG. 1) in the first embodiment. The same component elements as those in FIG. 1 are denoted by the same reference numerals. The moving image reproduction section 901, the image reading section 101, and the character recognition section 106, which are different from those in the first and second embodiments, will be described in the following. Referring to FIG. 9, the image reading section 101 is provided with not only the function described in the first and second embodiments, but also a function for cutting out (generating) still images from a moving image. As a method of cutting out still images from a moving image, still images are cut out at intervals of a predetermined time period or a predetermined number of frames of the moving image, for example. The moving image reproduction section 901 is a function section that handles information necessary for moving image reproduction. The necessary information includes reproduction time information, information designated by a user, and so forth. The reproduction time information is information indicative of relative time from the start time to the termination time of a moving image. The information designated by a user is a race bib number of an object. The image processing apparatus 900 designates or detects a reproduction time of a moving image to be recognized (target moving image), whereby the moving image reproduction section 901 causes the reproduction time information held therein to match the moving image reproduction time. The moving image reproduction time is the reproduction time of the target moving image. 
The reproduction time information is information held by the moving image reproduction section 901 of the image processing apparatus 900. The moving image reproduction time is information held by the moving image reproduction application, and is information on reproduction time from the leading end of the moving image as the target being reproduced. Here, designation of the reproduction time is by estimating the moving image reproduction time through causing the moving image reproduction application to be started from the image processing apparatus 900 to cause reproduction to be started. Further, detection of the reproduction time is recognition of an elapsed time displayed on a screen of the moving image reproduction application, or detection of the reproduction time information e.g. according to notification from the moving image reproduction application, by the moving image reproduction section 901. The moving image reproduction section 901 measures the reproduction elapsed time in the image processing apparatus 900 to thereby sequentially update the reproduction time information, and estimate the current reproduction time of the moving image. The character recognition section 106 is provided with not only the function described in the first and second embodiments, but also a function for recording characters recognized, for each reproduction time information calculated by the moving image reproduction section 901, through the processing operations performed by the function sections of the image reading section 101, the object detection section 102, the race bib area estimation section 103, the race bib character area detection section 104, the image processing section 105, and the character recognition section 106, in a storage section, such as a database (not shown), in association with the reproduction time. 
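The updating of the reproduction time information by measuring the reproduction elapsed time can be sketched as follows; this is an illustrative API, which the embodiment does not prescribe.

```python
import time

class ReproductionClock:
    """Track reproduction time information as described above.

    After synchronize() is called with the known moving image reproduction
    time, current() estimates the present reproduction time by adding the
    wall-clock time elapsed since synchronization.  The clock source is
    injectable so the behaviour can be tested deterministically.
    """
    def __init__(self, now=time.monotonic):
        self._now = now
        self._sync_repro = 0.0     # reproduction time at the sync point
        self._sync_wall = now()    # wall-clock time at the sync point

    def synchronize(self, reproduction_time):
        """Cause the held reproduction time information to match the
        moving image reproduction time (cf. step S1001)."""
        self._sync_repro = reproduction_time
        self._sync_wall = self._now()

    def current(self):
        """Estimate the current reproduction time of the moving image."""
        return self._sync_repro + (self._now() - self._sync_wall)
```

Each recognized race bib number would then be recorded against the value of current() at the moment its still image was cut out.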
The moving image reproduction section 901 refers to the reproduction time information recorded in the database or the like by the character recognition section 106, calculates a reproduction time at which a race bib number selected by a user's predetermined operation or an external input is recognized, and reproduces the moving image while designating the reproduction time to the target moving image. Here, designation of the reproduction time is by an argument or the like passed to the moving image reproduction application, together with designation of a target moving image, and for example, when performing moving image reproduction of YouTube (registered trademark) on the Internet, it is possible to specify a reproduction start time to a browser application by describing #t=(reproduction start time) together with a path to the target moving image. Process Flow of Image Processing Apparatus 900 FIG. 10 is a flowchart useful in explaining a process performed by the image processing apparatus 900 shown in FIG. 9 from performing synchronization of moving image reproduction to reproducing a moving image of a runner having a race bib number selected by a user. Here, synchronization refers to causing the reproduction time information to match the moving image reproduction time. Referring to FIG. 10, the moving image reproduction section 901 performs synchronization of moving image reproduction, by reproducing the target moving image by designating the reproduction start time, or by detecting the current reproduction time of the moving image being reproduced, to thereby cause the reproduction time information of the moving image reproduction section 901 to match the moving image reproduction time (step S1001). For example, it is possible to perform synchronization of moving image reproduction by causing the image processing apparatus 900 to start moving image reproduction from the start, by setting reproduction time information=0. 
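The designation of the reproduction start time by appending a time fragment to the path can be sketched as follows; the helper name is illustrative, and actual services may accept other parameter forms.

```python
def video_url_with_start(path, start_seconds):
    """Append the reproduction start time to a target moving image path.

    For YouTube-style reproduction, '#t=<seconds>' after the path tells
    the browser application where to start, as described above.
    """
    return "%s#t=%d" % (path, start_seconds)
```

The resulting string would be passed to the browser application together with the designation of the target moving image.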
Here, the target moving image may be a moving image file reproduced by a single application. For example, the target moving image may be a streaming moving image distributed from a server on the Internet, and is only required to be reproduced as a moving image within a display window area of the moving image reproduction application. Next, before performing sequential reading of images from the moving image as still images and character recognition processing, the current reproduction time information of the moving image is calculated, by recognizing the reproduction time from within the moving image displaying area and counting the reproduction time information after synchronization, so as to associate the result of character recognition in the database with the reproduction time information (step S1002). The image reading section 101 detects the display window area of the specific moving image reproduction application, copies the content of an image being displayed from the moving image reproducing screen into the memory or a file, and generates an input image (still image) for recognition processing (step S1003). Character recognition is performed with respect to the input image generated in the step S1003 (step S1004). As for details of the character recognition, the flowchart (steps S201 to S210) in FIG. 2 in the first embodiment or the flowchart (steps S701 to S713) in FIG. 7 in the second embodiment is applied. The moving image reproduction section 901 records the reproduction time information calculated in the step S1002 and the characters recognized in the step S1004 (step S1005). Here, a recording destination is a memory or a file disposed in the image processing apparatus 900, or may be notification to a server on the Internet. 
It is determined whether or not reproduction of the target moving image is terminated, and if reproduction of the target moving image is continued (NO to a step S1006), the process returns to the step S1002, wherein calculation of the next reproduction time information and character recognition of the input image are performed. If reproduction of the target moving image is terminated (YES to the step S1006), the recognized characters are displayed on a selection dialog or the like based on the information of the recognized characters recorded in the step S1005 to prompt a user to select recognized characters by a predetermined operation (step S1007). Note that recognized characters may be selected by an external input, and in this case, for example, the recognized characters desired to be reproduced may be designated from another application. If the user does not select specific recognized characters (NO to a step S1008), the present process is terminated. If the user selects specific recognized characters out of the recognized characters displayed within the dialog (YES to the step S1008), the reproduction time of the recognized characters selected in the step S1008 is detected by referring to the reproduction time information recorded in the step S1005, and moving image reproduction is performed while designating the reproduction time for the target moving image (step S1009). Here, the option of designating the reproduction time is applied to an application for reproducing a moving image file or to an Internet server that reproduces a streaming moving image. In the process flow in FIG. 10, the step in which recognized characters are selected by a user (step S1007) and the step of reproducing a moving image with time designation (step S1009) are provided after moving image reproduction is terminated. 
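The lookup of reproduction times for the selected recognized characters in the steps S1008 and S1009 can be sketched as follows, including the case where the same runner appears in a plurality of scenes; the record layout is an assumption.

```python
def reproduction_times_for(records, selected_bib):
    """Look up every reproduction time at which the selected recognized
    characters appear (cf. steps S1008 and S1009).

    records is a list of (reproduction_time, recognized_characters) pairs
    as recorded in the step S1005.  A runner may appear in several scenes,
    so all matching times are returned in ascending order.
    """
    return sorted(t for t, bib in records if bib == selected_bib)
```

Each returned time could then be designated to the target moving image, matching the option of designating a plurality of reproduction times.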
However, in such a case of a streaming moving image on the Internet, which is formed by a server and a plurality of client computers, before moving image reproduction is terminated in the step S1006 of the present image processing apparatus on one client computer, the recognized character selection step in the step S1007 and the moving image reproduction step in the step S1009 can be performed by another client computer based on reproduction time information and recognized character information notified to the server in the step S1005. In a case where the recognized characters selected by the user appear in a plurality of scenes within the moving image, moving image reproduction may be performed using an option for designating a plurality of reproduction times in the step S1009. As described above, according to the third embodiment of the present invention, a race bib of a person is detected from a reproduced moving image, and a reproduction time and a race bib number are stored in association with each other. By designating the race bib number, it is possible to reproduce a moving image in which appears the person with the specific race bib number, out of the reproduced moving image. It should be noted that the present invention is not limited to the above-described embodiments, but it can be practiced in various forms, without departing from the spirit and scope thereof. Although in the present embodiments, an object is described as a person, the object is not limited to a person, but may be an animal, a vehicle, or the like. Further, although in the description given above, the result of character recognition is associated with a person image within the photographed image, it may be associated with the photographed image itself. 
In addition, although a character string in horizontal writing is described by way of example, this is not limitative, but the present embodiment may be applied to a character string in vertical writing and a character string extending in an oblique direction. Further, it is to be understood that the present invention may also be accomplished by supplying a system or an apparatus with a storage medium in which is stored a program code of software, which realizes the functions of the above described embodiments, and causing a computer (or a CPU, an MPU or the like) of the system or apparatus to read out and execute the program code stored in the storage medium. In this case, the program code itself read out from the storage medium realizes the functions of the above-described embodiments, and the computer-readable storage medium storing the program code forms the present invention. Further, an OS (operating system) or the like operating on a computer performs part or all of actual processes based on commands from the program code, and the functions of the above-described embodiments may be realized by these processes. Further, after the program code read out from the storage medium is written into a memory provided in a function expansion board inserted in the computer or a function expansion unit connected to the computer, a CPU or the like provided in the function expansion board or the function expansion unit executes part or all of the actual processes based on commands from the program code, and the above-described embodiments may be realized according to the processes. 
Other Embodiments Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD™)), a flash memory device, a memory card, and the like. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. 
The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. This application is a bypass continuation application of PCT International Application PCT/JP2015/084585 filed on Dec. 3, 2015 which is based on and claims priority from Japanese Patent Application No. 2014-259258, filed Dec. 22, 2014, and Japanese Patent Application No. 2015-193735, filed Sep. 30, 2015, the contents of which are hereby incorporated by reference herein in their entirety. 15287066 canon imaging systems inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 06:01PM Mar 25th, 2022 06:01PM Technology Technology Hardware & Equipment
nyse:caj Canon Sep 30th, 2014 12:00AM May 4th, 2012 12:00AM https://www.uspto.gov?id=US08849999-20140930 Device control apparatus and method for monitoring device, client apparatus, and device control system A device control apparatus which monitors a state change of a device independently without communication with a client apparatus. A device server as the device control apparatus monitors the state of a device locally connected thereto, using a definition file and a trigger detection algorithm for monitoring the state of the device, and detects a state change of the device. When a state change of the device is detected, the device server transmits a trigger notification indicative of the detection of the state change to the client apparatus. The device server starts a session with the client apparatus having received the trigger notification and relays data communication with the device, of which the state change has been detected. When the session with the client apparatus is disconnected, the device server restarts monitoring of the state of the device. 8849999 1. 
A device control apparatus connected to a client apparatus via a network and to which a device is to be locally connected, comprising: a first detection unit configured to monitor a state of the device using a definition file and a trigger detection algorithm for monitoring the state of the device and detect a state change of the device, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger detection algorithm is a program code describing an execution procedure for monitoring a state of the device; a transmission unit configured to be operable when the state change of the device is detected, to transmit a trigger notification indicative of the detection of the state change to the client apparatus; a data communication control unit configured to start a session with the client apparatus having received the trigger notification and relay data communication with the device, of which the state change has been detected; a second detection unit configured to detect termination of the data communication with the device and resulting disconnection of the session with the client apparatus; and a restart unit configured to be operable when the session with the client apparatus is disconnected, to cause said first detection unit to restart monitoring of the state of the device. 2. 
The device control apparatus according to claim 1, wherein the definition file is formed by a first definition file containing response information sent from the device when a state change of the device occurred, and wherein said first detection unit includes a first determination unit configured to determine whether or not response information from the device and the response information contained in the first definition file match each other, and wherein when the response information from the device and the response information contained in the first definition file match each other, said transmission unit transmits the trigger notification to the client apparatus. 3. The device control apparatus according to claim 1, wherein the definition file is formed by a second definition file containing response information sent from the device when no state change of the device occurred, and said first detection unit includes a second determination unit configured to determine whether or not response information from the device and the response information contained in the second definition file match each other, and wherein when the response information from the device and the response information contained in the second definition file do not match each other, said transmission unit transmits the trigger notification to the client apparatus. 4. The device control apparatus according to claim 1, wherein when no response is received from the client apparatus, said restart unit causes said first detection unit to restart monitoring of the state of the device. 5. 
The device control apparatus according to claim 1, wherein when a request for a session with the device control apparatus is received from the client apparatus during monitoring of the state of the device, said first detection unit terminates the monitoring of the state of the device, and when the session is disconnected, said restart unit causes said first detection unit to restart monitoring of the state of the device. 6. A client apparatus connected, via a network, to a device control apparatus to which a device is to be locally connected, comprising: a generation unit configured to generate a definition file containing request information to a device locally connected to the device control apparatus and response information from the device, which were accumulated during polling for checking an operating state of the device; a transmission unit configured to transmit the generated definition file to the device control apparatus; a reception unit configured to receive a trigger notification indicative of a state change of the device from the device control apparatus having detected the state change of the device; a session control unit configured to start a session with the device control apparatus in response to the trigger notification received by said reception unit; and a virtualization control unit configured to virtually control the device of which the state change has been detected, via the device control apparatus with which the session has been started. 7. The client apparatus according to claim 6, further comprising a judgment unit configured to judge whether or not a definition file applicable to the device is held in the client apparatus itself, and wherein when said judgment unit judges that a definition file applicable to the device is held in the client apparatus, said generation unit does not generate a definition file, but said transmission unit transmits the definition file held in the client apparatus. 8. 
A device control system in which a device control apparatus to which a device is to be locally connected and a client apparatus are connected to each other via a network, wherein the device control apparatus comprises: a first detection unit configured to monitor a state of the device using a definition file and a trigger detection algorithm for monitoring the state of the device and detect a state change of the device, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger detection algorithm is a program code describing an execution procedure for monitoring a state of the device; a transmission unit configured to be operable when the state change of the device is detected, to transmit a trigger notification indicative of the detection of the state change to the client apparatus; a data communication control unit configured to start a session with the client apparatus having received the trigger notification and relay data communication with the device, of which the state change has been detected; a second detection unit configured to detect termination of the data communication with the device and resulting disconnection of the session with the client apparatus; and a restart unit configured to be operable when the session with the client apparatus is disconnected, to cause said first detection unit to restart monitoring of the state of the device, and wherein the client apparatus comprises: a generation unit configured to generate a definition file containing request information to a device locally connected to the device control apparatus and response information from the device, which were accumulated during polling for checking an operating state of the device; a transmission unit configured to transmit the generated definition file to the device control apparatus; a reception unit configured to receive a trigger notification indicative of a state change of the device from the 
device control apparatus having detected the state change of the device; a session control unit configured to start a session with the device control apparatus in response to the trigger notification received by said reception unit; and a virtualization control unit configured to virtually control the device of which the state change has been detected, via the device control apparatus with which the session has been started. 9. A method of controlling a device control apparatus connected to a client apparatus via a network and to which a device is to be locally connected, comprising: monitoring a state of the device using a definition file and a trigger detection algorithm for monitoring the state of the device to detect a state change of the device, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger detection algorithm is a program code describing an execution procedure for monitoring a state of the device; transmitting, when the state change of the device is detected, a trigger notification indicative of the detection of the state change to the client apparatus; starting a session with the client apparatus having received the trigger notification and relaying data communication with the device, of which the state change has been detected; detecting termination of the data communication with the device and resulting disconnection of the session with the client apparatus; and causing, when the session with the client apparatus is disconnected, monitoring of the state of the device to be restarted. 10. 
The method according to claim 9, wherein the definition file is formed by a first definition file containing response information sent from the device when a state change of the device occurred, and said detecting of a state change of the device includes determining as to whether or not response information from the device and the response information contained in the first definition file match each other, and wherein said transmitting of the trigger notification includes transmitting of the trigger notification to the client apparatus when the response information from the device and the response information contained in the first definition file match each other. 11. The method according to claim 9, wherein the definition file is formed by a second definition file containing response information sent from the device when no state change of the device occurred, and said detecting of a state change of the device includes determining as to whether or not response information from the device and the response information contained in the second definition file match each other, and wherein said transmitting of the trigger notification includes transmitting of the trigger notification to the client apparatus when the response information from the device and the response information contained in the second definition file do not match each other. 12. The method according to claim 9, wherein said restarting of monitoring of the state of the device includes restarting of monitoring of the state of the device when no response is received from the client apparatus. 13. The method according to claim 9, wherein when a request for a session with the device control apparatus is received from the client apparatus during monitoring of the state of the device, the monitoring of the state of the device is terminated, and when the session is disconnected the monitoring of the state of the device is restarted. 13 BACKGROUND OF THE INVENTION 1. 
Field of the Invention The present invention relates to a device control apparatus and method, a client apparatus, and a device control system, and more particularly to a device control apparatus equipped with a function for controlling devices via a network, a device control method, a client apparatus, and a device control system. 2. Description of the Related Art With the widespread use of networks, there has been disclosed a device server configured to enable a device (peripheral device), which has conventionally been used by local connection e.g. to a personal computer (PC), to be used by a client PC on a network. For example, there have been proposed some methods for enabling a client PC on a network to use a device, such as a printer, a storage, or a scanner, as a shared device via a device server. As one of the methods, a method has been proposed in which dedicated application software (hereinafter referred to as “the utility”) is preloaded in a client PC, and in the case of accessing a device, a user operates the preloaded utility, thereby causing the client PC to virtually recognize the device to be accessed, as a locally connected device, so that the user can access the device as if it is a locally connected device, from the client PC on a network. In this method, which requires session (connection) start and end operations by a user, the session with a device server is occupied until the user executes an operation for terminating the session with the device using the utility, which disables use of the device by another client PC. To solve the above-mentioned problem, there has been disclosed a network file management system in which a device server permits a specific client PC to perform data transmission with a device only for a time period during which block data having a data length specified by a block header is transmitted, regarding that the device server is in a data transmission occupation state (see e.g. Japanese Patent Laid-Open Publication No. 
2007-317067). Certainly, the network file management system disclosed in Japanese Patent Laid-Open Publication No. 2007-317067 makes it possible for a plurality of client PCs to share a device without execution of manual operation on the client PCs. However, in a case where the connected device very frequently needs to be occupied by a client PC, it is difficult for the client PC to use another device simultaneously due to a technical restriction that in a state where a client PC occupies one device connected thereto via a network, the client PC cannot use another device. For example, in the case of using a device, such as an IC card reader, it is required to periodically make a query (polling) as to whether or not the IC card has been detected, i.e. to carry out a device monitoring process (change-of-state detection process) periodically. In general, the device monitoring process is executed by a device driver installed in a client PC. For this reason, the device is frequently occupied by the client PC via the network, and traffic on the network markedly increases during the occupation of the device. Therefore, it is desirable that the occupation of the device is minimized. Further, in a state where a device is frequently occupied and data is always flowing over the network, data is vulnerable to hacking. This is undesirable in terms of security. In addition, when the above-mentioned device monitoring process (change-of-state detection process) is configured such that a device server stores only trigger detection algorithms applicable to specific devices, so as to eliminate model-dependence which makes processing different on a device-by-device basis, the device server loses its flexibility. On the other hand, when trigger detection algorithms applicable to various devices existing in the system are all stored in a device server, it is possible to maintain the flexibility of the device server. 
However, the device server needs a large-capacity storage area, which causes an increase in costs. SUMMARY OF THE INVENTION The present invention provides a device control apparatus and method, a client apparatus, and a device control system, in which the device control apparatus is provided with a device monitoring process (change-of-state detection process) function conventionally implemented in a client apparatus, whereby the device control apparatus monitors a state change of a device independently without communication with a client apparatus, and when a state change of the device is detected, the device control apparatus notifies the client apparatus of the detection of the state change, thereby dispensing with the need for device monitoring (polling) by the client apparatus and making it possible to reduce traffic on the network. The present invention provides a device control apparatus and method, a client apparatus, and a device control system, in which communication between the client apparatus and the device is performed using a state change of the device as a trigger, whereby the client apparatus occupies the device only when necessary, to thereby reduce security vulnerability, and is capable of using a plurality of devices simultaneously even if occupation of each device is frequently required. The present invention provides a device control apparatus and method, a client apparatus, and a device control system, in which a trigger detection algorithm and a definition file applicable to a device currently monitored are dynamically installed or downloaded into the device server, whereby the device server is capable of executing detection processing for various devices while maintaining its flexibility. 
In a first aspect of the present invention, there is provided a device control apparatus connected to a client apparatus via a network and to which a device is to be locally connected, comprising a first detection unit configured to monitor a state of the device using a definition file and a trigger detection algorithm for monitoring the state of the device and detect a state change of the device, a transmission unit configured to be operable when the state change of the device is detected, to transmit a trigger notification indicative of the detection of the state change to the client apparatus, a data communication control unit configured to start a session with the client apparatus having received the trigger notification and relay data communication with the device, of which the state change has been detected, a second detection unit configured to detect termination of the data communication with the device and resulting disconnection of the session with the client apparatus, and a restart unit configured to be operable when the session with the client apparatus is disconnected, to cause the first detection unit to restart monitoring of the state of the device. 
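The monitoring-notify-relay-restart cycle defined by the first aspect can be illustrated with a minimal sketch. The patent specifies only the functional units (detection, transmission, data communication control, and restart units), so every class, method, and stub name below is an illustrative assumption, not the disclosed implementation.

```python
class DeviceControlApparatus:
    """Sketch of the first aspect: detect a state change of the locally
    connected device, notify the client apparatus, relay the session,
    and resume monitoring once the session is disconnected."""

    def __init__(self, device, client, definition_file, trigger_algorithm):
        self.device = device                        # locally connected device
        self.client = client                        # client apparatus on the network
        self.definition_file = definition_file      # data used by the algorithm
        self.trigger_algorithm = trigger_algorithm  # detection procedure
        self.monitoring = True

    def run_once(self):
        """One monitoring cycle; the real apparatus would loop forever."""
        # First detection unit: poll until the trigger detection
        # algorithm reports a state change of the device.
        while not self.trigger_algorithm(self.device, self.definition_file):
            pass
        self.monitoring = False
        # Transmission unit: send the trigger notification to the client.
        self.client.notify_trigger()
        # Data communication control unit: the client, having received
        # the notification, starts a session; relay client <-> device data.
        session = self.client.start_session()
        session.relay(self.device)
        # Second detection unit / restart unit: when the session is
        # disconnected, monitoring of the device is restarted.
        session.wait_for_disconnect()
        self.monitoring = True
```

Because monitoring is suspended while the session is open, the device is occupied only for the duration of the relayed data communication, which is the traffic-reduction point the summary makes.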
In a second aspect of the present invention, there is provided a client apparatus connected, via a network, to a device control apparatus to which a device is to be locally connected, comprising a generation unit configured to generate a definition file containing request information to a device locally connected to the device control apparatus and response information from the device, which were accumulated during polling for checking an operating state of the device, a transmission unit configured to transmit the generated definition file to the device control apparatus, a reception unit configured to receive a trigger notification indicative of a state change of the device from the device control apparatus having detected the state change of the device, a session control unit configured to start a session with the device control apparatus in response to the trigger notification received by the reception unit, and a virtualization control unit configured to virtually control the device of which the state change has been detected, via the device control apparatus with which the session has been started. 
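The generation unit of the second aspect builds a definition file out of the request/response packets accumulated while polling the device. A minimal sketch, assuming the definition file is a simple mapping keyed by vendor and product IDs; the dictionary layout and field names are illustrative assumptions.

```python
def generate_definition_file(polling_log, vendor_id, product_id):
    """Generation unit (sketch): package the request packets sent to the
    device and the response packets received from it, as accumulated
    during polling for checking the operating state, together with the
    device identification information."""
    return {
        "vid": vendor_id,    # vendor ID identifying the manufacturer
        "pid": product_id,   # product ID identifying the model
        "requests":  [request for request, _ in polling_log],
        "responses": [response for _, response in polling_log],
    }
```

The transmission unit would then send this structure to the device control apparatus, which uses it to monitor the device on the client's behalf.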
In a third aspect of the present invention, there is provided a device control system in which a device control apparatus to which a device is to be locally connected and a client apparatus are connected to each other via a network, wherein the device control apparatus comprises a first detection unit configured to monitor a state of the device using a definition file and a trigger detection algorithm for monitoring the state of the device and detect a state change of the device, a transmission unit configured to be operable when the state change of the device is detected, to transmit a trigger notification indicative of the detection of the state change to the client apparatus, a data communication control unit configured to start a session with the client apparatus having received the trigger notification and relay data communication with the device, of which the state change has been detected, a second detection unit configured to detect termination of the data communication with the device and resulting disconnection of the session with the client apparatus, and a restart unit configured to be operable when the session with the client apparatus is disconnected, to cause the first detection unit to restart monitoring of the state of the device, and wherein the client apparatus comprises a generation unit configured to generate a definition file containing request information to a device locally connected to the device control apparatus and response information from the device, which were accumulated during polling for checking an operating state of the device, a transmission unit configured to transmit the generated definition file to the device control apparatus, a reception unit configured to receive a trigger notification indicative of a state change of the device from the device control apparatus having detected the state change of the device, a session control unit configured to start a session with the device control apparatus in response to the trigger 
notification received by the reception unit, and a virtualization control unit configured to virtually control the device of which the state change has been detected, via the device control apparatus with which the session has been started. In a fourth aspect of the present invention, there is provided a method of controlling a device control apparatus connected to a client apparatus via a network and to which a device is to be locally connected, comprising monitoring a state of the device using a definition file and a trigger detection algorithm for monitoring the state of the device to detect a state change of the device, transmitting, when the state change of the device is detected, a trigger notification indicative of the detection of the state change to the client apparatus, starting a session with the client apparatus having received the trigger notification and relaying data communication with the device, of which the state change has been detected, detecting termination of the data communication with the device and resulting disconnection of the session with the client apparatus, and causing, when the session with the client apparatus is disconnected, monitoring of the state of the device to be restarted. With the above-described configuration, the device control apparatus monitors the device based on the trigger detection algorithm and the definition file, and sends a trigger notification when a state change of the device is detected. The client apparatus having received the trigger notification starts a session with the device control apparatus, and data communication is performed between the client apparatus and the device. When the data communication is completed and the session is disconnected, the device control apparatus restarts monitoring of the device. 
Thus, the device monitoring process conventionally performed by the client apparatus is executed by the device control apparatus, and hence the client apparatus need not perform the device monitoring process. Data communication is performed between the client apparatus and the device only when necessary, and when the data communication is completed, monitoring of the device is restarted. Further, when the client apparatus detects connection of a device thereto, it performs polling for checking the operating state of the device, accumulates packets transmitted and received during the polling, and sends these packets to the device control apparatus. The device control apparatus can perform the monitoring process by using these packets transmitted and received during the polling for a definition file, which makes it possible to monitor even a device the connection of which is newly detected. Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic block diagram of a device control system according to a first embodiment of the present invention. FIG. 2 is a sequence diagram useful in explaining an operation sequence executed by the device control system in FIG. 1. FIG. 3 is a diagram useful in explaining the data structure of a definition file stored in a data storage section appearing in FIG. 1. FIG. 4 is a flowchart of a trigger detection process executed by a device server appearing in FIG. 1. FIGS. 5A and 5B are a sequence diagram useful in explaining an operation sequence executed by a device control system according to a second embodiment of the present invention. FIG. 6 is a diagram useful in explaining the data structure of a packet transmitted and received in the operation sequence in FIG. 5. FIG. 
7 is a diagram useful in explaining the data structure of a request packet and that of a response packet transmitted in the operation sequence in FIG. 5 and a definition file generated by accumulating these packets. FIG. 8 is a flowchart of a trigger detection process executed by a device server in the device control system according to the second embodiment. DESCRIPTION OF THE EMBODIMENTS The present invention will now be described in detail below with reference to the accompanying drawings showing embodiments thereof. FIG. 1 is a schematic block diagram of a device control system according to a first embodiment of the present invention. As shown in FIG. 1, the device control system of the first embodiment comprises a client PC 100 (client apparatus) and a device server 200 (device control apparatus) to which devices 300 (300A, 300B) are locally connected. The client PC 100 and the device server 200 are connected to each other via a network 500. The network 500 may be formed by a wired communication line or wireless communication. The device 300 is connected (locally connected) to the device server 200 by a connection cable 400 compliant with a USB (universal serial bus) interface. The connection cable 400 is not limited to the USB interface, but it may be compliant with another type of interface, such as the IEEE 1394. Next, the configuration of each of the apparatuses forming the device control system will be sequentially described. The client PC 100 comprises, although not shown, a CPU, an input section, a display section, a memory, a communication section, and an external storage section, which are connected to one another via an internal bus. The client PC 100 is capable of communicating with the device server 200 from the communication section 115 via the network 500. 
The external storage section stores software components, such as an operating system (hereinafter referred to as OS), not shown, an application program 101, a resident module 102, a device driver 103, a virtualization control section 104, and a communication control section 105, and includes a data storage section 106 that stores various kinds of data. Each of the software components and the various kinds of data is read into the memory under the control of the CPU, whereby various control processes are executed. The application program 101 is a software program for controlling a device 300 by instructing data input/output request to the resident module 102 and the device driver 103. The resident module 102 is equipped with the following functions: (1) a function of acquiring individual device information for identifying each individual device 300 locally connected to the device server 200; (2) a function of identifying the model and the like of each individual device based on the individual device information; (3) a function of uniquely specifying the device driver 103 and the virtualization control section 104 necessitated for data transmission/reception to and from a device 300 and sequentially and dynamically generating and starting those software components; (4) a function of instructing the start and disconnection of a session with the device server 200 via the communication control section 105; and (5) a function of controlling data transmission/reception to and from the device 300 using the device driver 103 and the virtualization control section 104, after the start of a session with the device server 200. 
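Functions (2) and (3) of the resident module amount to a lookup from the individual device information to the matching driver components. A minimal sketch under stated assumptions: the registry contents, the component names, and the use of a (vendor ID, product ID) pair as the lookup key are all hypothetical.

```python
# Hypothetical registry: (vendor ID, product ID) -> driver component name.
DRIVER_REGISTRY = {
    (0x04A9, 0x1000): "ic_card_reader_driver",
    (0x04A9, 0x2000): "scanner_driver",
}

def select_driver(individual_device_info):
    """Identify the model from the individual device information and
    pick the device driver component to dynamically generate and start
    (resident-module functions (2) and (3))."""
    key = (individual_device_info["vid"], individual_device_info["pid"])
    return DRIVER_REGISTRY.get(key)  # None when no driver is registered
```

The resident module would then instantiate the selected driver and the virtualization control section before instructing the start of a session with the device server (functions (4) and (5)).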
The device driver 103 is a software component that converts a data input/output request from the OS (not shown) or the application program 101 (hereinafter referred to as “the higher-layer software program”) to data in a data format applicable to the device 300 (hereinafter referred to as “the control command”), sends the control command to the virtualization control section 104, and transfers a response to the control command from the device 300 to the higher-layer software program. The virtualization control section 104 is a software component that converts the data input/output request converted to the control command by the device driver 103, to packet data conforming to the USB data format (hereinafter referred to as “USB data”), and converts USB data sent from the communication control section 105 to the same data format as that of the control command to transmit the data to the device driver 103. The virtualization control section 104 is also equipped with a function of simulating, in response to a request for data transmission/reception to or from a device 300, a behavior exhibited when the device 300 is directly connected (locally connected) to the client PC 100 (this function will be hereinafter referred to as “the virtualization control”). The “virtualization control” enables data transmission/reception in the same state as where the device 300 is locally connected to the client PC 100. The communication control section 105 is a software component that performs protocol conversion between USB data received via the virtualization control section 104 and network packets during communication with the device server 200 via the network 500 to thereby control data transmission/reception to and from the device server 200. 
Further, the communication control section 105 performs control for starting or disconnecting a session with the device server 200 in response to a data transmission/reception request sent from the higher-layer application program or the device driver 103 via the virtualization control section 104. The data storage section 106 stores various kinds of data including definition files 107 and trigger detection algorithms 108, etc., described hereinafter with reference to FIG. 3. Each of the definition files 107 is a data file storing commands, information, etc. necessary for an associated trigger detection algorithm 108 in a case where the device server 200 monitors a device 300. Each of the trigger detection algorithms 108 is a program code describing an execution procedure for the device server 200 to monitor a device 300 and detect a state change of the device 300. The device server 200 reads in a definition file 107 associated with the device 300, whereby a device monitoring process (hereinafter referred to as “the trigger detection process”) for monitoring the device 300 is executed according to the above-mentioned execution procedure. The trigger detection process will be described hereinafter with reference to FIG. 4. The definition file 107 and the trigger detection algorithm 108 are a pair of “monitoring information” (monitoring programs) for monitoring the device 300. Each of the definition file 107 and the trigger detection algorithm 108 differs according to the model of a device 300. FIG. 3 is a diagram useful in explaining the data structure of the definition file stored in the data storage section 106 appearing in FIG. 1. The definition file in FIG. 3 is generated based on the functions (specifications) of a device 300. 
The definition file contains device identification information including a vendor ID (VID) and a product ID (PID), data patterns of a request packet sent to the device 300, and data patterns of a response packet received when the state of the device 300 changes. The data patterns of the response packet are the data patterns of only those response packets sent from the device 300 when its state changes. The device server 200 comprises, although not shown, a CPU, a memory, a communication section, a USB interface, and an external storage section, which are connected to one another via an internal bus. The device server 200 is capable of communicating with the client PC 100 via the network 500 and performing data transmission/reception to and from a device 300 (300A, 300B) locally connected to the USB interface 218 thereof by an associated connection cable 400. The external storage section stores software components, such as an OS (not shown), a communication control section 201, and a device control section 202. Each of these software components and various kinds of data stored in a data storage section 203 of the external storage section is read into the memory and executed under the control of the CPU. The communication control section 201 has a function of controlling (starting and disconnecting) a session with the client PC 100 connected from the communication section 215 via the network 500, under the control of the OS. The device control section 202 has a function of controlling the device 300.
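As a rough illustration, the definition-file contents described above (VID, PID, request data patterns, and state-change response data patterns) might be modeled as follows; the class and field names are assumptions for this sketch, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DefinitionFile:
    """Sketch of the definition file of FIG. 3 (names are illustrative)."""
    vendor_id: int                   # VID: identifies the device manufacturer
    product_id: int                  # PID: identifies the device model
    request_patterns: list = field(default_factory=list)   # request packets to send
    response_patterns: list = field(default_factory=list)  # responses seen only on a state change

    def applies_to(self, vid: int, pid: int) -> bool:
        """A definition file applies to a device when both VID and PID match."""
        return self.vendor_id == vid and self.product_id == pid

# A hypothetical definition file for one card-reader model
card_reader = DefinitionFile(vendor_id=0x04A9, product_id=0x1234,
                             request_patterns=[b"\x01\x00"],
                             response_patterns=[b"\x81\x01"])
```

Because the file is keyed on VID and PID only, one definition file serves every individual device of the same model.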
Further, the device control section 202 acquires individual device information 204, a definition file 205, and a trigger detection algorithm 206 via the communication control section 201, which controls communication performed from the communication section 215, and executes processing based on the acquired information (data) with the following functions:
(1) a function of performing conversion between network packets for communication with the client PC 100 and “USB data” transmitted/received to and from a device 300, cooperating with the communication control section 201 to thereby intermediate (relay) data transmission/reception between the client PC 100 and the device 300;
(2) a function of transmitting the individual device information 204 acquired from a device 300 to the client PC 100;
(3) a function of receiving (acquiring) a definition file 107 and a trigger detection algorithm 108 from the client PC 100;
(4) a function of executing the trigger detection process for monitoring (polling) the device 300 at predetermined time intervals, using the definition file 205 and the trigger detection algorithm 206 (described hereinafter), and, upon detection of a state change of the device 300, sending information indicative of the detection of the state change (hereinafter referred to as “the trigger notification”) to the client PC 100; and
(5) a function of detecting disconnection of a session with the client PC 100 by the communication control section 201 and restarting the trigger detection process mentioned in (4).
The individual device information 204 is information for identifying a device 300 on an individual device basis. The individual device information 204 includes a vendor ID (VID) assigned on a device manufacturer basis so as to identify each manufacturer, a product ID (PID) assigned on a device model basis so as to identify each model, and a serial number assigned on a device basis so as to identify each individual device.
The individual device information 204 is acquired from a device 300 by the device control section 202, e.g. when the device 300 is connected to the device server 200. The definition file 205 and the trigger detection algorithm 206 are information needed by the device server 200 to monitor (poll) a device 300 connected to the device server 200 itself. The device server 200 acquires the definition file 205 and the trigger detection algorithm 206 from the client PC 100 based on the individual device information 204. The device 300 (300A, 300B) is a flexible input/output device having a USB interface. The device 300 is e.g. an input device, such as a keyboard, a mouse, or a card reader, a display (output) device, such as a display, a single-function peripheral (SFP), such as a printer, or a multi-function peripheral (MFP) equipped with not only a print function but also a scan function, a copy function, a storage function, and so forth. However, this is not limitative, and the device 300 may be another kind of device. For example, in a case where the device 300 is an IC card reader, when an IC card is held over the IC card reader and the IC card reader performs an operation for reading the IC card, the device server 200 detects the reading operation as a state change of the device 300 and sends the trigger notification to the client PC 100. In the present embodiment, the term “state change of a device” means a change in the operating state of the device. A state change occurs e.g. when an IC card reader is operated for IC card reading (user ID acquisition) or when an operation button is pressed, but is not limited to these examples. Although in the above-described embodiment the device server 200 and the device 300 are provided separately from each other, this is not limitative, and the device server 200 and the device 300 may be provided integrally with each other. FIG.
2 is a sequence diagram useful in explaining an operation sequence executed by the device control system in FIG. 1. The sequence diagram in FIG. 2 shows a process of data transmission/reception performed between the client PC 100 and the device 300A connected to the device server 200. In FIG. 2, when the device 300A is connected to the device server 200, the device server 200 acquires individual device information on the device 300A (step S201) and stores the acquired individual device information as individual device information 204 in the data storage section 203 (step S202). At this time, the device server 200 determines whether or not the acquired individual device information 204 contains a serial number. If the acquired individual device information 204 does not contain a serial number, the device server 200 generates unique information corresponding to a serial number, based on unique information held by the device server 200 itself (e.g. a MAC address) and unique information on the port to which the device is connected (e.g. a port number), and adds the generated unique information to the individual device information 204. This makes it possible to identify a device even when a plurality of devices of the same model, none of which has a serial number in its associated individual device information 204, are connected to the device server 200. Then, the resident module 102 of the client PC 100 sends a search packet to the device server 200 via the communication control section 105 so as to identify the device 300A locally connected to the device server 200. For example, the resident module 102 sends a search packet to (queries) the device server 200 using a protocol such as UDP (user datagram protocol). Upon receipt of the search packet, the device server 200 sends the individual device information 204 stored in the data storage section 203 to the client PC 100 (step S203).
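The serial-number fallback described above can be sketched as follows. The patent only states that server-unique information (e.g. a MAC address) and port-unique information (e.g. a port number) are combined, so the concatenation format used here is purely an assumption:

```python
def make_pseudo_serial(mac_address: str, port_number: int) -> str:
    """Synthesize serial-like unique information for a device that reports none.

    Combines information unique to the device server (its MAC address) with
    information unique to the connection point (the port number), so that two
    serial-less devices of the same model on different ports stay distinguishable.
    """
    return f"{mac_address.replace(':', '').upper()}-P{port_number:02d}"

# Two same-model devices without serial numbers, on ports 1 and 2
s1 = make_pseudo_serial("00:1e:8f:aa:bb:cc", 1)
s2 = make_pseudo_serial("00:1e:8f:aa:bb:cc", 2)
```

Since the MAC address is unique per server and the port number is unique within a server, the combination is unique across the whole system.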
When the individual device information 204 is acquired from the device server 200, the resident module 102 of the client PC 100 identifies the device 300A based on the vendor ID (VID), product ID (PID), serial number, etc. contained in the individual device information 204. Based on the individual device-identifying information contained in the individual device information 204, the resident module 102 uniquely specifies a device driver 103 and a virtualization control section 104, dynamically generates these software components in sequence, and sequentially starts them. These software components make it possible for the client PC 100 to virtually control the device 300A (step S204). The client PC 100 specifies a definition file 107 and a trigger detection algorithm 108 associated with the model of the device 300A identified based on the individual device information 204, from the definition files 107 and the trigger detection algorithms 108 stored in the data storage section 106. Then, the client PC 100 generates an installation packet containing the specified definition file 107 and trigger detection algorithm 108, causes the communication control section 105 to start a session with the device server 200 (step S205), and sends the generated installation packet to the device server 200 (step S206). When it is impossible to start the session due to no response (timeout), connection rejection, or some other reason, the device server 200 performs error handling (e.g. by sending an error notification to the device 300A or performing alarm notification in its own apparatus), followed by terminating the present process. Upon receipt of the installation packet, the device server 200 installs the definition file 107 and the trigger detection algorithm 108 contained in the packet and stores them in the data storage section 203 as a definition file 205 (first definition file) and a trigger detection algorithm 206, respectively (step S207).
Note that when the device connected to the device server 200 is determined to be a device other than the device 300A as a result of the identification based on the individual device information 204, the application program 101 of the client PC 100 does not execute the processing for generating the software components. When the session between the client PC 100 and the device server 200 is disconnected (step S208), the device control section 202 of the device server 200 starts the trigger detection process (monitoring process), described hereinafter with reference to FIG. 4, associated with the device 300A, using the definition file 205 and the trigger detection algorithm 206 stored in the data storage section 203 (step S209). The trigger detection process is also started when a session is not started by the client PC 100 within a predetermined time period after issue of the trigger notification. FIG. 4 is a flowchart of the trigger detection process executed by the device server appearing in FIG. 1. A state change of the device 300A is monitored by the trigger detection process in FIG. 4 (the steps S209 to S214 in FIG. 2). The trigger detection process in FIG. 4 is suspended when an interrupt from the client PC 100 occurs during execution of this process, and is resumed upon termination of the interrupt. Referring to FIG. 4, monitoring (polling) is started upon disconnection of a session between the client PC 100 and the device server 200, or upon timeout when no session has been started by the client PC 100 within a predetermined time period after sending of the trigger notification (YES to a step S401). The device server 200 then sends a request packet from the device control section 202 to the device 300A via the communication control section 201 and the communication section according to a request data pattern (request information) contained in the definition file 205 (see FIG. 3) (step S402).
An interval of the monitoring (polling) can be set based on the definition file 205 (see FIG. 3). This interval is set so as to avoid occupation of the device server 200 by the trigger detection process and enable the use of another function (device). Then, it is determined whether or not a response packet has been received from the device 300A (step S403). If a response packet has been received, it is determined whether or not error information is contained in the response packet (e.g. when the device 300A is disconnected) (step S405). If it is determined in the step S405 that error information has been received, the present process is immediately terminated. On the other hand, if the response packet does not contain error information (NO to the step S405), the device control section 202 compares the response packet from the device 300A with each of the response packet data patterns (response information) contained in the definition file 205 (step S406 (step S210 in FIG. 2)). If it is determined by the comparison in the step S406 that the response packet from the device 300A matches one of the response packet data patterns contained in the definition file 205, it is judged that a state change has occurred in the device 300A (step S211 in FIG. 2), and this state change is detected (step S212 in FIG. 2). In this case, a trigger notification indicative of the state change is required to be transmitted (YES to a step S407), and the trigger notification is sent to the client PC 100 via the communication control section 201 and the communication section (step S409 (step S213 in FIG. 2)), followed by terminating the present process (step S214 in FIG. 2).
On the other hand, if it is determined by the comparison in the step S406 that the response packet from the device 300A does not match any of the response packet data patterns contained in the definition file 205 (no match), it is judged that no state change has occurred in the device 300A (NO to the step S407), and it is determined whether or not monitoring (polling) is terminated (step S408). As long as monitoring (polling) is not terminated (NO to the step S408), the steps S402 et seq. are repeatedly carried out to send a next request packet to the device 300A. If it is determined in the step S403 that no response packet has been received from the device within a predetermined time period after transmission of the request packet (timeout) (YES to a step S404), the process proceeds to the step S408, wherein it is determined whether or not monitoring (polling) is terminated. Referring again to FIG. 2, upon receipt of the trigger notification from the device server 200, the client PC 100 starts a session with the device server 200 (step S215) and starts data transmission/reception (relay of data communication) to and from the device 300A via the device server 200 (step S216) so as to achieve data transmission/reception (step S217). In the step S213, the device server 200 may transmit the trigger notification to the client PC 100 and request the client PC 100 to start a session. Then, when the completion of the data transmission/reception (device control) is detected by the communication control section 105 of the client PC 100 (e.g. when an end operation by a user is detected) (step S218), the session with the device server 200 is disconnected (step S219). When the disconnection is detected, the device server 200 starts the trigger detection process (step S220) and restarts monitoring (polling) of the device 300A.
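The FIG. 4 flow (steps S401 to S409) can be condensed into a short loop. This is a sketch only: `transact` and `notify` are assumed callbacks standing in for the communication control section 201 and the communication section, and the timeout and error conventions are simplified:

```python
def trigger_detection(definition, transact, notify, max_cycles=100):
    """First-embodiment monitoring loop (FIG. 4): a trigger notification is
    sent when a response MATCHES one of the stored state-change patterns.

    transact(request) sends one request packet and returns the response
    packet, or None on timeout; notify() sends the trigger notification.
    """
    for _ in range(max_cycles):                         # S408: until polling ends
        for request in definition["request_patterns"]:  # S402: send a request
            response = transact(request)
            if response is None:                        # S403/S404: timeout
                continue
            if response == b"ERROR":                    # S405: error information
                return False                            # terminate immediately
            if response in definition["response_patterns"]:  # S406: comparison
                notify()                                # S407/S409: state change
                return True
    return False

# Usage: the second polled response matches a state-change pattern
definition = {"request_patterns": [b"\x01"], "response_patterns": [b"\x81\x01"]}
replies = iter([b"\x00", b"\x81\x01"])
notified = []
changed = trigger_detection(definition, lambda req: next(replies),
                            lambda: notified.append(True))
```

The loop terminates on the first match, mirroring how the control right then passes to the client PC until the session is disconnected.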
Similarly, when a session is not started by the client PC 100 within the predetermined time period after transmission of the trigger notification, the device server 200 starts the trigger detection process (step S220) and restarts monitoring (polling) of the device 300A. As described above, according to the first embodiment of the present invention, a control (communication) right over the device 300A is switched from the device server 200 to the client PC 100 in response to the trigger notification, and is switched back from the client PC 100 to the device server 200 when the session is disconnected. Thus, the client PC 100 and the device 300A can perform data communication only when required, and when the data communication is completed, monitoring of the device 300A by the device server 200 is restarted. FIGS. 5A and 5B are sequence diagrams useful in explaining an operation sequence executed by a device control system according to a second embodiment of the present invention. FIGS. 5A and 5B show a process of data transmission/reception performed between the client PC 100 and the device 300B connected to the device server 200. The device control system according to the second embodiment is identical in system configuration to the device control system in FIG. 1. Unlike the first embodiment described above, the second embodiment has a feature that the client PC 100 polls the device 300B detected to have been connected to the device server 200, accumulates request and response packets sent during the polling, and sends these as a definition file to the device server 200. Specifically, the virtualization control section 104 of the client PC 100 polls the device 300B via the communication control section 105 and the communication section 115 according to an instruction from the application program 101 so as to check a response sent when no state change has occurred in the device 300B, e.g.
a response in an “execution wait state” where the device 300B is not performing processing or control, and causes the data storage section 106 to store the request and response packets transmitted and received during the polling. Further, in the operation sequence in FIGS. 5A and 5B, in a trigger detection process in the present embodiment, which will be described hereinafter with reference to FIG. 8, a response packet from the device 300B is compared with response packet data patterns contained in a definition file, and when it is determined by the comparison that the response packet from the device 300B does not match any of the response packet data patterns contained in the definition file (no match), a trigger notification is transmitted to the client PC 100. The second embodiment is distinguished by this point from the first embodiment, in which a trigger notification is transmitted when a response packet from a device matches one of the response packet data patterns contained in a definition file. In FIG. 5A, an operation sequence up to a step where the client PC 100 starts a session with the device server 200 (steps S501 to S505) is the same as the steps S201 to S205 in FIG. 2. When instructed by the application program 101 to start polling the device 300B detected to have been newly connected to the device server 200 (step S511), the client PC 100 instructs the virtualization control section 104, via the device driver 103, to start accumulation of the packets, described hereinafter with reference to FIG. 6, transmitted and received during the polling (step S512) and transmission of request packets for checking the operating state of the device 300B (step S513).
The virtualization control section 104 causes the communication control section 105 to sequentially transmit a request packet for checking the operating state (the “execution wait state” in the present steps) of the device 300B (step S514) and to accumulate the request packet in the data storage section 106 at the same time (step S515). Further, the virtualization control section 104 causes the communication control section 105 to accumulate the received response packets in the data storage section 106 (step S516). This operation is repeatedly carried out until a sequence of polling is completed (steps S517 to S519). Note that the virtualization control section 104 may instruct the communication control section 105 to record the packet transmission/reception process (e.g. the order of the packets) in the data storage section 106. FIG. 6 is a diagram useful in explaining the data structure of a packet transmitted and received in the operation sequence in FIGS. 5A and 5B. Referring to FIG. 6, a header contains signature data for identifying the protocol of the present system, an electronic message size indicative of the data size of the present packet, a command ID indicative of a packet type, and device information for identifying a device (“vendor ID”, “product ID”, and “serial number”). A data portion stores data corresponding to a packet type (command ID). Discrimination between a request packet and a response packet is performed based on the command ID. Referring again to FIG. 5A, when the application program 101 notifies the virtualization control section 104 of completion of polling (step S520), the client PC 100 instructs the virtualization control section 104, via the device driver 103, to terminate accumulation of the packets shown in FIG. 6. The virtualization control section 104 sends the request packets and the response packets accumulated in the data storage section 106, as a definition file (second definition file) described hereinafter with reference to FIG.
7 (step S521), together with a trigger detection algorithm, to the device server 200. FIG. 7 is a diagram useful in explaining the data structure of a request packet and that of a response packet transmitted in the operation sequence in FIG. 5 and the definition file generated by accumulating these packets. The request packet and the response packet separately illustrated in FIG. 7 have the same data structure as that of the request packet and the response packet described with reference to FIG. 6. Therefore, discrimination between the request packet and the response packet is performed based on the difference in the command ID, and the data portion (USB transfer data) of the packet stores data corresponding to a request or a response. For example, in a case where n request packets are used for polling, n request packets and n response packets are accumulated in the data storage section 106, and these packets are sent as a definition file to the device server 200. Referring again to FIG. 5A, upon receipt of the definition file (see FIG. 7) from the client PC 100 via the communication control section 201, the device server 200 stores the definition file as a definition file 205 in the data storage section 203 (step S522). Upon completion of the transmission of the definition file 205, the session between the client PC 100 and the device server 200 is disconnected (step S523), and the trigger detection process in FIG. 8 is executed (steps S524 to S530). The device control section 202 of the device server 200 starts polling similar to the polling by the client PC 100, using the definition file 205. In short, the trigger detection process in FIG. 8, which is associated with the device 300B, is started. FIG. 8 is a flowchart of the trigger detection process executed by the device server 200 in FIG. 1. The trigger detection process in FIG. 8 is basically the same as the trigger detection process in FIG.
4, described hereinbefore in the first embodiment, and the steps S401 to S409 in FIG. 4 correspond to steps S801 to S809 in FIG. 8. The trigger detection process in FIG. 8 is distinguished from the trigger detection process in FIG. 4 by the following points, including the step S807. Specifically, the device server 200 sequentially transmits request packets from the device control section 202 to the device 300B via the communication control section 201 and the communication section according to a request data pattern contained in the definition file 205 (step S525; step S802 in FIG. 8) and receives a response packet from the device 300B (step S526; step S803 in FIG. 8). The device control section 202 compares the response packet from the device 300B with each of the response packet data patterns contained in the definition file 205 (step S806 in FIG. 8) to thereby determine whether or not there is a match between them. Whether or not the response packet from the device 300B matches a response packet data pattern contained in the definition file 205 is determined e.g. by a method in which the packets are compared byte by byte from the packet head, and when the difference is larger than a threshold value set in advance (e.g. 3 bytes or more), it is determined that there is no match, or by a method in which the received packet is compared with all of the n response packet data patterns, and when the received packet differs in the number of bytes from all of the n response packet data patterns, it is determined that there is no match. If it is determined that there is a match (YES to the step S807 in FIG. 8), the device server 200 judges that no state change has occurred, and sends a next request packet to the device 300B. As long as it is determined that there is a match, the device server 200 sequentially transmits the request packets contained in the definition file 205.
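The request and response packets being compared here follow the common layout of FIGS. 6 and 7: a header carrying signature data, the message size, a command ID, and device information, followed by a data portion. A possible rendering is shown below; the field widths, byte order, and signature value are assumptions for illustration, since the patent specifies only which fields exist:

```python
import struct

# Assumed header: 4-byte signature, 4-byte message size, 2-byte command ID,
# 2-byte vendor ID, 2-byte product ID, 16-byte serial number (big-endian).
HEADER_FMT = ">4sIHHH16s"
HEADER_SIZE = struct.calcsize(HEADER_FMT)
CMD_REQUEST, CMD_RESPONSE = 0x0001, 0x0002   # hypothetical command IDs

def build_packet(command_id, vid, pid, serial, payload=b""):
    """Build one packet of FIG. 6: fixed header followed by the data portion."""
    header = struct.pack(HEADER_FMT, b"DEVP", HEADER_SIZE + len(payload),
                         command_id, vid, pid,
                         serial.encode().ljust(16, b"\x00"))
    return header + payload

def is_request(packet):
    """Request vs. response is discriminated by the command ID alone."""
    command_id = struct.unpack_from(HEADER_FMT, packet)[2]
    return command_id == CMD_REQUEST

req = build_packet(CMD_REQUEST, 0x04A9, 0x1234, "SN001", b"\x01\x00")
rsp = build_packet(CMD_RESPONSE, 0x04A9, 0x1234, "SN001", b"\x81\x01")
```

Because requests and responses share one structure, a single accumulated stream of such packets can serve directly as the second definition file.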
Then, when it is determined that there is a match in data pattern between the response packet received in response to the final request packet and its stored counterpart, the device server 200 determines that the polling is terminated (YES to the step S808 in FIG. 8), followed by terminating the present trigger detection process. Note that when no response has been received within a predetermined time period after transmission of a request packet (timeout) (YES to the step S804 in FIG. 8), the process proceeds to the step S808, and since the answer to the question of the step S808 is negative (NO) in this case, the process returns to the step S802, wherein a next request packet is transmitted. If error information is sent from the device 300B in response to a request packet (e.g. when the device 300B is disconnected) (YES to the step S805 in FIG. 8), the trigger detection process is terminated. After termination of the trigger detection process, the definition file 205 may be deleted from the data storage section 203, or the definition file 205 may be stored in the data storage section 203 so that it can be used when the device 300B is connected again (which is determined based on the PID and the VID). On the other hand, when it is determined that there is no match (NO to the step S807 in FIG. 8), the device server 200 judges that a state change (step S527 in FIG. 5B) has occurred in the device 300B and has been detected, and sends a trigger notification (step S528 in FIG. 5B) indicative of this detection to the client PC 100 via the communication control section 201 and the communication section. The device server 200 terminates the trigger detection process in FIG. 8, and the client PC 100 receives the trigger notification (step S529 in FIG. 5B). Upon receipt of the trigger notification from the device server 200, the client PC 100 starts a session with the device server 200, using the reception of the trigger notification as a trigger (step S531 in FIG. 5B).
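The first comparison method described above for the step S807 — byte-wise difference from the packet head against a preset threshold — might be sketched as follows. The helper names and the treatment of length differences are assumptions; the 3-byte figure is the example value from the text:

```python
def byte_difference(received: bytes, pattern: bytes) -> int:
    """Count differing byte positions from the packet head, treating any
    length mismatch as additional differences."""
    common = min(len(received), len(pattern))
    diff = sum(1 for i in range(common) if received[i] != pattern[i])
    return diff + abs(len(received) - len(pattern))

def state_changed(received: bytes, patterns, threshold: int = 3) -> bool:
    """Second embodiment (FIG. 8, step S807): a state change is judged when
    the received response matches NONE of the accumulated no-change patterns,
    i.e. it differs from every pattern by at least `threshold` bytes."""
    return all(byte_difference(received, p) >= threshold for p in patterns)

no_change_patterns = [b"\x00\x01\x02\x03"]
```

Note the inversion relative to the first embodiment: here a *mismatch* against the accumulated execution-wait responses, not a match against stored state-change responses, triggers the notification.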
The following operation sequence (steps S532 to S536 in FIG. 5B) is identical to the operation sequence (steps S216 to S220) in FIG. 2, and therefore description thereof is omitted. As described above, the client PC 100 polls the device 300B in a state where a trigger notification has not been sent, accumulates request and response packets transmitted and received during this polling, and sends the accumulated request and response packets to the device server 200 as a definition file. Thus, the monitoring of a state change of a device (trigger detection process) can be achieved. In the second embodiment of the present invention, the client PC 100 may be further provided with a function of determining whether or not a definition file appropriate to an associated device 300 is held (stored) in the data storage section 106. In this case, when a definition file generated in advance according to the functions (specifications) of the device 300 (see FIG. 3 in the first embodiment of the present invention) or a definition file generated by polling a device of the same model as that of the device 300 (see FIG. 7) has already been stored, the held definition file is sent to the device server 200 without generating a definition file by polling as described above. Only when a definition file applicable to the device 300 has not been stored is a definition file generated by polling. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures and functions. In the operation sequence of FIG. 2 or FIGS. 5A and 5B, when the device 300 is connected to the device server 200, the device server 200 acquires individual device information and starts the operation sequence. However, e.g.
when the power of the device server 200 or the device 300 is turned on, when an instruction is sent from the application program of the client PC 100, or when a predetermined connection operation is performed on the device server 200, the device 300, or both, the device server 200 may acquire the individual device information and execute the operation sequence in FIG. 2 or FIGS. 5A and 5B based on the individual device information. Further, the client PC 100 may send a simplified program (e.g. a shell or a script) which operates as a part of a trigger detection algorithm to the device server 200 together with a definition file, so that the device server 200 can execute the trigger detection process using the definition file and the simplified program. In this case, the trigger detection algorithm causes the device server 200 to perform basic processing of the trigger detection process, such as polling of the device 300 and transmission of a trigger notification to the client PC 100, based on the definition file, and to carry out a part of the process, such as unique processing corresponding to the model of a device, based on the simplified program. For example, a condition determined (detected) as a state change differs from device model to device model, and therefore the simplified program can be used for this determination. With the above-described interoperation between the client PC 100 and the device server 200, the trigger detection algorithm is only required to describe the basic execution procedure of the trigger detection process, which is independent of any device model, so that it is possible to handle a device of any type simply by generating a simplified program for executing processing corresponding to the model of the device. The device server 200 may acquire the definition file and the trigger detection algorithm not from the client PC 100 but from a portable storage medium.
Further, if a device 300 of the same model as that of a device which has been connected before is connected to the device server 200 and a definition file and a trigger detection algorithm associated with the model have already been stored (installed) in the device server 200, the device server 200 is not required to acquire and store the definition file and the trigger detection algorithm. Further, the device server 200 may notify the client PC 100 that it is not necessary to transmit the definition file and the trigger detection algorithm. Furthermore, it is possible to dispose a plurality of client PCs 100 in the system. In this case, the device server 200 can send a trigger notification to the client PCs 100 and permit a client PC 100 which first sends a request for starting connection for a session to establish connection (data exchange) with a device 300. Alternatively, the device server 200 may be configured to permit a predetermined number of client PCs of all client PCs having sent the connection start request to establish connection with the device 300. Further, when a specific client PC 100 cannot receive a trigger notification e.g. due to power-off or a failure, control may be performed such that the trigger notification can be sent to another client PC 100 as an alternative transmission destination. In the above-described embodiments, the method (configuration) is described in which a definition file 107 and a trigger detection algorithm 108 associated with a device 300 are both stored in the client PC 100 appearing in FIG. 1 and the device server 200 receives the definition file 107 and the trigger detection algorithm 108 from the client PC 100. However, in the present invention, the following methods (configurations) can also be employed: (1) Necessary trigger detection algorithms 108 are stored in the device server 200 in advance, and only definition files 107 are stored in the client PC 100. 
In this case, the device server 200 receives from the client PC 100 only a definition file 107 associated with the model of the device 300 identified based on device information. This configuration can be applied, for example, to a case where access to the device server 200 is limited, e.g. due to dependency on the specifications and design of software and hardware or for a reason related to the operation and management of the system, so that it is impossible to receive and execute (or install) a trigger detection algorithm (program code). This configuration is advantageous in that a trigger detection algorithm (program code) is stored in the device server in advance, which makes tampering difficult. (2) The device control system may be configured such that only when a trigger detection algorithm or a definition file associated with the model of an identified device 300 is not stored in the device server 200, the device server 200 acquires the necessary trigger detection algorithm or the necessary definition file, e.g. from the client PC 100. Further, the device server 200 or the client PC 100 may manage trigger detection algorithms and definition files and may be caused to determine whether or not it is required to add or update a trigger detection algorithm or a definition file. With this configuration, the device server 200 can acquire all or part of the trigger detection algorithms and the definition files only when addition or update is required. Further, the device control system may be configured such that the device server 200 accesses the client PC 100 to download (acquire) a trigger detection algorithm and/or a definition file instead of receiving the same from the client PC 100 as in the above-described embodiments. In this case, the client PC 100 is only required to notify the device server 200 that the client PC 100 stores the associated trigger detection algorithm and/or definition file. 
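Configuration (2) above — acquiring a trigger detection algorithm or definition file only when one is not already stored for the identified model — amounts to a small per-model registry on the device server. The sketch below is illustrative only; the class and method names are assumptions, not the patent's implementation.

```python
class MonitoringRegistry:
    """Per-model store of (trigger detection algorithm, definition file) pairs."""

    def __init__(self):
        self._pairs = {}  # model name -> (algorithm, definition file)

    def is_installed(self, model):
        """True when the pair for this model is already stored (installed)."""
        return model in self._pairs

    def store(self, model, algorithm, definition):
        """Install the pair for a model; a repeat install is a no-op."""
        self._pairs.setdefault(model, (algorithm, definition))

    def lookup(self, model):
        """Return the pair applicable to the identified device model."""
        return self._pairs[model]
```

With such a registry, the device server would request the pair from the client PC (or download it) only when `is_installed` returns `False`, matching the acquire-on-demand behavior described above.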
It is also possible to associate a plurality of definition files with one above-mentioned trigger detection algorithm to thereby control a plurality of devices 300 in a manner linked to each other. For example, in a case where a device A and a device B are controlled in a manner linked to each other, control can be performed such that after trigger notifications have been received from both of the two devices, the operation of the device A is started. Note that the present invention can also be applied to a case where a plurality of devices 300 different in model are connected to the device server 200. In this case, the device server 200 stores definition files and trigger detection algorithms (a plurality of pairs) associated with the respective devices on a model-by-model basis. Then, a trigger detection process is executed based on each combination of a trigger detection algorithm and a definition file applicable to each device, whereby state changes of the respective devices 300 can be detected on a device-by-device basis. Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiments, and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiments. For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (e.g., computer-readable medium). This application claims priority from Japanese Patent Application No. 2011-103755 filed May 6, 2011, which is hereby incorporated by reference herein in its entirety. 13464159 canon imaging systems inc. 
USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open 709/224 Mar 25th, 2022 06:01PM Mar 25th, 2022 06:01PM Technology Technology Hardware & Equipment
nyse:caj Canon May 16th, 2017 12:00AM Nov 2nd, 2010 12:00AM https://www.uspto.gov?id=US09654588-20170516 Device control apparatus, client apparatus, device control method, and device control system There is provided a device control apparatus which makes it possible to dispense with device monitoring (polling) by a client apparatus to thereby reduce traffic on a network. A device server 200 acquires, according to device information for identifying a device locally connected to the device server 200, at least one of a trigger detection algorithm and a definition file for monitoring a state change of the device identified based on the device information, and monitors the locally connected device based on at least one of the acquired trigger detection algorithm and definition file. Then, when a state change of the device is detected, the device server 200 sends a trigger notification indicative of the detection of the state change to a client PC 100 via a network 500, and starts a session with the client PC 100 in response to a connection request from the client PC 100 having received the trigger notification. 9654588 1. 
A device control apparatus connected to a client apparatus via a network and to which a device is to be locally connected, the device control apparatus comprising: a processor configured to execute: a device information acquisition task that acquires device information for identifying a device locally connected to the device control apparatus; a monitoring information acquisition task that starts a first session with the client apparatus on the network and acquires, using the first session, at least one of a trigger detection algorithm and a definition file for monitoring a state change of a device identified based on the device information acquired by said device information acquisition task, according to the device information; a monitoring information storage task that stores the at least one of the trigger detection algorithm and the definition file acquired by said monitoring information acquisition task, wherein, when the at least one of the trigger detection algorithm and the definition file is stored, the first session is disconnected; a device monitoring task that monitors the locally connected device based on the at least one of the trigger detection algorithm and the definition file stored in said monitoring information storage task, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger detection algorithm is a program code describing an execution procedure monitoring the state of the device; a trigger notification transmission task that is operable when a state change of the device is detected by said device monitoring task, to transmit a trigger notification indicative of the detection of the state change to the client apparatus via the network; and a session control task that starts a second session with the client apparatus on the network in response to a connection request from a driver software component of the client apparatus having received the trigger notification, 
wherein the second session performs an occupied communication in which the client apparatus temporarily occupies the device locally connected to the device control apparatus, wherein the driver software component of the client apparatus is configured to send, in response to the trigger notification, the connection request to start the second session, and wherein, in response to a determination of a completion of transmission/reception of data which includes information uniquely identifying the device, through the occupied communication, the second session is disconnected by the client apparatus. 2. The device control apparatus according to claim 1, wherein when connection of the device is detected, said device information acquisition task acquires the device information from the detected device. 3. The device control apparatus according to claim 1, further comprising a communication control task that performs protocol conversion between a first data format packet transmitted or received to or from the client apparatus and a second data format packet transmitted or received to or from the device, to thereby relay data transmission/reception between the client apparatus and the device. 4. The device control apparatus according to claim 1, wherein the second session is started to perform a virtualization control, and wherein, by the virtualization control, the client apparatus communicates with the device as if the device were directly connected to the client apparatus. 5. 
A client apparatus connected, via a network, to a device control apparatus to which a device is to be locally connected, comprising: a processor configured to execute: a device information acquisition task that acquires, from the device control apparatus, device information on the device which is locally connected to the device control apparatus; a device identification task that identifies a device based on the device information acquired by the device information acquisition task; a driver software components generation task that identifies driver software components required for transmission/reception of data with the device, based on the device information received by said device information acquisition task, and generates the driver software components, based on information concerning the identified device, the driver software components being comprised of a device driver, a USB class driver, and a USB virtual bus device; a monitoring information storage task that stores at least either of one or more trigger detection algorithms and one or more definition files for monitoring a state change of the device, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger detection algorithm is program code describing an execution procedure monitoring the state of the device; a monitoring information identification task that identifies at least one of a trigger detection algorithm and a definition file appropriate to a device identified based on the device information acquired by said device information acquisition task, from the at least either of the trigger detection algorithms and the definition files stored in said monitoring information storage task; a monitoring information transmission task that starts a first session with the device control apparatus on the network and transmits, using the first session, the at least one of the trigger detection algorithm and the definition file identified 
by said monitoring information identification task to the device control apparatus, wherein, when the at least one of the trigger detection algorithm and the definition file is transmitted, the first session is disconnected; a trigger notification reception task that receives a trigger notification indicative of detection of a state change of the device according to the transmitted at least one of the trigger detection algorithm and the definition file, from the device control apparatus having detected the state change of the device; and a session control task that starts a second session with the device control apparatus on the network, when the driver software components send a connection request in response to the trigger notification received by said trigger notification reception task, wherein the second session performs an occupied communication in which the client apparatus temporarily occupies the device locally connected to the device control apparatus, and wherein, in response to a determination of a completion of transmission/reception of data which includes information uniquely identifying the device, through the occupied communication, the second session is disconnected. 6. The client apparatus according to claim 5, wherein the virtualization control task further performs protocol conversion between a first data format packet transmitted or received to or from the device control apparatus and a second data format packet transmitted or received to or from the driver software component. 7. The client apparatus according to claim 5, wherein the second session is started to perform a virtualization control, and wherein, by the virtualization control, the client apparatus communicates with the device as if the device were directly connected to the client apparatus. 8. 
A device control method executed by a device control apparatus connected to a client apparatus via a network and to which a device is to be locally connected, the device control method comprising: a device information acquisition step of starting a first session with the client apparatus on the network and acquiring, using the first session, device information for identifying a device locally connected to the device control apparatus; a monitoring information acquisition step of acquiring at least one of a trigger detection algorithm and a definition file for monitoring a state change of a device identified based on the device information acquired in said device information acquisition step, according to the device information; a monitoring information storage step of storing the at least one of the trigger detection algorithm and the definition file acquired in said monitoring information acquisition step, wherein, when the at least one of the trigger detection algorithm and the definition file is stored, the first session is disconnected; a device monitoring step of monitoring the locally connected device based on the at least one of the trigger detection algorithm and the definition file stored in said monitoring information storage step, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger detection algorithm is a program code describing an execution procedure monitoring the state of the device; a trigger notification transmission step of transmitting, when a state change of the device is detected in said device monitoring step, a trigger notification indicative of the detection of the state change to the client apparatus via the network; and a session control step of starting a second session with the client apparatus on the network in response to a connection request from a driver software component of the client apparatus having received the trigger notification, wherein the 
second session performs an occupied communication in which the client apparatus temporarily occupies the device locally connected to the device control apparatus, wherein the driver software component of the client apparatus is configured to send, in response to the trigger notification, the connection request to start the second session, and wherein, in response to a determination of a completion of transmission/reception of data which includes information uniquely identifying the device, through the occupied communication, the second session is disconnected by the client apparatus. 9. The device control method according to claim 8, wherein in said device information acquisition step, when connection of the device is detected, the device information is acquired from the detected device. 10. The device control method according to claim 8, further comprising a communication control step of performing protocol conversion between a first data format packet transmitted or received to or from the client apparatus and a second data format packet transmitted or received to or from the device, to thereby relay data transmission/reception between the client apparatus and the device. 11. The device control method according to claim 8, wherein the second session is started to perform a virtualization control, and wherein, by the virtualization control, the client apparatus communicates with the device as if the device were directly connected to the client apparatus. 12. 
A device control method executed by a client apparatus connected, via a network, to a device control apparatus to which a device is to be locally connected, comprising: a device information acquisition step of acquiring, from the device control apparatus, device information on the device which is locally connected to the device control apparatus; a device identification step of identifying a device based on the device information acquired in the device information acquisition step; a driver software components generation step of identifying driver software components required for transmission/reception of data with the device, based on the device information received in said device information acquisition step, and generating the driver software components, based on information concerning the identified device, the driver software components being comprised of a device driver, a USB class driver, and a USB virtual bus device; a virtualization control step of virtually controlling the device as if the device were directly connected to the client apparatus by the generated driver software components; a monitoring information storage step of storing at least either of one or more trigger detection algorithms and one or more definition files for monitoring a state change of the device, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger detection algorithm is program code describing an execution procedure monitoring the state of the device; a monitoring information identification step of identifying at least one of a trigger detection algorithm and a definition file appropriate to a device identified based on the device information acquired in said device information acquisition step, from the at least either of the trigger detection algorithms and the definition files stored in said monitoring information storage step; a monitoring information transmission step of starting a first 
session with the device control apparatus on the network and transmitting, using the first session, the at least one of the trigger detection algorithm and the definition file identified in said monitoring information identification step to the device control apparatus, wherein, when the at least one of the trigger detection algorithm and the definition file is transmitted, the first session is disconnected; a trigger notification reception step of receiving a trigger notification indicative of detection of a state change of the device according to the transmitted at least one of the trigger detection algorithm and the definition file, from the device control apparatus having detected the state change of the device; and a session control step of starting a second session with the device control apparatus on the network, when the driver software components send a connection request in response to the trigger notification received in said trigger notification reception step, wherein the second session performs an occupied communication in which the client apparatus temporarily occupies the device locally connected to the device control apparatus, and wherein, in response to a determination of a completion of transmission/reception of data which includes information uniquely identifying the device, through the occupied communication, the second session is disconnected. 13. The device control method according to claim 12, wherein the virtualization control step further performs protocol conversion between a first data format packet transmitted or received to or from the device control apparatus and a second data format packet transmitted or received to or from the driver software components. 14. 
The device control method according to claim 12, wherein the second session is started to perform a virtualization control, and wherein, by the virtualization control, the client apparatus communicates with the device as if the device were directly connected to the client apparatus. 15. A device control system including a device control apparatus and a client apparatus connected to each other via a network and configured such that a device is to be locally connected to the device control apparatus, wherein the device control apparatus virtually controls the client apparatus and the device as if they were directly connected to each other via the device control apparatus, wherein the device control apparatus comprises: a first processor configured to execute: a device information acquisition task that acquires device information for identifying a device locally connected to the device control apparatus; a monitoring information acquisition task that starts a first session with the client apparatus on the network and acquires, using the first session, at least one of a trigger detection algorithm and a definition file for monitoring a state change of a device identified based on the device information acquired by said device information acquisition task, according to the device information; a first monitoring information storage task that stores the at least one of the trigger detection algorithm and the definition file acquired by said monitoring information acquisition task, wherein, when the at least one of the trigger detection algorithm and the definition file is stored, the first session is disconnected; a device monitoring task that monitors the locally connected device based on the at least one of the trigger detection algorithm and the definition file stored in said first monitoring information storage task, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger 
detection algorithm is a program code describing an execution procedure monitoring the state of the device; a trigger notification transmission task that is operable when a state change of the device is detected by said device monitoring task, to transmit a trigger notification indicative of the detection of the state change to the client apparatus via the network; and a session control task that starts a second session with the client apparatus on the network in response to a connection request from a driver software component of the client apparatus having received the trigger notification, wherein the second session performs an occupied communication in which the client apparatus temporarily occupies the device locally connected to the device control apparatus, wherein the driver software component of the client apparatus is configured to send, in response to the trigger notification, the connection request to start the second session, and wherein, in response to a determination of a completion of transmission/reception of data which includes information uniquely identifying the device, through the occupied communication, the second session is disconnected by the client apparatus, and wherein the client apparatus comprises: a second processor configured to execute: a virtualization control task that virtually controls the device as if the device were directly connected to the client apparatus by the generated driver software components; a device information acquisition task that acquires, from the device control apparatus, device information on the device which is locally connected to the device control apparatus; a device identification task that identifies a device based on the device information acquired by the device information acquisition task; a driver software components generation task that identifies driver software components required for the transmission/reception of data with the device, based on the device information received by said device 
information acquisition task, and generates the driver software components, based on information concerning the identified device, the driver software components being comprised of a device driver, a USB class driver, and a USB virtual bus device; a second monitoring information storage task that stores at least either of one or more trigger detection algorithms and one or more definition files for monitoring a state change of the device, wherein the definition file is a data file storing information necessary for the trigger detection algorithm, and wherein the trigger detection algorithm is a program code describing an execution procedure monitoring the state of the device; a monitoring information identification task that identifies at least one of a trigger detection algorithm and a definition file appropriate to a device identified based on the device information acquired by said device information acquisition task, from the at least either of the trigger detection algorithms and the definition files stored in said second monitoring information storage task; a monitoring information transmission task that starts a first session with the device control apparatus on the network and transmits, using the first session, the at least one of the trigger detection algorithm and the definition file identified by said monitoring information identification task to the device control apparatus, wherein, when the at least one of the trigger detection algorithm and the definition file is transmitted, the first session is disconnected; a trigger notification reception task that receives a trigger notification indicative of detection of a state change of the device, according to the transmitted at least one of the trigger detection algorithm and the definition file from the device control apparatus having detected the state change of the device; and a session control task that starts a second session with the device control apparatus on the network, when the driver software components 
send a connection request in response to the trigger notification received by said trigger notification reception task, wherein the second session performs an occupied communication in which the client apparatus temporarily occupies the device locally connected to the device control apparatus, and wherein, in response to a determination of a completion of transmission/reception of data which includes information uniquely identifying the device, through the occupied communication, the second session is disconnected. 16. The device control system according to claim 15, further comprising a communication control task of the device control apparatus that performs protocol conversion between a first data format packet transmitted or received to or from the client apparatus and a second data format packet transmitted or received to or from the device, to thereby relay data transmission/reception between the client apparatus and the device, and wherein the virtualization control task of the client apparatus further performs protocol conversion between a first data format packet transmitted or received to or from the device control apparatus and a second data format packet transmitted or received to or from the driver software components. 17. The device control system according to claim 15, wherein the second session is started to perform a virtualization control, and wherein, by the virtualization control, the client apparatus communicates with the device as if the device were directly connected to the client apparatus. 17 This application is a U.S. National Phase Application of PCT International Application PCT/JP2010/069871 filed on Nov. 2, 2010, which is based on and claims priority from JP 2009-253395 filed on Nov. 4, 2009, the contents of which are incorporated herein in their entirety by reference. 
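The session lifecycle recited in the claims above — a first session that delivers the monitoring information and is then disconnected, local monitoring by the server, a trigger notification, and a second (occupied) session that the client disconnects once the data exchange completes — can be condensed into a rough sketch. All class and method names are illustrative assumptions, not the patent's actual interfaces.

```python
class DeviceServer:
    """Minimal sketch of the claimed session lifecycle on the server side."""

    def __init__(self):
        self.monitoring_info = None  # (algorithm, definition file) pair
        self.occupied_by = None      # client currently holding the 2nd session

    def first_session(self, algorithm, definition):
        """Acquire and store the monitoring information, then disconnect."""
        self.monitoring_info = (algorithm, definition)
        return "disconnected"

    def detect_state_change(self, notify):
        """Monitor locally; on a state change, send a trigger notification."""
        if self.monitoring_info is not None:
            notify("state_change")

    def second_session(self, client_id):
        """Occupied communication: one client temporarily owns the device."""
        if self.occupied_by is None:
            self.occupied_by = client_id
            return True
        return False

    def end_second_session(self, client_id):
        """Client disconnects once transmission/reception is complete."""
        if self.occupied_by == client_id:
            self.occupied_by = None
```

Note how the occupied second session excludes other clients only for the duration of the exchange, which is the source of the traffic and sharing benefits argued in the description below.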
TECHNICAL FIELD The present invention relates to a device control apparatus, a client apparatus, a device control method, and a device control system, and more particularly to a device control apparatus equipped with a function for controlling devices via a network, a client apparatus, a device control method, and a device control system. BACKGROUND ART With the widespread use of networks, there has been disclosed a device server configured to enable a device (peripheral device), which has conventionally been used by local connection e.g. to a personal computer (PC), to be used by a client PC on a network. For example, there have been proposed some methods for enabling a client PC on a network to use a device, such as a printer, a storage, or a scanner, as a shared device via a device server. As one of the methods, a method has been proposed in which dedicated application software (hereinafter referred to as “the utility”) is preloaded in a client PC, and in the case of accessing a device, a user operates the preloaded utility, thereby causing the client PC to virtually recognize the device to be accessed as a locally connected device, so that the user can access the device from the client PC on the network as if it were a locally connected device. This method requires session (connection) start and end operations by the user, and the session with the device server remains occupied until the user executes a device termination operation using the utility, which prevents another client PC from using the device. To solve the above-mentioned problem, there has been disclosed a network file management system in which a device server permits a specific client PC to perform data transmission with a device, as a data transmission occupation state, only for a time period during which block data having a data length specified by a block header is transmitted (see e.g. Japanese Patent Laid-Open Publication No. 2007-317067). 
SUMMARY OF INVENTION Technical Problem Certainly, the network file management system disclosed in Patent Literature 1 makes it possible for a plurality of client PCs to share a device without execution of manual operation on the client PCs. However, in a case where the connected device very frequently requires occupation by a client PC, it is difficult for the client PC to use another device simultaneously, due to the technical restriction that while a client PC occupies one device connected to it via a network, it cannot use another device. Particularly when the device is an IC card reader, it is required to periodically query (poll) whether or not an IC card has been detected, i.e. to carry out a device monitoring process (change-of-state detection process) periodically. In general, the device monitoring process is executed by a device driver installed in a client PC. For this reason, the IC card reader is frequently occupied by the client PC via the network, and traffic on the network considerably increases during the occupation of the device. Therefore, it is desirable that the occupation of the device be minimized. Further, in a state where a device is frequently occupied and data is constantly flowing on a network, the data is vulnerable to hacking. This is undesirable in terms of security. In addition, when the above-mentioned device monitoring process (change-of-state detection process) is configured such that a device server stores only trigger detection algorithms applicable to specific devices, so as to eliminate the model-dependence which makes processing different on a device-by-device basis, the device server loses its flexibility. On the other hand, when trigger detection algorithms applicable to all the various devices existing in the system are stored in a device server, it is possible to maintain the flexibility of the device server. 
However, the device server then needs a large-capacity storage area, which causes an increase in manufacturing costs. It is a first object of the present invention to provide a device control apparatus, a client apparatus, a device control method, and a device control system, in which the device control apparatus is provided with a device monitoring process (change-of-state detection process) conventionally implemented in a client apparatus, whereby the device control apparatus monitors a state change of a device independently, without communication with a client apparatus, and when a state change of the device is detected, the device control apparatus notifies the client apparatus of the detection of the state change, thereby dispensing with the need for device monitoring (polling) by the client apparatus and making it possible to reduce traffic on the network. It is a second object of the present invention to provide a device control apparatus, a client apparatus, a device control method, and a device control system, in which communication between the client apparatus and the device is performed using a state change of the device as a trigger, whereby the client apparatus is enabled to occupy the device only when it needs to, which reduces the security vulnerability, and to use a plurality of devices simultaneously even if each device frequently requires occupation. It is a third object of the present invention to provide a device control apparatus, a client apparatus, a device control method, and a device control system, in which a trigger detection algorithm and a definition file applicable to the device currently monitored are dynamically installed into or downloaded by the device server, whereby the device server is capable of executing detection processing for various devices while maintaining its flexibility. 
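As a back-of-the-envelope illustration of the traffic reduction targeted by the first object: if a client PC polls an IC card reader itself, every poll crosses the network, whereas server-side monitoring puts only one trigger notification per state change on the network. The figures and function names below are hypothetical, chosen only to make the comparison concrete.

```python
def polling_messages(duration_ms, poll_interval_ms):
    """Network messages when the client PC itself polls the device."""
    return duration_ms // poll_interval_ms

def notification_messages(state_changes):
    """Network messages when the device server monitors the device locally:
    one trigger notification per detected state change."""
    return state_changes

# One hour of client-side polling at 200 ms vs. five card events per hour:
hour_ms = 3_600_000
print(polling_messages(hour_ms, 200))  # 18000 polls cross the network
print(notification_messages(5))        # 5 trigger notifications
```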
Solution to Problem To attain the above objects, according to a first aspect of the present invention, there is provided a device control apparatus connected to a client apparatus via a network and to which a device is to be locally connected, comprising device information acquisition means configured to acquire device information for identifying a device locally connected to the device control apparatus, monitoring information acquisition means configured to acquire at least one of a trigger detection algorithm and a definition file for monitoring a state change of a device identified based on the device information acquired by the device information acquisition means, according to the device information, monitoring information storage means configured to store at least one of the trigger detection algorithm and the definition file acquired by the monitoring information acquisition means, device monitoring means configured to monitor the locally connected device based on the at least one of the trigger detection algorithm and the definition file stored in the monitoring information storage means, trigger notification transmission means configured to be operable when a state change of the device is detected by the device monitoring means, to transmit a trigger notification indicative of the detection of the state change to the client apparatus via the network, and session control means configured to start a session with the client apparatus in response to a connection request from the client apparatus having received the trigger notification. 
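As an informal illustration of how the first-aspect means could cooperate, consider the following sketch. Every name in it (DeviceServer, FakeDevice, install_monitoring_info, and so on) is a hypothetical stand-in, not a name taken from the patent.

```python
# Hedged sketch of the first-aspect device control apparatus.
# All names and data layouts here are illustrative assumptions.

class DeviceServer:
    def __init__(self, device):
        self.device = device        # locally connected device
        self.algorithm = None       # trigger detection algorithm
        self.definition = None      # definition file
        self.notifications = []     # trigger notifications sent to the client
        self.session_open = False

    def acquire_device_info(self):
        # Device information acquisition means: identify the local device.
        return self.device.descriptor

    def install_monitoring_info(self, algorithm, definition):
        # Monitoring information storage means: keep the algorithm and
        # definition file received for this device model.
        self.algorithm = algorithm
        self.definition = definition

    def monitor_once(self):
        # Device monitoring means: run the stored trigger detection
        # algorithm; on a detected state change, push a notification to
        # the client instead of waiting to be polled over the network.
        if self.algorithm(self.device, self.definition):
            self.notifications.append(self.acquire_device_info())

    def on_connection_request(self):
        # Session control means: a session starts only in response to a
        # connection request from the notified client.
        self.session_open = True


class FakeDevice:
    descriptor = {"vid": 0x04A9, "pid": 0x0001, "serial": "X01"}
    state_changed = False

device = FakeDevice()
server = DeviceServer(device)
server.install_monitoring_info(lambda d, _: d.state_changed, definition={})
server.monitor_once()       # no state change yet, so nothing is sent
device.state_changed = True
server.monitor_once()       # state change detected: client is notified
```

The point of the arrangement is that the server, not the client, runs the monitoring loop; the client only reacts to notifications.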
To attain the above objects, according to a second aspect of the present invention, there is provided a client apparatus connected, via a network, to a device control apparatus to which a device is to be locally connected, comprising device information acquisition means configured to acquire, from the device control apparatus, device information on the device which is locally connected to the device control apparatus, monitoring information storage means configured to store at least either of one or more trigger detection algorithms and one or more definition files for monitoring a state change of the device, monitoring information identification means configured to identify at least one of a trigger detection algorithm and a definition file appropriate to a device identified based on the device information acquired by the device information acquisition means, from the at least either of the trigger detection algorithms and the definition files stored in the monitoring information storage means, monitoring information transmission means configured to transmit the at least one of the trigger detection algorithm and the definition file identified by the monitoring information identification means to the device control apparatus, trigger notification reception means configured to receive a trigger notification indicative of detection of a state change of the device from the device control apparatus having detected the state change of the device, and session means configured to start a session with the device control apparatus in response to the trigger notification received by the trigger notification reception means. 
To attain the above objects, according to a third aspect of the present invention, there is provided a device control method executed by a device control apparatus connected to a client apparatus via a network and to which a device is to be locally connected, comprising a device information acquisition step of acquiring device information for identifying a device locally connected to the device control apparatus, a monitoring information acquisition step of acquiring at least one of a trigger detection algorithm and a definition file for monitoring a state change of a device identified based on the device information acquired in the device information acquisition step, according to the device information, a monitoring information storage step of storing the at least one of the trigger detection algorithm and the definition file acquired in the monitoring information acquisition step, a device monitoring step of monitoring the locally connected device based on at least one of the trigger detection algorithm and the definition file stored in the monitoring information storage step, a trigger notification transmission step of transmitting, when a state change of the device is detected in the device monitoring step, a trigger notification indicative of the detection of the state change to the client apparatus via the network, and a session control step of starting a session with the client apparatus in response to a connection request from the client apparatus having received the trigger notification. 
To attain the above objects, according to a fourth aspect of the present invention, there is provided a device control method executed by a client apparatus connected, via a network, to a device control apparatus to which a device is to be locally connected, comprising a device information acquisition step of acquiring, from the device control apparatus, device information on the device which is locally connected to the device control apparatus, a monitoring information storage step of storing at least either of one or more trigger detection algorithms and one or more definition files for monitoring a state change of the device, a monitoring information identification step of identifying at least one of a trigger detection algorithm and a definition file appropriate to a device identified based on the device information acquired in the device information acquisition step, from at least either of the trigger detection algorithms and the definition files stored in the monitoring information storage step, a monitoring information transmission step of transmitting the at least one of the trigger detection algorithm and the definition file identified in the monitoring information identification step to the device control apparatus, a trigger notification reception step of receiving a trigger notification indicative of detection of a state change of the device from the device control apparatus having detected the state change of the device, and a session step of starting a session with the device control apparatus in response to the trigger notification received in the trigger notification reception step. 
To attain the above objects, according to a fifth aspect of the present invention, there is provided a device control system including a device control apparatus and a client apparatus connected to each other via a network and configured such that a device is to be locally connected to the device control apparatus, wherein the device control apparatus comprises device information acquisition means configured to acquire device information for identifying a device locally connected to the device control apparatus, monitoring information acquisition means configured to acquire at least one of a trigger detection algorithm and a definition file for monitoring a state change of a device identified based on the device information acquired by the device information acquisition means, according to the device information, first monitoring information storage means configured to store the at least one of the trigger detection algorithm and the definition file acquired by the monitoring information acquisition means, device monitoring means configured to monitor the locally connected device based on the at least one of the trigger detection algorithm and the definition file stored in the first monitoring information storage means, trigger notification transmission means configured to be operable when a state change of the device is detected by the device monitoring means, to transmit a trigger notification indicative of the detection of the state change to the client apparatus via the network, and session control means configured to start a session with the client apparatus in response to a connection request from the client apparatus having received the trigger notification, and wherein the client apparatus comprises device information acquisition means configured to acquire, from the device control apparatus, device information on the device which is locally connected to the device control apparatus, second monitoring information storage means configured to store at least 
either of one or more trigger detection algorithms and one or more definition files for monitoring a state change of the device, monitoring information identification means configured to identify at least one of a trigger detection algorithm and a definition file appropriate to a device identified based on the device information acquired by the device information acquisition means, from the at least either of the trigger detection algorithms and the definition files stored in the second monitoring information storage means, monitoring information transmission means configured to transmit the at least one of the trigger detection algorithm and the definition file identified by the monitoring information identification means to the device control apparatus, trigger notification reception means configured to receive a trigger notification indicative of detection of a state change of the device from the device control apparatus having detected the state change of the device, and session means configured to start a session with the device control apparatus in response to the trigger notification received by the trigger notification reception means. Advantageous Effects of Invention According to the present invention, the device control apparatus is provided with a device monitoring process (change-of-state detection process) conventionally implemented in a client apparatus, whereby a state change of a device is monitored independently without communication with a client apparatus, and when a state change of the device is detected, the device control apparatus notifies the client apparatus of the detection of the state change. Therefore, the need for device monitoring (polling) by the client apparatus is dispensed with, which makes it possible to reduce traffic on a network. 
According to the present invention, communication between the client apparatus and the device is established using a state change of the device as a trigger, whereby the client apparatus is enabled to occupy the device only when the client apparatus needs to occupy the device and thus the vulnerability of security is reduced, and use a plurality of devices simultaneously even if occupation of the devices is frequently required. According to the present invention, at least one of a trigger detection algorithm and a definition file applicable to a device currently monitored is dynamically installed into or downloaded from the device control apparatus, whereby the device control apparatus can execute detection processing for various devices while maintaining its flexibility. BRIEF DESCRIPTION OF DRAWINGS FIG. 1 is a schematic block diagram of a device control system according to a first embodiment of the present invention. FIG. 2 is a block diagram useful in explaining the hardware configuration and software configuration of a client PC 100 appearing in FIG. 1. FIG. 3 is a block diagram useful in explaining the hardware configuration and software configuration of a device server 200 appearing in FIG. 1. FIG. 4 is a flowchart of a device information acquisition process executed by the device server 200 in FIG. 3 when a device 300 is connected to the device server 200. FIG. 5 is a flowchart of a virtualization control process associated with the device 300 and a transmission process for transmitting a definition file 115 and a trigger detection algorithm 116, which are executed by the client PC 100 appearing in FIG. 1. FIG. 6 is a diagram useful in explaining the data structure of an electronic message (packet) of an installation protocol, which contains the definition file 115 and the trigger detection algorithm 116 transmitted in a step S511 in FIG. 5. FIG. 7 is a diagram useful in explaining the data structure of the definition file 115 appearing in FIG. 6. FIG. 
8 is a diagram useful in explaining the data structure of a command 770 appearing in FIG. 7. FIG. 9 is a flowchart of a control process after device information acquisition executed by the device server 200 appearing in FIG. 3. FIG. 10 is a flowchart of a trigger detection process executed by the device server 200 in a step S910 in FIG. 9. FIG. 11 is a flowchart of a data transmission/reception process executed by the client PC 100 appearing in FIG. 1. FIG. 12 is a diagram useful in explaining the data structure of a data transmission/reception packet transmitted/received in a step S907 in FIG. 9 or in a step S1104 in FIG. 11. FIG. 13 is a schematic block diagram of a device control system according to a second embodiment of the present invention. FIG. 14 is a block diagram useful in explaining the hardware configuration and software configuration of a network device 250 appearing in FIG. 13. DESCRIPTION OF EMBODIMENTS The present invention will now be described in detail below with reference to the drawings showing embodiments thereof. In the following, a description will be given of a first embodiment of the present invention. <1. Configuration of Device Control System> FIG. 1 is a schematic block diagram of a device control system according to the first embodiment of the present invention. As shown in FIG. 1, the device control system comprises client PCs 100 (100A, 100B), device servers 200 (200A, 200B), and devices 300 (300A, 300B). The device server 200 and the device 300 are connected to each other via a connection cable 400 compliant with an interface, such as USB (universal serial bus) or IEEE 1394. Further, the device server 200 and the client PCs (PC 100A, PC 100B) are connected to each other via a wired or wireless network 500. Next, the apparatuses forming the device control system in FIG. 1 will be sequentially described. <2. Configuration of Client PC 100> FIG. 
2 is a block diagram useful in explaining the hardware configuration and software configuration of the client PC 100 appearing in FIG. 1. Referring to FIG. 2, the client PC 100 is an example of a client apparatus according to the first embodiment of the present invention. The client PC 100 includes a CPU 101, an input section 102, a display section 103, a memory 104, a communication section 105, and an external storage section 106, which are connected to each other via an internal bus 107. The CPU 101 functions as a central processing control unit, and executes predetermined programs stored in the memory 104 or the external storage section 106 to thereby perform overall control of the operation of the client PC 100. The input section 102 functions as an operating section via which various input operations, instruction operations, and so forth are performed. The input section 102 includes a keyboard, a mouse, etc. The display section 103 functions as a display for displaying various screens, etc. The display section 103 is incorporated in the client PC 100 or externally connected to the same. The memory 104 functions as a storage area comprising a ROM (read only memory) and a RAM (random access memory). The memory 104 stores predetermined programs and data. The communication section 105 provides an interface for network packet transmission/reception and communication control compatible with a communication method employed by the network 500 implemented e.g. by a wired network, such as an Ethernet (registered trademark), or a wireless network using IEEE 802.11a or IEEE 802.11g. The client PC 100 can perform data transmission/reception to and from the device server 200 via the communication section 105. 
The external storage section 106 stores various software programs and various kinds of data, such as an OS 108, an application program 109, a resident module 110, a device driver 111, a USB class driver 112, a USB virtual bus device 113, a communication control section 114, definition files 115, and trigger detection algorithms 116. Under the control of the CPU 101, the software program(s) and/or data stored in the external storage section 106 are/is read into the memory 104 and are/is executed. The device driver 111, the USB class driver 112, and the USB virtual bus device 113 are driver software components dynamically generated through acquisition and registration of device information on the device 300 by the resident module 110. The application program 109 is a software component for delivering a data transmission/reception request to the device 300 via the driver software components (the device driver 111, the USB class driver 112, and the USB virtual bus device 113) and the communication control section 114. In the following, detailed descriptions will be sequentially given of the resident module 110, the device driver 111, the USB class driver 112, the USB virtual bus device 113, the communication control section 114, the definition files 115, and the trigger detection algorithms 116. The resident module 110 is a software component constantly on standby or operating when the OS 108 is active. The resident module 110 performs data transmission/reception to and from the device server 200 on the network 500 to thereby recognize a device 300 connected to the device server 200 and receive device information on the device 300. 
Then, the resident module 110 uniquely identifies driver software components (the USB virtual bus device 113, the USB class driver 112, and the device driver 111) required for data transmission/reception to and from the device 300, based on the received device information, and sequentially generates the driver software components dynamically. The device driver 111 is a software component that generates a control command to be issued to a device 300, in response to an instruction e.g. from the OS 108 or the application program 109 (hereinafter referred to as “the higher-layer software”), and sends a response to the control command from the device 300 to the higher-layer software. The USB class driver 112 is a software component that generates a plug-and-play event and generates a USB port for use in transmission/reception of a control command, thereby loading the device driver 111 in an upper layer. Further, the USB class driver 112 is a software component that converts a control command generated by the device driver 111 to a USB packet to deliver the USB packet to the USB virtual bus device 113, and converts a USB packet received from the USB virtual bus device 113 to a control command to deliver the control command to the device driver 111. The USB virtual bus device 113 is a software component that provides, when a data transmission/reception request is received from the application program 109 via the device driver 111 and the USB class driver 112, the same behavior (virtualization control) as in a case where a device 300 is directly connected (locally connected) to the client PC 100. This “virtualization control” enables data transmission/reception in a state similar to a state where the device 300 is locally connected to the client PC 100. 
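The layered conversion performed by the driver software components described above can be pictured, very roughly, as follows; the function names and the "USB:" packet prefix are assumptions made purely for illustration, not the actual driver interfaces.

```python
# Illustrative sketch of the driver stack: device driver -> USB class
# driver -> USB virtual bus device. Names and formats are hypothetical.

def device_driver(command):
    # Device driver: turns a higher-layer request into a control command.
    return {"op": command}

def usb_class_driver(control_command):
    # USB class driver: converts the control command to a USB packet.
    return b"USB:" + control_command["op"].encode()

def usb_virtual_bus_device(usb_packet, transport):
    # Virtual bus device: behaves as if the device were locally attached,
    # handing the packet to the communication control section (transport).
    return transport(usb_packet)

# A stand-in transport that would normally tunnel the packet over the
# network to the device server.
sent = []
reply = usb_virtual_bus_device(
    usb_class_driver(device_driver("READ_CARD")),
    transport=lambda pkt: sent.append(pkt) or b"OK",
)
```

Each layer only talks to its neighbors, which is what lets the application program remain unaware that the device is remote.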
The communication control section 114 is a software component that performs protocol conversion between a USB packet received from the USB virtual bus device 113 and a network packet for communication performed via the device server 200 and the network 500, to thereby control data transmission/reception to and from the device server 200 via the communication section 105. Upon receipt of a data transmission/reception request sent from the application program 109 via the USB virtual bus device 113, the communication control section 114 starts a session (connection) with the device server 200, and disconnects the session after completion of data transmission/reception. Each of the definition files 115 is a data file storing commands, information, etc. associated with the trigger detection algorithm 116, which is necessitated when the device server 200 executes monitoring of a device 300 (FIG. 7). Each of the trigger detection algorithms 116 is a program code describing an execution procedure in which the device server 200 monitors a target device 300 and detects a state change of the device 300. An associated definition file 115 is read in for execution of change-of-state detection, and a monitoring process (hereinafter referred to as “the trigger detection process”) associated with the device 300 is executed according to the above-mentioned execution procedure. The definition file 115 and the trigger detection algorithm 116 are a pair of monitoring programs (monitoring information) for monitoring the device 300. Each of the definition file 115 and the trigger detection algorithm 116 differs according to the model of a device 300. For this reason, the client PC 100 stores one or more definition files 115 and one or more trigger detection algorithms 116 corresponding to respective devices 300. FIG. 2 shows definition files 115 and trigger detection algorithms 116 for N models, i.e. models 1 to N. <3. Configuration of Device Server 200> FIG. 
3 is a block diagram useful in explaining the hardware configuration and software configuration of the device server 200 appearing in FIG. 1. Referring to FIG. 3, the device server 200 is an example of the device control apparatus according to the first embodiment of the present invention. The device server 200 includes a CPU 201, a memory 202, a communication section 203, a USB interface 204, and an external storage section 205, which are connected to each other via an internal bus 206. The CPU 201, the memory 202, the communication section 203, and the internal bus 206 are identical in configuration to those of the client PC 100. The USB interface 204 provides an interface for connection to a device 300. The USB interface 204 functions as an input and output interface compliant e.g. with USB (universal serial bus) specifications. The external storage section 205 stores software functional units, such as a communication control section 207 and a device control section 208, and data. The software functional unit(s) and/or data stored in the external storage section 205 are/is read into the memory 202 and are/is executed under the control of the CPU 201. In the following, the communication control section 207 and the device control section 208 will be described sequentially in detail. The communication control section 207 controls (starts and disconnects) a session with the client PC 100 connected thereto via the communication section 203 and the network 500. The communication control section 207 performs protocol conversion between a network packet transmitted or received to or from the client PC 100 and a USB packet transmitted or received to or from a device 300, to thereby intermediate (relay) data transmission/reception between the client PC 100 and the device 300. 
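The two-way protocol conversion (relaying) performed by the communication control section 207 might be sketched as follows, with an invented "NET|" network header standing in for the real network packet format, which the patent does not specify here.

```python
# Illustrative relay sketch; the header layout is an assumption made
# only so the two conversion directions can be shown concretely.

def network_to_usb(network_packet: bytes) -> bytes:
    # Strip the assumed network header to recover the USB payload.
    assert network_packet.startswith(b"NET|")
    return network_packet[len(b"NET|"):]

def usb_to_network(usb_packet: bytes) -> bytes:
    # Wrap a USB packet for transmission back over the network.
    return b"NET|" + usb_packet

def relay(network_packet, device):
    # The communication control section mediates both directions:
    # client -> device on the way in, device -> client on the way out.
    usb_request = network_to_usb(network_packet)
    usb_response = device(usb_request)
    return usb_to_network(usb_response)

# A stand-in device that echoes the request with a completion marker.
response = relay(b"NET|READ", device=lambda req: req + b"-DONE")
```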
The device control section 208 is a functional unit that stores a definition file 209, a trigger detection algorithm 210, and device information 211, and detects a state change of a device 300 while monitoring the device 300, to thereby notify the client PC 100 of the detection of the state change. The definition file 209 and the trigger detection algorithm 210 stored in the device control section 208 are identical in configuration to the definition file 115 and the trigger detection algorithm 116 stored in the external storage section 106 of the client PC 100. However, the device server 200 stores a definition file 209 and a trigger detection algorithm 210 needed only for monitoring (polling) of a device 300 connected to the device server 200. The device information 211 is information for identifying a device 300. The device information 211 includes a vendor ID (VID) assigned on a device manufacturer basis so as to identify each manufacturer, a product ID (PID) assigned on a device model basis so as to identify each model, and a serial number assigned on a device basis so as to identify each device. This device information is acquired from a device 300 by the device control section 208 e.g. when the device 300 is connected to the device server 200. The device control section 208 identifies the model of a connected device 300 based on the device information 211 acquired from the device 300. Further, the device control section 208 receives a definition file 115 and a trigger detection algorithm 116 associated with the identified model of the device 300 from the client PC 100, and stores the received definition file 115 and trigger detection algorithm 116 in the memory 202, as a definition file 209 and a trigger detection algorithm 210. Then, the device control section 208 executes a monitoring (polling) process, described hereinafter with reference to FIG. 
10, on the connected device 300 at predetermined time intervals, using the stored definition file 209 and trigger detection algorithm 210, to thereby detect a state change of the device 300 and notify the client PC 100 of the detected state change. A state change of a device includes, for example, execution of a card reading operation on a card reader, depression of an operation button of a printer or a scanner, etc. The client PC 100 starts a session with the device server 200 using detection of a state change of the device 300 as a trigger. <4. Configuration of Device 300> Each of the devices 300 (300A, 300B) is a general-purpose input and output device having a USB interface, and is e.g. a single function peripheral (SFP), such as a card reader or a printer, or a multi-function peripheral (MFP) equipped with not only a print function, but also a scan function, a copy function, and a storage function. However, this is not limitative, but the device 300 may be any other kind of device. Further, although in the present embodiment, the device server 200 and the device 300 are formed as separate apparatuses, this is not limitative, but the device server 200 and the device 300 may be integrated into a single apparatus such that the device server 200 is accommodated in the casing of the device 300. In the device control system in FIG. 1, formed by the above-described apparatuses, the device server 200 acquires device information on the device 300 connected thereto, and sends the device information to the client PC 100. The client PC 100 reads out a definition file 115 and a trigger detection algorithm 116 for detecting a state change of the device 300 from the external storage section 106, based on the acquired device information on the device 300, and sends the definition file 115 and the trigger detection algorithm 116 to the device server 200. 
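The interval-based monitoring (polling) with trigger notification described above can be sketched minimally as follows; check_state and notify_client are hypothetical stand-ins for the trigger detection process and the trigger notification, and the interval value is illustrative.

```python
# Minimal sketch of polling a local device at predetermined time
# intervals and notifying the client PC on the first state change.

import time

def poll_device(check_state, notify_client, interval_s=1.0, max_polls=10):
    # Poll the locally connected device at fixed intervals; on the first
    # detected state change (e.g. a card read, a button press), send the
    # trigger notification and stop polling.
    for _ in range(max_polls):
        if check_state():
            notify_client()
            return True
        time.sleep(interval_s)
    return False
```

Because this loop runs on the device server itself, no polling traffic crosses the network; only the single notification does.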
The device server 200 stores the definition file 115 and the trigger detection algorithm 116 received from the client PC 100 as a definition file 209 and a trigger detection algorithm 210 in the device control section 208, and monitors (polls) the device 300 using the definition file 209 and the trigger detection algorithm 210. Upon detecting the state change of the device 300, the device control section 208 sends information indicative of detection of the state change (hereinafter referred to as “the trigger notification”) to the client PC 100 via the communication control section 207 and the communication section 203. Upon receipt of the trigger notification from the device server 200, the client PC 100 starts a session with the device server 200 and performs data transmission/reception to and from the device 300 via the device server 200. <5. Process Executed when Device 300 is Connected to Device Server 200> FIG. 4 is a flowchart of a device information acquisition process executed by the device server 200 when a device 300 is connected to the device server 200 in FIG. 3. Referring to FIG. 4, when the device 300 is connected to the device server 200, the device server 200 executes the present device information acquisition process. First, the device control section 208 acquires device information for identifying the device 300 from the device 300 via the USB interface 204 and stores the device information in the device control section 208 (step S401). Device information includes a vendor ID (VID) assigned on a device manufacturer basis for identification of a manufacturer, a product ID (PID) assigned on a device model basis for identification of the model, and a serial number assigned on a device basis for identification of the device. Then, the device control section 208 determines whether or not the device information acquired from the device 300 stores a serial number (step S402). 
If it is determined in the step S402 that the acquired device information does not store a serial number (NO to the step S402), a serial number is generated from unique information of the device server 200 and connection port-unique information of the device server 200, and the serial number is added to the device information (step S403), followed by terminating the present process. On the other hand, if the acquired device information stores a serial number (YES to the step S402), the present process is immediately terminated. Thus, even when a plurality of devices 300 of the same type, which do not store respective serial numbers, are connected to the device server 200, it becomes possible to identify each of the devices 300. In the present process, in a case where a plurality of devices 300 are connected to the device server 200, the present device information acquisition process is repeatedly carried out on a device-by-device basis. The unique information of the device server 200 is information for identifying the device server 200. The unique information includes an IP address, a MAC address, a serial number (manufacture number) assigned to the device server 200, etc., for example. However, the unique information is not limited to one of these information items, but may be any combination of them. The connection port-unique information of the device server 200 is information for identifying a connection port of the device server 200. The connection port-unique information includes the number of a USB port and the number of an IEEE 1394 port provided in the device server 200, for example, but is not limited to these. <6. Virtualization Control Process Associated with Device 300 and Transmission Process for Transmission of Definition File, Etc., which are Executed by Client PC 100> FIG. 
5 is a flowchart of a virtualization control process associated with a device 300 and a transmission process for transmitting a definition file 115 and a trigger detection algorithm 116, which are executed by the client PC 100 appearing in FIG. 1. <6-1. Virtualization Control Process Associated with Device 300, which is Executed by Client PC 100> Referring to FIG. 5, in order to recognize a device 300 connected to the network 500 via the device server 200, the resident module 110 in the client PC 100 broadcasts a search packet to the device server 200 via the communication section 105 (step S501). Specifically, the resident module 110 searches for (queries) the device server 200 using UDP (user datagram protocol) or a similar protocol. The resident module 110 awaits a response from the device server 200 (step S502). If there is no response from the device server 200 (NO to the step S502), the resident module 110 terminates the present process without executing the virtualization control process. On the other hand, if there is a response from the device server 200 (YES to the step S502), the resident module 110 acquires device information (descriptor) contained in a response electronic message from the device server 200 (step S503). The resident module 110 identifies the device based on a vendor ID (VID) and a product ID (PID) described in a device descriptor and a serial number and a device name described in a string descriptor, which are contained in the acquired device information. Further, the resident module 110 identifies an interface number described in an interface descriptor. 
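The identification performed in the step S503 might be expressed as follows; the nested dictionary is an assumed stand-in for the actual USB descriptor data, keeping only the fields the text mentions (VID and PID from the device descriptor, serial number and device name from the string descriptor, and the interface number from the interface descriptor).

```python
# Hypothetical descriptor layout for illustration; real responses carry
# binary USB descriptors, not Python dictionaries.

def identify_device(device_info):
    # Pull out the fields the resident module uses to identify the
    # individual device and its interface.
    dev = device_info["device_descriptor"]
    s = device_info["string_descriptor"]
    iface = device_info["interface_descriptor"]
    return {
        "vid": dev["vid"], "pid": dev["pid"],
        "serial": s["serial"], "name": s["device_name"],
        "interface": iface["number"],
    }

info = {
    "device_descriptor": {"vid": 0x04A9, "pid": 0x2202},
    "string_descriptor": {"serial": "ABC123", "device_name": "CardReader"},
    "interface_descriptor": {"number": 0},
}
ident = identify_device(info)
```

The resulting identity is what drives the choice of driver software components in the next step.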
Based on the information concerning the individual device identified as above, the resident module 110 uniquely identifies driver software components (the USB virtual bus device 113, the USB class driver 112, and the device driver 111) needed for the virtualization control process (step S504), and then sequentially generates the driver software components dynamically (steps S505 to S507). Thereafter, the resident module 110 activates the application program 109, and activates an interface for controlling the driver software components from the application program 109 (step S508). Thus, the virtualization control process associated with the device 300 is started. <6-2. Transmission Process for Transmission of Definition File, Etc., which is Executed by Client PC 100> Then, the client PC 100 identifies the type (model) of the device 300 based on the acquired device information and determines whether or not a definition file 115 and a trigger detection algorithm 116 associated with the device 300 are stored in the external storage section 106 (step S509). If the definition file 115 and the trigger detection algorithm 116 associated with the device 300, which are identified based on the acquired device information, are stored in the external storage section 106 (YES to the step S509), the client PC 100 starts a session with the device server 200 (step S510). Then, the client PC 100 encodes an electronic message (packet) for installation, described hereinafter with reference to FIG. 6, which contains the definition file 115 and the trigger detection algorithm 116 associated with the device 300, and sends the electronic message (packet) to the device server 200 to which the device 300 is connected (step S511). After transmission of the electronic message, the client PC 100 disconnects the session with the device server 200 (step S512), followed by terminating the present process.
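Step S504, which maps the identifiers read from the descriptors to a unique set of driver software components, might be sketched as a simple lookup table. The table entries below are hypothetical examples, not values from the source:

```python
# Hypothetical mapping from (vendor ID, product ID) to the three driver
# software components of step S504: the USB virtual bus device, the USB
# class driver, and the device driver. The IDs and component names are
# illustrative assumptions only.
DRIVER_TABLE = {
    (0x04A9, 0x1234): ("usb_virtual_bus", "usb_class_printer", "printer_driver"),
    (0x04A9, 0x5678): ("usb_virtual_bus", "usb_class_hid", "card_reader_driver"),
}

def identify_driver_components(vid: int, pid: int):
    """Return the driver component set uniquely identified for a device."""
    try:
        return DRIVER_TABLE[(vid, pid)]
    except KeyError:
        raise LookupError(f"no driver components for VID={vid:04X} PID={pid:04X}")
```

The components returned would then be generated dynamically, in order, as in the steps S505 to S507.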
On the other hand, if the definition file 115 and the trigger detection algorithm 116 associated with the device 300, which are identified based on the acquired device information, are not stored in the external storage section 106 (NO to the step S509), the client PC 100 performs error notification (step S513), followed by terminating the present process. The error notification is realized by notifying a user e.g. via the display section 103 that the associated definition file 115 and trigger detection algorithm 116 are not stored and prompting the user to install the definition file 115 and the trigger detection algorithm 116. <7. Data Structure of Packet for Installation> FIG. 6 is a diagram useful in explaining the data structure of an electronic message (packet) of an installation protocol, which contains the definition file 115 and the trigger detection algorithm 116 transmitted in the step S511 in FIG. 5. The packet comprises signature data 610, an electronic message size 620, a command ID 630, a vendor ID 640, a product ID 650, a serial number 660, a definition file 115, and a trigger detection algorithm 116. Based on the vendor ID 640, the product ID 650, and the serial number 660 included in the above-mentioned items, it is possible to uniquely identify the type (model) of the device 300. From N pairs of definition files 115 and trigger detection algorithms 116 stored in the external storage section 106, an associated pair of a definition file 115 and a trigger detection algorithm 116 are selected for each device 300 identified based on the acquired device information, and are stored in the packet. The number of pairs that can be selected is not limited to 1, but a plurality of pairs may be selected. <7-1. Data Structure of Definition File 115> FIG. 7 is a diagram useful in explaining the data structure of the definition file 115 appearing in FIG. 6. Referring to FIG. 
7, the definition file 115 contains commands and information required for execution of a trigger detection algorithm 116. The definition file 115 comprises a data length 710, a vendor ID 720 and a product ID 730 assigned to a device 300 associated with the trigger detection algorithm 116, an interface number 740, a command count (n) 750, key information 760 for use in determining, based on a response from the device 300, whether or not the state of the device 300 has changed, and one or more commands 770 describing a procedure necessary for the trigger detection process (monitoring process) associated with the device 300. The commands 770 have indexes 1 to n assigned thereto (n: value of the command count (n) 750), respectively, in the mentioned order. <7-2. Data Structure of Command> FIG. 8 is a diagram useful in explaining the data structure of the command 770 appearing in FIG. 7. As shown in FIG. 8, the command 770 comprises a command size 810, a transfer type 820 defined by the USB standard necessary for issuing the command, a transfer parameter 830 to be sent by Setup Token of control transfer, an endpoint address 840, a key judgment flag 850 indicative of whether or not the response from the device 300 contains change-of-state information, and a device-specific command 860 to be issued to the device 300. The command 770 stores parameters for issuing one command. <8. Flow of Control by Device Server 200> FIG. 9 is a flowchart of a control process after device information acquisition executed by the device server 200 appearing in FIG. 3. Referring to FIG. 
9, the device server 200 is equipped with the following four functions: (1) a function of notifying device information acquired from a device 300 to the client PC 100; (2) a function of starting a session with the client PC 100 and receiving a definition file 115 and a trigger detection algorithm 116 necessary for monitoring the device 300; (3) a function of starting a session with the client PC 100 and performing data transmission/reception between the client PC 100 and the device 300; and (4) a function of monitoring (polling) the device 300 at predetermined time intervals, detecting a state change of the device 300, and sending the trigger notification to the client PC 100. <8-1. Processing for Transmitting Device Information> The function (1) corresponds to steps S901 to S903 in FIG. 9. The device server 200 determines whether or not a connection request has been received from the client PC 100 (step S901). When a connection request has been received (YES to the step S901), if the connection request is not a TCP connection request, but e.g. a UDP connection request (query) (NO to the step S902), the device server 200 notifies device information to the client PC 100 (step S903), and repeatedly carries out the steps S901 et seq. <8-2. TCP Session Processing> The function (2) and the function (3) correspond to the steps S901 to S902 and the steps S904 to S908 in FIG. 9, respectively. The device server 200 determines whether or not a connection request has been received from the client PC 100 (step S901). When a connection request has been received (YES to the step S901), if the connection request is a TCP connection request (YES to the step S902), the device server 200 starts a session with the client PC 100 (step S904).
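The installation electronic message carried by such a session (its fields are listed with FIG. 6) might be serialized as follows. The field widths, byte order, and command ID value are assumptions, since the source names the fields but not their binary layout:

```python
import struct

# Assumed layout: signature (4 bytes), electronic message size (uint32),
# command ID, vendor ID, product ID (uint16 each), serial number (16 bytes),
# then the length-prefixed definition file and trigger detection algorithm.
HEADER_FMT = "<4sIHHH16s"

def encode_install_packet(vid, pid, serial, definition_file, algorithm):
    """Build an installation packet in the spirit of FIG. 6 (a sketch)."""
    body = (struct.pack("<I", len(definition_file)) + definition_file
            + struct.pack("<I", len(algorithm)) + algorithm)
    size = struct.calcsize(HEADER_FMT) + len(body)
    header = struct.pack(HEADER_FMT, b"INST", size, 0x01, vid, pid,
                         serial.encode().ljust(16, b"\0"))
    return header + body

def decode_install_packet(packet):
    """Recover the fields; the device is identified by VID, PID, and serial."""
    hdr_len = struct.calcsize(HEADER_FMT)
    sig, size, cmd, vid, pid, serial = struct.unpack(HEADER_FMT, packet[:hdr_len])
    off = hdr_len
    (dlen,) = struct.unpack_from("<I", packet, off); off += 4
    definition = packet[off:off + dlen]; off += dlen
    (alen,) = struct.unpack_from("<I", packet, off); off += 4
    algorithm = packet[off:off + alen]
    return vid, pid, serial.rstrip(b"\0").decode(), definition, algorithm
```

On the device server side, the decoded definition file and trigger detection algorithm would then be stored as in the step S906.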
Then, the communication control section 207 determines whether an electronic message (packet) received from the client PC 100 is for installation of a definition file 115 and a trigger detection algorithm 116 or for data transmission/reception (step S905). If it is determined in the step S905 that the electronic message is for installation, the definition file 115 and the trigger detection algorithm 116 contained in the electronic message are stored in the device control section 208, as a definition file 209 and a trigger detection algorithm 210, respectively (step S906), and then the process proceeds to the step S908. If it is determined in the step S905 that the electronic message is for data transmission/reception, data is transmitted/received to/from the device 300 (step S907), and then the process proceeds to the step S908. The data structure of a data transmission/reception packet of the data transmitted/received in the step S907 will be described in detail hereinafter with reference to FIG. 12. In the following step S908, the session with the client PC 100 is disconnected (step S908), and the steps S901 et seq. are repeatedly carried out. <8-3. Trigger Detection Process> The function (4) corresponds to the step S901 and the steps S909 to S910 in FIG. 9. When no connection request has been received from the client PC 100 and the device server 200 is on standby for reception (NO to the step S901), if a definition file 209 and a trigger detection algorithm 210 associated with a connected device 300 are stored in the device control section 208 (YES to the step S909), a trigger detection process (monitoring process) associated with the device 300, described hereinafter with reference to FIG. 10, is executed using the definition file 209 and the trigger detection algorithm 210 (step S910), and then the steps S901 et seq. are repeatedly carried out. FIG. 10 is a flowchart of the trigger detection process executed by the device server 200 in the step S910 in FIG.
9. Referring to FIG. 10, when the trigger detection algorithm 210 is started, the device control section 208 reads in the definition file 209, decodes the same into the form illustrated in FIG. 7, and then sets information specific to the device 300, which is necessary for the trigger detection process (monitoring process), in the memory 202 (step S1001). Then, the device control section 208 determines whether or not the index assigned to the command 770 is smaller than the value of the command count (n) 750 described in the definition file 209, i.e. whether or not the index has reached that value (step S1002). If it is determined in the step S1002 that the index has not reached the value of the command count (n) 750 described in the definition file 209 (YES to the step S1002), one of the commands 770 set in the memory 202 is read out, and the command 770 is decoded into the form illustrated in FIG. 8 (step S1003). The device control section 208 determines a transfer type 820 described in the decoded command 770 (step S1004). The device control section 208 sets a transfer parameter 830 based on the result of the determination (step S1005), and then sends an electronic message having the transfer parameter 830 and a command 860 set therein to the device 300 (step S1006). The device control section 208 awaits a response from the device 300 to the electronic message transmitted in the step S1006. Upon receipt of the response from the device 300 (step S1007), the device control section 208 determines whether or not the key judgment flag 850 of the command 770 is valid (step S1008). If it is determined in the step S1008 that the key judgment flag 850 of the command 770 is valid (YES to the step S1008), the device control section 208 further determines whether or not the received data contains data matching the key information 760 (step S1009).
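The decoding of a command 770 in the step S1003 might look as follows. The field names follow FIG. 8, but the field widths are illustrative assumptions:

```python
from dataclasses import dataclass
import struct

# Assumed binary layout of a command 770: command size (uint32), transfer
# type (uint8), transfer parameter for the Setup Token (8 bytes), endpoint
# address (uint8), key judgment flag (uint8), then the device-specific
# command occupying the rest of the declared size.
CMD_FMT = "<IB8sBB"

@dataclass
class Command:
    transfer_type: int         # 820: USB transfer type needed to issue the command
    transfer_parameter: bytes  # 830: sent by the Setup Token of control transfer
    endpoint_address: int      # 840
    key_judgment_flag: bool    # 850: response may contain change-of-state info
    device_command: bytes      # 860: command issued to the device

def decode_command(buf: bytes) -> Command:
    """Decode one command 770 into the form used by the trigger detection."""
    hdr = struct.calcsize(CMD_FMT)
    size, ttype, param, ep, flag = struct.unpack(CMD_FMT, buf[:hdr])
    return Command(ttype, param, ep, bool(flag), buf[hdr:size])
```

Each decoded command stores the parameters for issuing exactly one command to the device, as the text states.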
If it is determined in the step S1009 that the received data contains data matching the key information 760 (YES to the step S1009), the device control section 208 judges that a state change of the device 300 has been detected, and sends the trigger notification to the client PC 100 (step S1010), followed by terminating the present process. On the other hand, if it is determined in the step S1008 that the key judgment flag 850 of the command 770 is invalid (NO to the step S1008) or if it is determined in the step S1009 that the received data does not contain data matching the key information 760 (NO to the step S1009), the index is incremented to read out a next command (step S1011), whereafter the steps S1002 et seq. are repeatedly carried out. Then, when it is determined in the step S1002 that the index has reached the value of the command count (n) 750 described in the definition file 209, which means that all the commands 770 have been read out (i.e. no state change has been detected), the present process is terminated. <9. Control by Client PC 100 for Data Transmission/Reception> FIG. 11 is a flowchart of a data transmission/reception process executed by the client PC 100 appearing in FIG. 1. The present process is executed via a device stack (the device driver 111, the USB class driver 112, the USB virtual bus device 113, and the communication control section 114). Referring to FIG. 11, the client PC 100 waits until the resident module 110 receives the trigger notification from the device server 200 (step S1101). Upon receipt of the trigger notification from the device server 200 (YES to the step S1101), the resident module 110 notifies the application program 109 that the trigger notification has been received. When the application program 109 determines that data transmission/reception to and from the device 300 is required, the application program 109 starts a TCP session with the device server 200 via the communication control section 114 (step S1102).
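The monitoring loop of FIG. 10 (steps S1002 to S1011) condenses into the following sketch, in which `send_and_receive` and `notify_client` are hypothetical callables standing in for the USB and network I/O of the device server:

```python
def run_trigger_detection(commands, key_info, send_and_receive, notify_client):
    """One pass of the trigger detection (monitoring) process, sketched.

    `commands` are decoded command objects with a `key_judgment_flag`
    attribute; `key_info` is the key information 760 from the definition
    file. Returns True if a state change was detected and notified.
    """
    for command in commands:                  # steps S1002-S1003: next command
        response = send_and_receive(command)  # steps S1004-S1007: issue and wait
        if command.key_judgment_flag:         # step S1008: flag valid?
            if key_info in response:          # step S1009: key match?
                notify_client(response)       # step S1010: trigger notification
                return True
    # All commands read out without a match: no state change detected.
    return False
```

When the loop falls through, the process simply terminates, matching the case where the index reaches the command count without any key match.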
When the start of the session with the device server 200 fails (NO to a step S1103), the present process is terminated. When the start of the session with the device server 200 is successful (YES to the step S1103), the application program 109 performs data transmission/reception to and from the device server 200 via the device stack (the device driver 111, the USB class driver 112, the USB virtual bus device 113, and the communication control section 114) (step S1104). The data structure of a data transmission/reception packet transmitted/received in the step S1104 will be described in detail hereinafter with reference to FIG. 12. The step S1104 is repeatedly carried out until data transmission/reception is completed (NO to a step S1105). When data transmission/reception is all completed (YES to the step S1105), the client PC 100 disconnects the TCP session with the device server 200 (step S1106), followed by terminating the present process. <10. Data Structure of Packet> FIG. 12 is a diagram useful in explaining the data structure of the data transmission/reception packet transmitted/received in the step S907 in FIG. 9 or in the step S1104 in FIG. 11. Referring to FIG. 12, the data transmission/reception packet comprises a protocol header 1200 and USB transfer data 1210. The device control section 208 analyzes the packet to thereby identify a device 300. The protocol header 1200 includes signature data 1201 for identification of a protocol used in the present system, an electronic message size 1202, a command ID 1203 (bulk-in transfer request) assigned to a command issued to the device server 200, a vendor ID (VID) 1204, a product ID (PID) 1205, and a serial number 1206, etc. The device 300 can be uniquely identified by the vendor ID 1204, the product ID 1205, and the serial number 1206 included in the above-mentioned items of the protocol header 1200. 
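The client-side exchange of the steps S1102 to S1106 reduces to a request/response loop over a TCP session. The simple framing below is an assumption, as the real traffic passes through the virtual USB device stack described in the text:

```python
import socket

def exchange_with_device(server_addr, packets, recv_size=4096):
    """Client-side data transmission/reception of FIG. 11, sketched.

    `server_addr` is the (host, port) of the device server; `packets`
    are pre-encoded data transmission/reception packets. The one
    response per request assumption is for illustration only.
    """
    responses = []
    with socket.create_connection(server_addr) as sess:  # step S1102
        for pkt in packets:                              # step S1104, repeated
            sess.sendall(pkt)
            responses.append(sess.recv(recv_size))
    # Leaving the `with` block disconnects the session (step S1106).
    return responses
```

If the connection cannot be established, `socket.create_connection` raises, corresponding to the NO branch of the step S1103.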
As described hereinabove, the device server 200 of the present embodiment is equipped with the function for the device monitoring process (change-of-state detection process), which has conventionally been implemented in a client PC, so that the device server 200 can independently monitor a state change of a device 300 without communication with the client PC 100 and, when a state change of the device 300 is detected, send the detected state change to the client PC 100 as a trigger notification. Therefore, monitoring of the device 300 by the client PC 100 can be dispensed with, which makes it possible to reduce traffic on the network 500. Further, according to the device server 200 of the present embodiment, since communication between the client PC 100 and the device 300 is established using a state change of the device 300 as a trigger, the client PC 100 can occupy the device 300 only when necessary, which makes it possible not only to reduce security vulnerabilities, but also to use a plurality of devices 300 simultaneously even if the occupation of each device 300 is frequently required. Furthermore, according to the device server 200 of the present embodiment, the client PC 100 dynamically installs or downloads a trigger detection algorithm compatible with a device 300 to be monitored, into the device server 200, whereby it is possible to execute the trigger detection process for various devices 300 while maintaining the flexibility of the device server 200. In the following, a description will be given of a second embodiment of the present invention. The second embodiment is distinguished from the above-described first embodiment, in which the device server 200 is equipped with the trigger detection function, in that not the device server 200, but a network device equipped with the trigger detection function corresponds to the device control apparatus and detects a state change of the other device. The network device includes a device (e.g.
a network printer) which is connected to a network, such as a LAN, such that it can be used by a plurality of users. The configuration of the first embodiment described with reference to FIGS. 2 and 4 to 12 is all applicable to the second embodiment by interpreting description of the device server 200 as that of a network device 250. FIG. 13 is a schematic block diagram of a device control system according to the second embodiment of the present invention. As shown in FIG. 13, the device control system of the second embodiment comprises a client PC 100 (100A, 100B), a network device 250, a device server 200, and a device 300 (300A, 300B). The device control system of the present embodiment is identical in configuration to the device control system of the first embodiment, described with reference to FIG. 1, except that the device server 200A is replaced by the network device 250. FIG. 14 is a block diagram useful in explaining the hardware configuration and software configuration of the network device 250 appearing in FIG. 13. Referring to FIG. 14, the network device 250 is characterized by being equipped with the trigger detection function and having a device functional section 262 provided in an external storage section 255. Except for the additionally provided device functional section 262, each of a CPU 251, a memory 252, a communication section 253, a USB interface 254, an internal bus 256, a communication control section 257, a device control section 258, a definition file 259, a trigger detection algorithm 260, and device information 261 is identical to the corresponding one of the CPU 201, the memory 202, the communication section 203, the USB interface 204, the internal bus 206, the communication control section 207, the device control section 208, the definition file 209, the trigger detection algorithm 210, and the device information 211. 
As described above, according to the network device 250 of the present embodiment, by being equipped with a trigger detection function equivalent to that of the device server 200 of the first embodiment, the network device 250 is not only capable of performing functions of its own as a device, e.g. similarly to a network printer, but also capable of monitoring a device 300 (e.g. a card reader) locally connected thereto, and upon detection of a state change of the device 300, transmitting the detected state change to the client PC 100 as a trigger notification. Note that the present invention is not limited to the above-described embodiments, but it can be practiced in various forms, without departing from the spirit and scope thereof. Although in the above, a definition file 115 and a trigger detection algorithm 116 received from the client PC 100 are stored (installed) as a definition file 209 and a trigger detection algorithm 210 in the device server 200 or the network device 250 (hereinafter both collectively represented by “the device server 200”), the definition file 115 and the trigger detection algorithm 116 may be acquired from a portable storage medium connected via the USB interface 204 and be stored in the device control section 208. Alternatively, a management server for managing the whole system may be additionally provided such that the definition file 115 and the trigger detection algorithm 116 are acquired from the management server via the network and are stored in the device control section 208. Further, if a device 300 of the same model as that of a device which has been connected before is connected to the device server 200 and a definition file 209 and a trigger detection algorithm 210 associated with the model have already been stored (installed) in the device server 200, the device server 200 is not required to store the definition file 115 and the trigger detection algorithm 116 received from the client PC 100.
Moreover, the device server 200 may notify the client PC 100 that it is not necessary to transmit the definition file 115 and the trigger detection algorithm 116. Although in the step S1010 in FIG. 10, the trigger notification is sent to a single client PC 100, this is not limitative, but the trigger notification may be sent to a plurality of client PCs 100. In this case, the device server 200 can permit a first client PC 100 of a plurality of client PCs 100, which has issued a connection request, to establish connection to the device 300. Alternatively, the device server 200 can perform control such that a predetermined number of client PCs 100 of all that have issued a connection request are permitted to establish connection to the device 300. Further, when a specific client PC 100 is incapable of receiving a trigger notification due to power-off or a failure, control may be performed such that the trigger notification can be sent to another client PC 100 as an alternative transmission destination. In the above-described embodiments, the method (configuration) is described in which a definition file 115 and a trigger detection algorithm 116 associated with a device 300 are both stored in a client PC 100 and the device server 200 receives the definition file 115 and the trigger detection algorithm 116 from the client PC 100. However, the present invention can also employ the following methods (configurations): (1) Necessary trigger detection algorithms 116 are stored (preloaded) in the device server 200 in advance, and only definition files 115 are stored in the client PC 100. In this case, the device server 200 receives from the client PC 100 only a definition file 115 associated with the model of a device 300 identified based on device information. This configuration can be applied e.g. to a case where access to the device server 200 is limited e.g.
due to dependency on the specifications and design of software and hardware or a reason related to the operation and management of the system, and hence it is impossible to receive and execute (or install) a trigger detection algorithm (program code). The present configuration is advantageous in that a trigger detection algorithm (program code) is stored in the device server in advance, which makes tampering difficult. In this case, in the step S509 in FIG. 5, the client PC 100 does not determine whether or not the external storage section 106 stores trigger detection algorithms 116, but determines only whether or not the external storage section 106 stores definition files 115. Then, only an associated definition file 115 is sent to the device server 200. In other words, the packet which does not contain the item “trigger detection algorithm 116” in FIG. 6 is transmitted. On the other hand, the device server 200 having received this packet stores only the definition file 115 as a definition file 209 in the device control section 208 in the step S906 in FIG. 9. (2) The device control system may be configured such that only when a trigger detection algorithm or a definition file associated with the model of an identified device 300 is not stored in the device server 200, the device server 200 acquires the necessary trigger detection algorithm or the necessary definition file e.g. from the client PC 100. Further, the device server 200 or the client PC 100 may manage trigger detection algorithms and definition files and determine whether or not it is required to add or update a trigger detection algorithm or a definition file. With this configuration, the device server 200 can acquire all or part of the trigger detection algorithms and the definition files only when addition or update is required. 
Moreover, the device control system may be configured such that the device server 200 accesses the client PC 100 to download (acquire) a trigger detection algorithm and/or a definition file instead of receiving the same from the client PC 100 as in the above-described embodiments. In this case, the client PC 100 is only required to notify the device server 200 that the client PC 100 stores the associated trigger detection algorithm and/or definition file. It is also possible to associate a plurality of definition files with one of the above-described trigger detection algorithms to thereby control a plurality of devices 300 in a manner associated with each other. For example, in a case where a device A and a device B are controlled in a manner associated with each other, control can be performed such that after trigger notifications have been received from both of the two devices, the operation of the device A is started. Note that the present invention can also be applied to a case where a plurality of devices 300 different in model are connected to the device server 200. In this case, the device server 200 stores definition files and trigger detection algorithms (a plurality of pairs) associated with the respective devices on a model-by-model basis. Then, a trigger detection process is executed based on each combination of a trigger detection algorithm and a definition file appropriate to each of the devices, whereby state changes of the respective devices can be detected. It is to be understood that the object of the present invention may also be accomplished by supplying a system or an apparatus with a storage medium in which a program code of software, which realizes the functions of either of the above-described embodiments, is stored, and causing a computer (or CPU or MPU) of the system or apparatus to read out and execute the program code stored in the storage medium.
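Storing definition files and trigger detection algorithms on a model-by-model basis, as described above, can be sketched as a mapping keyed by vendor and product ID; the keying scheme is an assumption for illustration:

```python
# A sketch of a model-by-model store of (definition file, trigger
# detection algorithm) pairs, keyed by (vendor ID, product ID).
class ModelStore:
    def __init__(self):
        self._pairs = {}

    def install(self, vid, pid, definition_file, algorithm):
        """Store the pair received (or downloaded) for one device model."""
        self._pairs[(vid, pid)] = (definition_file, algorithm)

    def needs_install(self, vid, pid):
        """A device of an already-known model requires no new transfer."""
        return (vid, pid) not in self._pairs

    def lookup(self, vid, pid):
        """Return the pair used to run trigger detection for this model."""
        return self._pairs[(vid, pid)]
```

With such a store, the device server can skip re-receiving a pair when a second device of the same model is connected, and can run a separate trigger detection process for each connected model.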
In this case, the program code itself read from the storage medium realizes the functions of either of the above-described embodiments, and therefore the computer-readable storage medium in which the program code is stored constitutes the present invention. Further, the functions of either of the above-described embodiments may also be accomplished by causing an OS (operating system) or the like which operates on the computer to perform a part or all of the actual operations based on instructions of the program code. Moreover, the functions of either of the above-described embodiments may be accomplished by writing a program code read out from the storage medium into a memory provided on an expansion board inserted into a computer or a memory provided in an expansion unit connected to the computer and then causing a CPU or the like provided in the expansion board or the expansion unit to perform a part or all of the actual operations based on instructions of the program code. Examples of the storage medium for supplying the program code include a floppy (registered trademark) disk, a hard disk, a magnetic-optical disk, an optical disk, such as a CD or a DVD, a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded via a network. 
REFERENCE SIGNS LIST
100 (100A, 100B) client PC
101 CPU
102 input section
103 display section
104 memory
105 communication section
106 external storage section
107 internal bus
108 OS
109 application program
110 resident module
111 device driver
112 USB class driver
113 USB virtual bus device
114 communication control section
115 definition file
116 trigger detection algorithm
200 (200A, 200B) device server
201 CPU
202 memory
203 communication section
204 USB interface
205 external storage section
206 internal bus
207 communication control section
208 device control section
209 definition file
210 trigger detection algorithm
211 device information
250 network device
251 CPU
252 memory
253 communication section
254 USB interface
255 external storage section
256 internal bus
257 communication control section
258 device control section
259 definition file
260 trigger detection algorithm
261 device information
262 device functional section
300 (300A, 300B) DEVICE
400 connection cable
500 network
13505537 canon imaging systems inc. USA B2 Utility Patent Grant (with pre-grant publication) issued on or after January 2, 2001. Open Mar 25th, 2022 06:01PM Mar 25th, 2022 06:01PM Technology Technology Hardware & Equipment
