PaddleOCR service deployment and calling it from Java

The previous article covered basic usage of PaddleOCR, but the ultimate goal is to deploy it as a service that we can call. This article introduces its service deployment options.

Choose a deployment method

The officially supported options are as follows:
Python inference
C++ inference
Serving deployment (Python/C++)
Paddle-Lite on-device deployment (ARM CPU/OpenCL ARM GPU)
Paddle.js deployment

Each method has its own advantages and disadvantages.

Since I do Java development and don't know Python, I chose Serving deployment.
PaddleOCR provides two Serving deployment methods:

deployment based on PaddleHub Serving;
deployment based on PaddleServing.

I chose to deploy through PaddleHub Serving.

Install Hub Serving

Prepare the environment

pip install paddlehub -i https://mirror.baidu.com/pypi/simple

Check the installation afterwards

Download the inference model

Create a new 'inference' folder under PaddleOCR and put the prepared inference models into it; the default is the v1.1 ultra-lightweight model

https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/quickstart.md

The default model path is:
Detection model: ./inference/ch_ppocr_mobile_v1.1_det_infer/
Recognition model: ./inference/ch_ppocr_mobile_v1.1_rec_infer/
Orientation classifier: ./inference/ch_ppocr_mobile_v1.1_cls_infer/
The model paths can be viewed and modified in params.py. More models can be downloaded from the model zoo provided by PaddleOCR, or you can replace them with models you have trained and converted yourself.

Install the service module

# In Linux, the installation commands are as follows:
# Install the detection service module:
hub install deploy/hubserving/ocr_det/

# Or, install the recognition service module:
hub install deploy/hubserving/ocr_rec/

# Or, install the detection+recognition pipeline service module:
hub install deploy/hubserving/ocr_system/

# In Windows (the folder separator is \), the installation commands are as follows:
# Install the detection service module:
hub install deploy\hubserving\ocr_det\

# Or, install the recognition service module:
hub install deploy\hubserving\ocr_rec\

# Or, install the detection+recognition pipeline service module:
hub install deploy\hubserving\ocr_system\

It is best to install all of these modules; otherwise an error may be reported at startup

Start the service

There are two ways to start the service: globally, or by specifying a configuration file

# Global start
hub serving start -m ocr_system

I start it by specifying the configuration file here, which requires first switching to the PaddleOCR directory

hub serving start -c deploy\hubserving\ocr_system\config.json
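For reference, the config.json shipped in deploy/hubserving/ocr_system is shaped roughly like the following. The exact fields and values may differ between PaddleOCR versions, so treat this as an illustrative sketch and check the file in your own checkout:

```json
{
    "modules_info": {
        "ocr_system": {
            "init_args": {
                "version": "1.0.0",
                "use_gpu": false
            },
            "predict_args": {}
        }
    },
    "port": 8868,
    "use_multiprocess": false,
    "workers": 2
}
```

The port field here is why the service later answers on 8868; set use_gpu to true only if your paddlepaddle build supports GPU.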

For other parameters of startup, refer to the official documentation

**Note:** If startup fails with an error that some path cannot be found, open the params.py files of ocr_system, ocr_det, and ocr_rec under PaddleOCR\deploy\hubserving and replace every model_dir with an absolute path in Windows format.

This completes the deployment of a service API, using the default port number 8868.
Access example:
python tools/test_hubserving.py --server_url=http://127.0.0.1:8868/predict/ocr_system --image_dir=img/22.jpg
Output result:
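A successful call returns a JSON body shaped roughly as below (the values are illustrative, not real output; the exact status code string may vary by PaddleHub version). The results field is a list with one entry per input image; each entry is a list of recognized lines carrying the text, a confidence score, and the four corner points of text_region:

```json
{
  "msg": "",
  "results": [
    [
      {
        "confidence": 0.9853,
        "text": "Sample text",
        "text_region": [[24, 30], [225, 30], [225, 65], [24, 65]]
      }
    ]
  ],
  "status": "0"
}
```

This is the structure the Java code parses out of the response.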

Java call

We can call the service from Java; the code is as follows:

/**
 * @author: fueen
 * @createTime: 2022/11/28 10:01
 */
@RestController
@RequestMapping("/paddleocr")
public class PaddleOCRController {

    @PostMapping("/upload")
    public String fileUpload(@RequestParam("file") MultipartFile file, HttpServletRequest req, Model model){
        try {
            // Receive the uploaded file
            String fileName = System.currentTimeMillis() + file.getOriginalFilename();
            String destFileName = req.getServletContext().getRealPath("") + "uploaded" + File.separator + fileName;
            File destFile = new File(destFileName);
            destFile.getParentFile().mkdirs();
            System.out.println(destFile);
            file.transferTo(destFile);
            // Pass the path of the uploaded file to the front-end template engine
            model.addAttribute("fileName", "uploaded" + File.separator + fileName);
            model.addAttribute("path", destFile);
            // Build the request: JSON headers plus an "images" parameter
            // holding the Base64-encoded image
            HttpHeaders headers = new HttpHeaders();
            headers.setContentType(MediaType.APPLICATION_JSON);
            MultiValueMap<String, String> map = new LinkedMultiValueMap<>();
            InputStream imageStream = new FileInputStream(destFile);
            map.add("images", imageToBase64(imageStream));
            HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<>(map, headers);
            RestTemplate restTemplate = new RestTemplate();
            // Send the request and parse the JSON response
            Map json = restTemplate.postForEntity("http://127.0.0.1:8868/predict/ocr_system", request, Map.class).getBody();
            System.out.println(json);
            List<List<Map>> results = (List<List<Map>>) json.get("results");
            // Re-read the uploaded image so we can draw on it later
            String tarImgPath = destFile.toString();
            File srcImgFile = new File(tarImgPath);
            System.out.println(srcImgFile);
            Image srcImg = ImageIO.read(srcImgFile);
            if (null == srcImg){
                return "nothing, finish!";
            }
            // Get the width and height of the image
            int srcImgWidth = srcImg.getWidth(null);
            int srcImgHeight = srcImg.getHeight(null);
            // Main drawing flow: create a canvas, set the brush color,
            // and draw the original image onto it
            BufferedImage bufImg = new BufferedImage(srcImgWidth, srcImgHeight, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = bufImg.createGraphics();
            g.setColor(Color.red);
            g.drawImage(srcImg, 0, 0, srcImgWidth, srcImgHeight, null);
            // Loop over every recognized text region
            for (int i = 0; i < results.get(0).size(); i++) {
                System.out.println("Recognized text: " + results.get(0).get(i).get("text"));
                System.out.println("Confidence: " + results.get(0).get(i).get("confidence"));
                List<List<Integer>> region = (List<List<Integer>>) results.get(0).get(i).get("text_region");
                System.out.println("Text coordinates: " + region);
                int x = region.get(0).get(0);
                int y = region.get(0).get(1);
                int w = region.get(1).get(0) - region.get(0).get(0);
                int h = region.get(2).get(1) - region.get(0).get(1);
                // Draw a rectangle around the text region
                g.drawRect(x, y, w, h);
            }
            // Pass the results to the front-end template engine
            model.addAttribute("z", results.get(0));
            g.dispose();
            // Write the annotated image back to disk
            FileOutputStream outImgStream = new FileOutputStream(tarImgPath);
            ImageIO.write(bufImg, "png", outImgStream);
            System.out.println("finished drawing");
            outImgStream.flush();
            outImgStream.close();
        } catch (IOException e) {
            // FileNotFoundException is a subclass of IOException,
            // so one catch block covers both
            e.printStackTrace();
            return "upload failed, " + e.getMessage();
        }
        return "OK";
    }

    private String imageToBase64(InputStream in) {
        // Read all image bytes; InputStream.available() is not a reliable
        // way to size the buffer, so read the stream to the end (Java 9+)
        byte[] data = null;
        try {
            data = in.readAllBytes();
            in.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Base64-encode the byte array; java.util.Base64 replaces the
        // non-public sun.misc.BASE64Encoder, which newer JDKs no longer expose
        return Base64.getEncoder().encodeToString(Objects.requireNonNull(data));
    }

}
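The loop above turns each text_region into an axis-aligned rectangle by taking the top-left corner as the origin and deriving the width and height from the top-right and bottom-right corners. Pulled out on its own, the computation looks like this (assuming, as the parsing code does, that the four points are ordered top-left, top-right, bottom-right, bottom-left; the class and method names here are just for illustration):

```java
// Sketch: derive an axis-aligned box (x, y, width, height) from the four
// corner points PaddleOCR returns in text_region, assumed to be ordered
// top-left, top-right, bottom-right, bottom-left.
public class TextRegionBox {

    public static int[] toBox(int[][] region) {
        int x = region[0][0];                  // left edge = top-left x
        int y = region[0][1];                  // top edge = top-left y
        int w = region[1][0] - region[0][0];   // width = top-right x - top-left x
        int h = region[2][1] - region[0][1];   // height = bottom-right y - top-left y
        return new int[]{x, y, w, h};
    }

    public static void main(String[] args) {
        int[][] region = {{10, 20}, {110, 20}, {110, 60}, {10, 60}};
        int[] box = toBox(region);
        System.out.println(box[0] + "," + box[1] + "," + box[2] + "," + box[3]);
        // prints 10,20,100,40
    }
}
```

Note that for rotated text the region is a quadrilateral rather than a rectangle, so this simple box is only an approximation of the true region.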

Then run the application and test the endpoint through Postman

Console output:

Done! From here, you can adapt the processing to your own business needs.

Tags: Java, image processing, deep learning

Posted by discostudio on Sat, 03 Dec 2022 09:11:55 +1030