As the range of applications for face recognition technology keeps expanding, how to build face recognition applications in C# on Windows using a free SDK has become a topic many developers care about.
For this requirement, I recommend ArcFace 3.0 from the ArcSoft Vision Open Platform. It is free, works offline, and is licensed for commercial use. It offers rich functionality, including face recognition, liveness detection, age detection, and gender detection. The algorithms are robust and the barrier to entry is low. It also supports Windows, iOS, Android (including Android 10), and Linux, making it a powerful tool for developers building AI applications.
To help developers get started quickly, ArcSoft's engineering team has prepared a course for C# development that works through problems via technical analysis and a dedicated Q&A. C# developers interested in trying ArcFace 3.0 are encouraged to study it in advance, so they can achieve more with less effort in actual development.
The main points of the course are summarized below. Developers interested in the full course video can find it by searching for “Arcsoft Technology Open Course” on Baidu.
1. Point 1: Run through the C# demo in 3 minutes
Based on the sample code included in the ArcSoft face recognition SDK package, the course demonstrates hands-on how to quickly integrate and use it. The integration process is covered in detail in the course video; it is recommended to try the configuration yourself after watching, to get a first feel for ArcSoft's face recognition technology.
The demo configuration process is as follows:
1. Download Demo
2. Check whether the local system environment meets the requirements:
a. .NET Framework 4.5.1 or above
b. Microsoft Visual C++ 2013 Runtime Libraries
3. Download the SDK on the Arcsoft Vision Open Platform and obtain the APPID and SDKKEY
4. Configure and run Demo:
a. Configure APPID and SDKKEY in the App.config file
b. Copy the DLL files from the SDK's lib folder into the program's running root directory
c. If the locally installed .NET Framework version is higher than 4.5.1, simply change the project's target framework to match
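For step (a) above, the demo reads the credentials from App.config. The snippet below is an illustrative sketch; the exact key names depend on the demo version you downloaded, so check the App.config shipped in the package:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Fill in the values obtained from the ArcSoft Vision Open Platform -->
    <add key="APPID" value="your-app-id-here" />
    <add key="SDKKEY" value="your-sdk-key-here" />
  </appSettings>
</configuration>
```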
2. Point 2: ArcFace key interfaces and parameters
1. Engine initialization interface: ASFInitEngine()
The engine initialization interface initializes the engine. The parameters passed at initialization define the combination of engine attributes and algorithm functions, and are closely tied to the results the algorithm can deliver.
If initialization fails, the cause can be looked up by error code. Parameter settings are the issue developers care about most, and the key to getting the best out of the algorithm in real application scenarios. The video focuses on how the following parameters are applied in practice.
[Key parameters]
・ ASF_DETECT_MODE_VIDEO (video mode): suitable for camera previews and video file recognition.
・ ASF_DETECT_MODE_IMAGE (image mode): suitable for still image recognition.
・ detectFaceScaleVal (minimum detectable face size): the face size as a proportion of the image's long side; the larger the value, the smaller the faces that can be detected. The valid range is [2, 32]; the recommended value is 16 for video mode and 32 for image mode.
・ combinedMask (combination of algorithm functions): choose a sensible combination of functions for your specific business; the more functions selected, the more memory is occupied.
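As a sketch of what initialization can look like from C# via P/Invoke: the DLL name, constant values, and signature below follow the ArcFace 3.0 C header as I understand it, and should be verified against the inc/ folder of your SDK package before use.

```csharp
using System;
using System.Runtime.InteropServices;

static class FaceEngineDemo
{
    // Constant values follow the ArcFace 3.0 header; verify against your SDK version.
    const uint ASF_DETECT_MODE_IMAGE = 0xFFFFFFFF;
    const int ASF_OP_0_ONLY       = 0x1;   // face orientation priority
    const int ASF_FACE_DETECT     = 0x1;
    const int ASF_FACERECOGNITION = 0x4;
    const int ASF_AGE             = 0x8;
    const int ASF_GENDER          = 0x10;

    [DllImport("libarcsoft_face_engine.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern int ASFInitEngine(uint detectMode, int detectFaceOrientPriority,
        int detectFaceScaleVal, int detectFaceMaxNum, int combinedMask, ref IntPtr hEngine);

    static void Main()
    {
        // IMAGE mode, scale 32 (the recommended value for images), up to 10 faces,
        // combinedMask = detection + recognition + age + gender.
        IntPtr engine = IntPtr.Zero;
        int ret = ASFInitEngine(ASF_DETECT_MODE_IMAGE, ASF_OP_0_ONLY, 32, 10,
            ASF_FACE_DETECT | ASF_FACERECOGNITION | ASF_AGE | ASF_GENDER, ref engine);
        if (ret != 0)
            Console.WriteLine($"Engine init failed, error code: {ret}");
    }
}
```

Note that every feature OR'd into combinedMask here must also be requested when later calling the corresponding detection interfaces.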
2. Face feature extraction interface: ASFFaceFeatureExtract()
[Description] Once face detection has completed, this interface extracts features for the corresponding face based on the face information obtained.
[Key parameters]
・ faceInfo: feature extraction requires a single accurate face position and angle; otherwise error code 81925 may be returned.
・ featureFeature: the face feature returned by the algorithm, consisting of a feature byte array and its length. When writing to or reading from a feature database, operate on the feature byte array.
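The declarations below sketch how the structures and the extraction interface might be mirrored in C#. The struct layouts are my reading of the SDK's C headers; field order and types must match the headers in your package exactly, or the marshaled data will be garbage.

```csharp
using System;
using System.Runtime.InteropServices;

// Layouts assumed from the ArcFace 3.0 C headers; verify before use.
[StructLayout(LayoutKind.Sequential)]
struct MRECT { public int left, top, right, bottom; }

[StructLayout(LayoutKind.Sequential)]
struct ASF_SingleFaceInfo { public MRECT faceRect; public int faceOrient; }

[StructLayout(LayoutKind.Sequential)]
struct ASF_FaceFeature { public IntPtr feature; public int featureSize; }

static class FaceApi
{
    // imgData points to the raw pixel buffer; format is the SDK's pixel
    // format constant for that buffer (see the SDK header for values).
    [DllImport("libarcsoft_face_engine.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int ASFFaceFeatureExtract(IntPtr hEngine,
        int width, int height, int format, IntPtr imgData,
        ref ASF_SingleFaceInfo faceInfo, ref ASF_FaceFeature feature);
}
```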
3. Face attribute detection: ASFProcess()
[Description] After face detection succeeds, this interface detects face attributes, such as age and gender, based on the face information.
[Key parameters]
・ combinedMask: only functions enabled at engine initialization are supported. For example, if only age and gender were specified in the mask at initialization, other attributes such as the 3D angle cannot be detected.
・ Supported attributes: ASF_AGE (age), ASF_GENDER (gender), ASF_FACE3DANGLE (3D angle), ASF_LIVENESS (RGB liveness).
・ After ASFProcess() completes, the corresponding attribute results can be retrieved through the ASFGetXXX() interfaces.
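A hedged sketch of the attribute-detection declarations, using age as the example of an ASFGetXXX() interface. The ASF_MultiFaceInfo layout is my reading of the 3.0 header (arrays of rectangles, orientations, and faceIDs plus a count); check it against your SDK before relying on it.

```csharp
using System;
using System.Runtime.InteropServices;

// Assumed layouts from the ArcFace 3.0 headers; verify before use.
[StructLayout(LayoutKind.Sequential)]
struct ASF_MultiFaceInfo
{
    public IntPtr faceRects;   // MRECT* array, one per detected face
    public IntPtr faceOrients; // int* array of orientations
    public int    faceNum;     // number of detected faces
    public IntPtr faceIDs;     // int* array of faceIds (VIDEO mode only)
}

[StructLayout(LayoutKind.Sequential)]
struct ASF_AgeInfo { public IntPtr ageArray; public int num; }

static class AttributeApi
{
    [DllImport("libarcsoft_face_engine.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int ASFProcess(IntPtr hEngine, int width, int height,
        int format, IntPtr imgData, ref ASF_MultiFaceInfo detectedFaces, int combinedMask);

    // One of the ASFGetXXX() result interfaces mentioned above.
    [DllImport("libarcsoft_face_engine.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int ASFGetAge(IntPtr hEngine, ref ASF_AgeInfo ageInfo);
}
```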
3. Point 3: Video stream recognition and liveness detection
The logic flow of typical video stream recognition and liveness detection (presented as a flow chart in the course) breaks down as follows:
・ Main thread: face tracking and screen preview.
・ FR thread: face feature extraction and feature search.
・ Liveness thread: face liveness detection.
・ faceId: identifies a tracked face. From the moment a face enters the frame until it leaves, it needs to be recognized only once, which greatly reduces system resource usage.
・ Number of attempts: if feature extraction fails for the same face, retry a limited number of times to improve the interaction.
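The faceId and retry-count ideas above can be combined into a small bookkeeping helper. This class is illustrative and not part of the SDK; it simply tracks, per faceId, whether the FR thread should attempt feature extraction again.

```csharp
using System;
using System.Collections.Generic;

// Illustrative helper (not part of the SDK): per-faceId gate ensuring a face
// is recognized at most once and extraction is retried at most N times.
public class RecognitionGate
{
    private readonly int maxAttempts;
    private readonly Dictionary<int, int> attempts = new();
    private readonly HashSet<int> recognized = new();

    public RecognitionGate(int maxAttempts = 3) => this.maxAttempts = maxAttempts;

    // True if the FR thread should try extracting features for this face.
    public bool ShouldTry(int faceId)
    {
        if (recognized.Contains(faceId)) return false;        // already done once
        attempts.TryGetValue(faceId, out int n);
        return n < maxAttempts;                               // limited retries
    }

    public void ReportResult(int faceId, bool success)
    {
        if (success) recognized.Add(faceId);
        else attempts[faceId] = attempts.GetValueOrDefault(faceId) + 1;
    }

    // Call when a face leaves the frame so its state can be reclaimed.
    public void Forget(int faceId)
    {
        attempts.Remove(faceId);
        recognized.Remove(faceId);
    }
}
```

Calling Forget() when a face leaves the frame keeps the dictionaries from growing without bound during long preview sessions.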
4. Point 4: Frequently asked questions and answers
1. Multi-threaded calls
a. A single engine can be used by multiple threads as long as they call different algorithm interfaces.
b. Multiple threads calling the same algorithm interface must each use their own engine instance.
2. Differences between VIDEO and IMAGE mode
VIDEO mode:
a. Tracks faces in the video stream; the face rectangle transitions smoothly between frames without jumping.
b. Face tracking on preview data is fast, which avoids stuttering.
c. Introduces faceId: this value marks a face and remains unchanged from the moment the face enters the frame until it leaves. It can be used in business logic to optimize performance.
IMAGE mode:
a. Higher face detection accuracy on single images.
b. When registering faces into the face database, the higher-precision IMAGE mode is recommended.
3. Working with unmanaged memory
When C# calls the C++ SDK interfaces, some parameters must be passed as IntPtr.
a. Allocate memory for the IntPtr before copying data into it;
b. When the IntPtr is no longer in use, release it manually and promptly.
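Steps (a) and (b) can be sketched with the Marshal class; the buffer size here is just an example for a BGR24 frame, and the SDK call itself is elided:

```csharp
using System;
using System.Runtime.InteropServices;

// Copy managed image bytes into unmanaged memory before an SDK call,
// then release it. try/finally guarantees the memory is freed even on error.
byte[] imageBytes = new byte[640 * 480 * 3];              // e.g. BGR24 pixel data
IntPtr pImage = Marshal.AllocHGlobal(imageBytes.Length);  // a. allocate first
try
{
    Marshal.Copy(imageBytes, 0, pImage, imageBytes.Length); // then copy in
    // ... pass pImage to the SDK interface here ...
}
finally
{
    Marshal.FreeHGlobal(pImage);                          // b. release promptly
}
```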
4. Storing the face feature database
The featureFeature returned by the face feature extraction interface corresponds to an ASF_FaceFeature structure. For database storage, convert ASF_FaceFeature.feature into a byte array and store that array; featureSize gives the byte length.
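The conversion described above might look like this; the struct layout is assumed from the SDK header, and ToStorableBytes is an illustrative helper name:

```csharp
using System;
using System.Runtime.InteropServices;

// Assumed layout from the SDK header: feature points to unmanaged bytes,
// featureSize is the byte length.
[StructLayout(LayoutKind.Sequential)]
struct ASF_FaceFeature { public IntPtr feature; public int featureSize; }

static class FeatureStorage
{
    // Copy the unmanaged feature into a managed byte[] suitable for
    // storing in a database BLOB column.
    public static byte[] ToStorableBytes(ASF_FaceFeature f)
    {
        var buffer = new byte[f.featureSize];
        Marshal.Copy(f.feature, buffer, 0, f.featureSize);
        return buffer;
    }
}
```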
5. Referencing the SDK DLL files (via DllImport)
a. Use a relative path and put the DLLs directly in the execution directory (not recommended for web applications);
b. Use absolute paths;
c. Put the DLL files in the System32 folder on the system drive;
d. Add the folder containing the DLL files to the PATH environment variable.
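A further option on Windows is the Win32 SetDllDirectory API, which adds a directory to the native DLL search path for the current process. The path below is a placeholder; call this once at startup, before any SDK function is first invoked:

```csharp
using System;
using System.Runtime.InteropServices;

static class DllPath
{
    // Win32 API: adds a directory to this process's native DLL search path.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern bool SetDllDirectory(string lpPathName);
}

// Placeholder path; point it at the folder holding the SDK DLLs.
// DllPath.SetDllDirectory(@"C:\path\to\sdk\lib");
```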