With the advent of the digital and networking era, and especially the development of broadband wireless networks, large-volume data transmission services such as audio and video have become practical over wireless networks. At the same time, owing to the unique sensory characteristics of audio and video, demand for related applications has become increasingly urgent. Wireless multimedia, the product of the fusion of multimedia and mobile communication technologies, has become a hot topic in the communications field. Because of the open-source nature of the Linux kernel, it is adopted as the operating system, giving the whole system good real-time performance and stability. The system uses an ARM11 as the core processor, adopts the new-generation video coding standard H.264 for encoding and decoding, and transmits audio and video over a wireless network. It makes full use of the Multi-Format video Codec (MFC) integrated in the S3C6410 microprocessor, which effectively improves the cost-effectiveness of the system. The system provides a good solution for wireless multimedia audio and video transmission, can be widely used in fields such as remote monitoring and video telephony, and has good practical value and application prospects.
1 Overall design of the system
The audio and video acquisition modules on each side of the communication link collect analog signals and send the captured audio and video data to the audio and video management module, where the data is compressed and sent to the other party over WiFi together with a packet header. After the other party receives the data, it performs the relevant processing, determines the audio and video frame type, and then passes the data to the decompression module to restore the audio and video. The devices on both sides of the link consist of an embedded audio and video management module and a wireless transceiver module. The wireless WiFi transceiver module operates in the 2.4 GHz band and conforms to the IEEE 802.11b wireless LAN standard.
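The paper does not give the exact packet header layout. As a minimal sketch, one might assume a header carrying a frame type, a payload length, and a capture timestamp, packed in network byte order before each compressed frame; the field names and sizes here are illustrative assumptions, not the paper's actual format.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htonl / ntohl */

/* Hypothetical 12-byte header sent before each compressed frame. */
typedef struct {
    uint32_t frame_type;  /* e.g. 0 = audio, 1 = video I-frame, 2 = video P-frame */
    uint32_t length;      /* payload length in bytes */
    uint32_t timestamp;   /* capture timestamp in ms, used for A/V sync */
} FrameHeader;

/* Serialize the header into a 12-byte buffer in network byte order. */
static void header_pack(const FrameHeader *h, uint8_t buf[12]) {
    uint32_t v;
    v = htonl(h->frame_type); memcpy(buf + 0, &v, 4);
    v = htonl(h->length);     memcpy(buf + 4, &v, 4);
    v = htonl(h->timestamp);  memcpy(buf + 8, &v, 4);
}

/* Parse a received 12-byte buffer back into a header. */
static void header_unpack(const uint8_t buf[12], FrameHeader *h) {
    uint32_t v;
    memcpy(&v, buf + 0, 4); h->frame_type = ntohl(v);
    memcpy(&v, buf + 4, 4); h->length     = ntohl(v);
    memcpy(&v, buf + 8, 4); h->timestamp  = ntohl(v);
}
```

Fixed network byte order lets the receiver determine the frame type and payload length regardless of the endianness of either endpoint.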
2 System hardware design
The system hardware design takes an ARM11 as the core microprocessor, with a main frequency of 532 MHz, which meets the requirements of real-time processing. It integrates 256 MB SDRAM, 2 GB FLASH, an audio recording and playback interface, a Camera video interface, a wireless WiFi interface, an LCD interface, and an SD card interface. At the same time, the open-source Linux 2.6.28 is used as the kernel, yaffs2 as the root file system, and Qtopia 4.4.3 as the user interface, which provides a good platform for development, debugging and system design.
2.1 Audio and video capture module
The audio path uses the IIS (Inter-IC Sound Bus) audio interface integrated in the processor together with a WM9714 audio chip. IIS is a bus standard defined by Philips for audio data transmission between digital audio devices; the standard specifies both the hardware interface and the audio data format. Based on this hardware and interface specification, the functions of audio output, Line-in input and Mic input are realized.
Video capture uses an OV9650 CMOS camera module with a resolution of up to 1.3 megapixels, which connects directly to the Camera interface of the OK6410 development board. It is suitable for high-end consumer electronics, industrial control, vehicle navigation, multimedia terminals, industrial PDAs, embedded education and training, personal learning, and similar applications. Its structure is relatively simple, and hardware drivers are provided, making it easy to use and debug.
2.2 Wireless transmission module
The wireless transmission module of this system is implemented with a WiFi module operating in the 2.4 GHz public band. It follows the IEEE 802.11b/g network standards, can be used to connect the terminal to the Internet in later development, offers a maximum data rate of 54 Mb/s, and supports both WinCE and Linux. The indoor communication distance can reach 100 m, and up to 300 m in open outdoor space. With a simple configuration of the ARM-Linux operating system, the module can be switched from the Ethernet-style infrastructure mode to the point-to-point Ad-Hoc communication mode. After the system starts, a Qt-based window makes it convenient to switch the connection mode.
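The paper does not list the configuration commands. Under the classic wireless-tools utilities on ARM-Linux, switching the module into Ad-Hoc mode for two-board communication might look like the following sketch; the interface name, ESSID, channel and addresses are assumptions:

```shell
# Bring the interface down before changing the mode (wlan0 is an assumed name)
ifconfig wlan0 down
# Switch from managed (infrastructure) mode to Ad-Hoc point-to-point mode
iwconfig wlan0 mode ad-hoc essid avlink channel 6
# Assign a static address; the peer board would use e.g. 192.168.1.2
ifconfig wlan0 192.168.1.1 netmask 255.255.255.0 up
```

Both boards must agree on the ESSID and channel; once the interface is up, sockets are programmed exactly as over Ethernet.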
Using WiFi provides good scalability: the system can connect to a WAN through a wireless router, which gives it good application prospects. At the same time, most terminal devices such as mobile phones already have WiFi, and the software can later be ported to the Android system, which is convenient for development and migration. This reduces the development cost and cycle of real-time audio and video transmission, and also provides a new audio and video communication method for modern mobile communications.
Once the WiFi driver is configured, application-layer programming is exactly the same as with the Ethernet interface. Because this design carries a large amount of audio and video data, UDP is not suitable: when the data volume is too large or the radio signal is poor, UDP loses packets severely. The connection-oriented TCP protocol is therefore selected to guarantee effective transmission of audio and video in the system. Since TCP acknowledges and retransmits data, packet loss need not be considered within the local area network, which provides a reliable guarantee for the realization of the system's functions.
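Even with TCP, the application must handle partial writes: on a streaming socket, send() may transmit fewer bytes than requested. A minimal sketch of the full-write and full-read loops such a sender and receiver would need (demonstrated on a local socket pair rather than real WiFi hardware):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Keep calling send() until the whole buffer is on the wire, or fail. */
static int send_all(int fd, const void *buf, size_t len) {
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;          /* connection error */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Matching full-read loop for the receiving side. */
static int recv_all(int fd, void *buf, size_t len) {
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;          /* peer closed or error */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}
```

Reading the fixed-size packet header first with recv_all, then reading exactly the announced payload length, reframes the TCP byte stream back into discrete audio and video frames.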
3 Software design
The software is divided into user interface design and the design of modules for data processing, transmission and related functions.
3.1 Overall design of software based on multithreading
The system software architecture is shown in Figure 1. It is a one-way audio and video stream control flow of collection, compression, transmission, reception, decompression, processing and playback. Each module runs in its own thread, and semaphores coordinate the priority among the threads, forming a loop of threads that efficiently processes the audio and video data streams. The system's functions are modularized, which makes them easy to modify and port, and keeps the code short and concise.
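The semaphore-coordinated thread loop described above can be sketched with POSIX threads: the capture stage posts a semaphore after filling a buffer slot, and the next stage waits on it before processing. The two-stage pipeline and the names here are simplifications of the architecture in Figure 1, not the paper's actual code:

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

#define FRAMES 5

static sem_t frame_ready;   /* posted by capture, waited on by compress */
static sem_t slot_free;     /* posted by compress, waited on by capture */
static int   shared_frame;  /* single-slot buffer standing in for A/V data */
static int   processed;     /* last frame index the consumer handled */

static void *capture_thread(void *arg) {
    (void)arg;
    for (int i = 1; i <= FRAMES; i++) {
        sem_wait(&slot_free);      /* wait until the buffer slot is free */
        shared_frame = i;          /* "capture" a frame into the slot */
        sem_post(&frame_ready);    /* hand it to the next stage */
    }
    return NULL;
}

static void *compress_thread(void *arg) {
    (void)arg;
    for (int i = 0; i < FRAMES; i++) {
        sem_wait(&frame_ready);    /* wait for a captured frame */
        processed = shared_frame;  /* "compress" it */
        sem_post(&slot_free);      /* release the slot back to capture */
    }
    return NULL;
}

static int run_pipeline(void) {
    pthread_t cap, cmp;
    sem_init(&frame_ready, 0, 0);
    sem_init(&slot_free, 0, 1);    /* one free slot initially */
    pthread_create(&cap, NULL, capture_thread, NULL);
    pthread_create(&cmp, NULL, compress_thread, NULL);
    pthread_join(cap, NULL);
    pthread_join(cmp, NULL);
    return processed;
}
```

The same post/wait pairing extends to the transmission, reception and playback threads, closing the loop the architecture describes.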
3.2 Echo Cancellation
In early testing the system exhibited echo and delay problems. The delay arises from the acquisition and transmission process, so it can only be shortened as much as possible; truly instantaneous playback cannot be achieved, which is one of the limitations of this system. The echo, in turn, is caused by the delay. The open-source Speex algorithm is finally used to cancel the echo. The specific method is to compile the algorithm into a library file and add it to the Linux platform, after which the Speex API functions can be used to achieve acoustic echo cancellation.
3.3 Synchronization of embedded audio and video
The basic idea of this paper is to treat the video stream as the main media stream and the audio stream as the secondary one: the video playback rate remains unchanged, the actual time is determined from the local system clock, and audio and video synchronization is achieved by adjusting the audio playback speed.
First a local system clock reference (LSCR) is selected and fed to both the video decoder and the audio decoder. Each decoder compares the PTS value of each frame against the local system clock reference to generate an accurate display or playback time for that frame. That is, when the output data stream is generated, each data block is timestamped according to the local reference clock (generally including a start time and an end time); during playback, the timestamp on the block is read and playback is scheduled according to the local system clock reference.
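The comparison above amounts to computing the drift between a frame's PTS and the local system clock reference and adjusting the audio playback rate accordingly. A minimal sketch of that decision, in which the tolerance window and rate factors are illustrative assumptions rather than values from the paper:

```c
#include <assert.h>
#include <stdint.h>

/* Drift between a frame's PTS and the local system clock reference (ms).
 * Positive: the frame is early relative to the reference. */
static int64_t av_drift_ms(int64_t pts_ms, int64_t lscr_ms) {
    return pts_ms - lscr_ms;
}

/* Pick an audio playback-rate factor from the drift: slow the audio down
 * when it runs ahead of the reference, speed it up when it lags.
 * 1000 == nominal rate, expressed in per-mille to avoid floating point. */
static int audio_rate_permille(int64_t drift_ms) {
    const int64_t threshold = 40;            /* assumed tolerance window, ms */
    if (drift_ms > threshold)  return 950;   /* audio early: play slower */
    if (drift_ms < -threshold) return 1050;  /* audio late: play faster */
    return 1000;                             /* within tolerance: nominal */
}
```

Because video playback stays at its nominal rate, only the audio side consults this adjustment, which matches the main-stream/secondary-stream scheme described above.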
The audio and video synchronization data flow of the whole system is shown in Figure 2.
4 Audio and video channel management
In order to save memory resources and simplify channel management, this design adopts per-channel thread management, with the audio and video tasks each completed by their own channel.
Audio and video capture are handled in the same thread using the select system call. Each time this thread runs, it checks whether the audio or video device is ready; if so, the data is captured into the audio/video buffer and handed to the capture-and-compression thread, and finally to the sending thread, where it is packetized and sent over TCP. Note that semaphores are used between the threads to complete the synchronization management of this TCP-based audio and video software architecture. After sending, the system enters the receiving thread and waits for the other party to send its audio and video data. When the receiving thread receives data, it examines the packet header, hands the payload to the decompression thread for processing, plays the audio and video, and then waits for the other party's next transmission.
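The select-based readiness check described above can be sketched as follows; a pipe stands in for the audio/video device file descriptors, since the real /dev nodes are hardware-specific:

```c
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

/* Wait until fd is readable (the "device has data ready" check), then
 * read up to cap bytes into buf. Returns bytes read, or -1 on timeout
 * or error. */
static int capture_when_ready(int fd, char *buf, int cap, int timeout_ms) {
    fd_set rfds;
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    int r = select(fd + 1, &rfds, NULL, NULL, &tv);
    if (r <= 0)
        return -1;                  /* timeout or error: device not ready */
    return (int)read(fd, buf, cap); /* device ready: grab the data */
}
```

In the real capture thread, both the audio and video descriptors would be added to the same fd_set so one select call multiplexes the two devices.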
Thanks to the processor's high speed and the efficient H.264 hardware decompression, the real-time performance of the entire system basically meets the requirements. The embedded audio and video management module realizes overall control and real-time processing of the entire system, providing a reliable guarantee for audio and video data management.
At present, video surveillance products based on embedded wireless terminals are favored for their advantages of no wiring, long transmission distance, strong environmental adaptability, stable performance and convenient communication, and play an irreplaceable role in many applications. This system is a handheld terminal for wireless audio and video communication based on embedded Linux; it is small and easy to carry. A lithium battery supplies the whole system through a step-down switching power supply chip, whose efficiency is greatly improved compared with a traditional linear DC regulator. It can be used in outdoor visual entertainment, construction-site monitoring, large-scale security liaison and other occasions, and has broad application prospects.