5 Answers

Contributed 1946 experience points · earned 4+ upvotes
In this example, I will use the C++ OpenCV library and Visual Studio 2017. I will try to capture the ARCore camera image, move it to OpenCV (as efficiently as possible), convert it to the RGB color space, then move it back to the Unity C# code and save it to the phone's storage.
First, we have to create a C++ dynamic library project to use with OpenCV. For this, I strongly recommend following Pierre Baret's and Ninjaman494's answers to this question: OpenCV + Android + Unity. The process is fairly straightforward, and if you don't deviate too much from their answers (i.e. you can safely download a newer OpenCV than version 3.3.1, but be careful when compiling for ARM64 instead of ARM, etc.), you should be able to call a C++ function from C#.
In my experience, I had to solve two problems. First, if you make the project part of your C# solution instead of creating a new solution, Visual Studio will keep messing with your configuration, e.g. trying to compile an x86 version instead of an ARM version. To save yourself the hassle, create a completely separate solution. The other problem was that some functions failed to link for me, throwing an undefined reference linker error (undefined reference to 'cv::error(int, std::string const&, char const*, char const*, int)', to be exact). If this happens and the problem is with a function you don't really need, just recreate the function in your own code. For example, if you have problems with cv::error, add this code to the end of your .cpp file:
namespace cv {
    __noreturn void error(int a, const String & b, const char * c, const char * d, int e) {
        throw std::string(b);
    }
}
Of course, this is an ugly and dirty way to do things, so if you know how to fix the linker error, please do so and let me know.
Now you should have working C++ code that compiles and can be run from a Unity Android application. However, we want OpenCV to convert an image, not return a number. So change your code to this:
.h file
extern "C" {
namespace YOUR_OWN_NAMESPACE
{
int ConvertYUV2RGBA(unsigned char *, unsigned char *, int, int);
}
}
.cpp file
extern "C" {
int YOUR_OWN_NAMESPACE::ConvertYUV2RGBA(unsigned char * inputPtr, unsigned char * outputPtr, int width, int height) {
// Create Mat objects for the YUV and RGB images. For YUV, we need a
// height*1.5 x width image, that has one 8-bit channel. We can also tell
// OpenCV to have this Mat object "encapsulate" an existing array,
// which is inputPtr.
// For RGB image, we need a height x width image, that has three 8-bit
// channels. Again, we tell OpenCV to encapsulate the outputPtr array.
// Thanks to specifying existing arrays as data sources, no copying
// or memory allocation has to be done, and the process is highly
// effective.
cv::Mat input_image(height + height / 2, width, CV_8UC1, inputPtr);
cv::Mat output_image(height, width, CV_8UC3, outputPtr);
// If any of the images has not loaded, return 1 to signal an error.
if (input_image.empty() || output_image.empty()) {
return 1;
}
// Convert the image. Now you might have seen people telling you to use
// NV21 or 420sp instead of NV12, and BGR instead of RGB. I do not
// understand why, but this was the correct conversion for me.
// If you have any problems with the color in the output image,
// they are probably caused by incorrect conversion. In that case,
// I can only recommend you the trial and error method.
cv::cvtColor(input_image, output_image, cv::COLOR_YUV2RGB_NV12);
// Now that the result is safely saved in outputPtr, we can return 0.
return 0;
}
}
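One extra note on the conversion constant: if the output colors look wrong (as the comment above warns), these are the usual variants to swap in for cv::COLOR_YUV2RGB_NV12. Which one is correct seems to depend on the device, so treat this as a trial-and-error list rather than a definitive answer:

cv::cvtColor(input_image, output_image, cv::COLOR_YUV2RGB_NV21);
cv::cvtColor(input_image, output_image, cv::COLOR_YUV2BGR_NV12);
cv::cvtColor(input_image, output_image, cv::COLOR_YUV2BGR_NV21);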
Now, rebuild the solution (Ctrl + Shift + B) and copy the libProjectName.so file to Unity's Plugins/Android folder, as shown in the linked answer.
The next step is to save the image from ARCore, move it to the C++ code, and get it back. Let's add this inside the class of our C# script:
[DllImport("YOUR_OWN_NAMESPACE")]
public static extern int ConvertYUV2RGBA(IntPtr input, IntPtr output, int width, int height);
Visual Studio will prompt you to add a using clause for System.Runtime.InteropServices; do so. This allows us to use the C++ function in the C# code. Now, let's add this function to our C# component:
public Texture2D CameraToTexture()
{
    // Create the object for the result - this has to be done before the
    // using {} clause.
    Texture2D result;
    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }
        // To save a YUV_420_888 image, you need 1.5*pixelCount bytes.
        // I will explain later, why.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
        // As CameraImageBytes keep the Y, U and V data in three separate
        // arrays, we need to put them in a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }
            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }
        // Create the output byte array. RGB is three channels, therefore
        // we need 3 times the pixel count
        byte[] RGBimage = new byte[camBytes.Width * camBytes.Height * 3];
        // GCHandles help us "pin" the arrays in the memory, so that we can
        // pass them to the C++ code.
        GCHandle YUVhandle = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        GCHandle RGBhandle = GCHandle.Alloc(RGBimage, GCHandleType.Pinned);
        // Call the C++ function that we created.
        int k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(), RGBhandle.AddrOfPinnedObject(), camBytes.Width, camBytes.Height);
        // The handles are no longer needed once the conversion is done,
        // so free them to unpin the arrays (see Addendum 2).
        YUVhandle.Free();
        RGBhandle.Free();
        // If OpenCV conversion failed, return null
        if (k != 0)
        {
            Debug.LogWarning("Color conversion - k != 0");
            return null;
        }
        // Create a new texture object
        result = new Texture2D(camBytes.Width, camBytes.Height, TextureFormat.RGB24, false);
        // Load the RGB array to the texture, send it to GPU
        result.LoadRawTextureData(RGBimage);
        result.Apply();
        // Save the texture as a PNG file. End the using {} clause to
        // dispose of the CameraImageBytes.
        File.WriteAllBytes(Application.persistentDataPath + "/tex.png", result.EncodeToPNG());
    }
    // Return the texture.
    return result;
}
To be able to run the unsafe code, you also need to allow it in Unity. Go to the Player settings (Edit > Project Settings > Player Settings) and check the Allow unsafe code checkbox.
Now you can call the CameraToTexture() function, say, every 5 seconds from Update(), and the camera image should be saved as /Android/data/YOUR_APPLICATION_PACKAGE/files/tex.png. The image will probably be landscape even if you hold the phone in portrait mode, but that is not so hard to fix anymore. Also, you may notice a freeze every time an image is saved, so I recommend calling this function in a separate thread. The most demanding operation here is saving the image as a PNG file, so if you need the image for any other reason you should be fine (still use a separate thread, though).
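For illustration, here is a minimal sketch of such a periodic capture (the class name CaptureTimer is mine, and it assumes CameraToTexture() is defined on the same component). Note that ARCore and Texture2D calls must stay on Unity's main thread, so only the PNG encoding and file writing are good candidates for the separate thread mentioned above:

using UnityEngine;

public class CaptureTimer : MonoBehaviour
{
    private float m_Timer = 0f;

    void Update()
    {
        m_Timer += Time.deltaTime;
        if (m_Timer >= 5f)
        {
            m_Timer = 0f;
            // Capture and save one frame (blocks the main thread while
            // the PNG is written, as the answer above points out).
            Texture2D tex = CameraToTexture();
            if (tex != null)
                Destroy(tex); // avoid leaking one texture per capture
        }
    }
}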
If you want to understand the YUV_420_888 format, why you need a 1.5*pixelCount array, and why we modified the arrays the way we did, read https://wiki.videolan.org/YUV/#NV12. Other websites seem to have incorrect information about how this format works.
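To make the 1.5*pixelCount figure concrete, here is the arithmetic for a hypothetical 640x480 frame in the NV12 layout described on that page:

int width = 640, height = 480;
int ySize  = width * height;      // Y plane: one byte per pixel = 307200
int uvSize = width * height / 2;  // interleaved U/V: one U and one V per 2x2 pixel block = 153600
int total  = ySize + uvSize;      // 460800 = 1.5 * 640 * 480

This is also why the C++ code builds the input Mat as (height + height/2) x width.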
Also, if you have any questions, feel free to leave me a comment and I will try my best to help with them, as well as with any feedback on the code and this answer.
Addendum 1: Per https://docs.unity3d.com/ScriptReference/Texture2D.LoadRawTextureData.html, you should use GetRawTextureData instead of LoadRawTextureData to prevent copying. To do this, just pin the array returned by GetRawTextureData instead of the RGBimage array (which you can remove). Also, don't forget to call result.Apply(); afterwards.
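A hedged sketch of what Addendum 1 could look like inside CameraToTexture(), assuming a Unity version where GetRawTextureData<byte>() returns a NativeArray pointing directly at the texture's own buffer (variable names are mine):

using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;

// Create the texture first, then let the C++ code write straight into
// its raw buffer, so no separate RGBimage array and no extra copy.
result = new Texture2D(camBytes.Width, camBytes.Height, TextureFormat.RGB24, false);
NativeArray<byte> raw = result.GetRawTextureData<byte>();
int k;
unsafe
{
    k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(), (IntPtr)raw.GetUnsafePtr(), camBytes.Width, camBytes.Height);
}
result.Apply(); // upload the filled buffer to the GPU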
Addendum 2: Don't forget to call Free() on both GCHandles when you are done using them.
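A defensive pattern for this (a sketch, not the author's code): wrap the native call in try/finally so the handles are freed even if something throws:

GCHandle YUVhandle = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
GCHandle RGBhandle = GCHandle.Alloc(RGBimage, GCHandleType.Pinned);
int k;
try
{
    k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(), RGBhandle.AddrOfPinnedObject(), camBytes.Width, camBytes.Height);
}
finally
{
    // Unpin the arrays no matter what happened above.
    YUVhandle.Free();
    RGBhandle.Free();
}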

Contributed 1864 experience points · earned 2+ upvotes
Here is an implementation using only the free plugin OpenCV Plus Unity. The setup is very simple if you are familiar with OpenCV, and the documentation is great.
This implementation uses OpenCV to rotate the images correctly, stores them in memory, and saves them to files when you exit the application. I tried to strip all the Unity aspects from the code so that the function GetCameraImage() can run on a separate thread.
I can confirm it works on Android (GS7), and I assume it works universally.
using System;
using System.Collections.Generic;
using System.IO; // needed for StreamWriter/File in ExportImages() below
using GoogleARCore;
using UnityEngine;
using OpenCvSharp;
using System.Runtime.InteropServices;

public class CamImage : MonoBehaviour
{
    public static List<Mat> AllData = new List<Mat>();

    public static void GetCameraImage()
    {
        // Use using to make sure that C# disposes of the CameraImageBytes afterwards
        using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
        {
            // If acquiring failed, return
            if (!camBytes.IsAvailable)
            {
                return;
            }
            // To save a YUV_420_888 image, you need 1.5*pixelCount bytes.
            byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
            // As CameraImageBytes keep the Y, U and V data in three separate
            // arrays, we need to put them in a single array. This is done using
            // native pointers, which are considered unsafe in C#.
            unsafe
            {
                for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
                {
                    YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
                }
                for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
                {
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                }
            }
            // GCHandles help us "pin" the array in memory, so that we can
            // hand its address to OpenCV.
            GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
            IntPtr pointerYUV = pinnedArray.AddrOfPinnedObject();
            Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, MatType.CV_8UC1, pointerYUV);
            Mat output = new Mat(camBytes.Height, camBytes.Width, MatType.CV_8UC3);
            Cv2.CvtColor(input, output, ColorConversionCodes.YUV2BGR_NV12); // or YUV2RGB_NV12
            // FLIP AND TRANSPOSE TO VERTICAL
            Cv2.Transpose(output, output);
            Cv2.Flip(output, output, FlipMode.Y);
            AllData.Add(output);
            pinnedArray.Free();
        }
    }
}
Then I call ExportImages() when exiting the program to save the images to files.
private void ExportImages()
{
    // Write camera intrinsics to a text file
    var path = Application.persistentDataPath;
    StreamWriter sr = new StreamWriter(path + @"/intrinsics.txt");
    sr.WriteLine(CameraIntrinsicsOutput.text);
    Debug.Log(CameraIntrinsicsOutput.text);
    sr.Close();
    // Loop through the Mat list, load each into a texture and save it.
    for (var i = 0; i < CamImage.AllData.Count; i++)
    {
        Mat imOut = CamImage.AllData[i];
        Texture2D result = Unity.MatToTexture(imOut);
        result.Apply();
        byte[] im = result.EncodeToJPG(100);
        string fileName = "/IMG" + i + ".jpg";
        File.WriteAllBytes(path + fileName, im);
        string message = "Successfully saved image to " + path + "\n";
        Debug.Log(message);
        Destroy(result);
    }
}
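For completeness, a minimal sketch of the wiring described above; the answer only says ExportImages() is called on exit, so the choice of OnApplicationQuit() as the hook is mine:

void OnApplicationQuit()
{
    ExportImages();
}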

Contributed 1797 experience points · earned 6+ upvotes
I figured out how to get the full-resolution CPU image in ARCore 1.8.
I can now get the camera's full resolution with cameraimagebytes.
Put this in your class variables:
private ARCoreSession.OnChooseCameraConfigurationDelegate m_OnChoseCameraConfiguration = null;
Put this in Start():
m_OnChoseCameraConfiguration = _ChooseCameraConfiguration;
ARSessionManager.RegisterChooseCameraConfigurationCallback(m_OnChoseCameraConfiguration);
ARSessionManager.enabled = false;
ARSessionManager.enabled = true;
Add this callback to the class:
private int _ChooseCameraConfiguration(List<CameraConfig> supportedConfigurations)
{
    return supportedConfigurations.Count - 1;
}
Once you have added these, cameraimagebytes should return the full resolution of the camera.
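Put together, a minimal sketch of the whole component; the class name and the assumption that ARSessionManager is an Inspector-assigned reference to the scene's ARCoreSession component are mine:

using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class FullResCameraConfig : MonoBehaviour
{
    // Assumed to be the ARCoreSession component from the scene,
    // assigned in the Inspector.
    public ARCoreSession ARSessionManager;

    private ARCoreSession.OnChooseCameraConfigurationDelegate m_OnChoseCameraConfiguration = null;

    void Start()
    {
        m_OnChoseCameraConfiguration = _ChooseCameraConfiguration;
        ARSessionManager.RegisterChooseCameraConfigurationCallback(m_OnChoseCameraConfiguration);
        // Toggle the session so the new callback takes effect.
        ARSessionManager.enabled = false;
        ARSessionManager.enabled = true;
    }

    // Per the answer above, the last entry in the list is the
    // full-resolution camera configuration.
    private int _ChooseCameraConfiguration(List<CameraConfig> supportedConfigurations)
    {
        return supportedConfigurations.Count - 1;
    }
}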

Contributed 1898 experience points · earned 8+ upvotes
For everyone who wants to try this with OpencvForUnity:
public Mat getCameraImage()
{
    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }
        // To save a YUV_420_888 image, you need 1.5*pixelCount bytes.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
        // As CameraImageBytes keep the Y, U and V data in three separate
        // arrays, we need to put them in a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }
            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }
        // Pin the YUV array so its address can be copied into a Mat.
        GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        IntPtr pointer = pinnedArray.AddrOfPinnedObject();
        Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, CvType.CV_8UC1);
        Mat output = new Mat(camBytes.Height, camBytes.Width, CvType.CV_8UC3);
        Utils.copyToMat(pointer, input);
        Imgproc.cvtColor(input, output, Imgproc.COLOR_YUV2RGB_NV12);
        pinnedArray.Free();
        return output;
    }
}
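A possible way to display the returned Mat, sketched under the assumption that OpenCVForUnity's Utils.matToTexture2D is available and that an RGBA32 texture is the safest format choice for it:

Mat rgb = getCameraImage();
if (rgb != null)
{
    // matToTexture2D needs a texture of the same size as the Mat.
    Mat rgba = new Mat();
    Imgproc.cvtColor(rgb, rgba, Imgproc.COLOR_RGB2RGBA);
    Texture2D tex = new Texture2D(rgba.cols(), rgba.rows(), TextureFormat.RGBA32, false);
    Utils.matToTexture2D(rgba, tex); // copies the Mat's pixels into the texture
    GetComponent<Renderer>().material.mainTexture = tex; // assumes this object has a Renderer
}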

Contributed 1840 experience points · earned 5+ upvotes
It looks like you have already solved the problem.
但對于任何想要將 AR 與手勢識別和跟蹤相結合的人,請嘗試 Manomotion:https ://www.manomotion.com/
The SDK is free and worked perfectly as of 12/2020.
Use the SDK Community Edition and download the ARFoundation version.