The need for video stabilization spans many fields. It is extremely important in consumer and professional videography, so many different mechanical, optical, and algorithmic solutions exist. Even in still photography, stabilization helps when shooting long-exposure images. In medical diagnostic applications such as endoscopy and colonoscopy, video must be stabilized to determine the exact location and extent of a problem. Similarly, in military applications, video captured by aircraft on reconnaissance flights needs to be stabilized for localization, navigation, target tracking, and so on. The same applies to robotics.
As the figure above shows, under the Euclidean motion model a square in an image can transform into any other square with a different location, size, or rotation. It is more restrictive than the affine and homography transforms, but it is adequate for motion stabilization, because the camera movement between consecutive frames of a video is usually small.
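Concretely, the Euclidean model used here has three parameters: a translation (dx, dy) and a rotation angle da. A point (x, y) in the previous frame maps to the current frame as:

x' = x*cos(da) - y*sin(da) + dx
y' = x*sin(da) + y*cos(da) + dy

This is exactly the matrix that getTransform in the TransformParam struct below reconstructs.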
/**
 * @brief Per-frame motion parameters
*
*/
struct TransformParam
{
TransformParam() {}
//x translation, y translation, rotation angle
TransformParam(double _dx, double _dy, double _da)
{
dx = _dx;
dy = _dy;
da = _da;
}
double dx;
double dy;
// angle
double da;
void getTransform(Mat &T)
{
// Reconstruct the transformation matrix from the current parameter values
T.at<double>(0, 0) = cos(da);
T.at<double>(0, 1) = -sin(da);
T.at<double>(1, 0) = sin(da);
T.at<double>(1, 1) = cos(da);
T.at<double>(0, 2) = dx;
T.at<double>(1, 2) = dy;
}
};
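One detail worth noting: getTransform writes into T via Mat::at and never allocates it, so the caller must pass a preallocated 2x3 CV_64F matrix (the main function below does exactly that). A minimal usage sketch with made-up parameter values:

Mat T(2, 3, CV_64F);
TransformParam tp(5.0, -3.0, 0.01); // hypothetical dx, dy, da values
tp.getTransform(T); // T now holds the 2x3 Euclidean transform matrix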
We loop over the frames and run all of the inter-frame motion estimation code from Section 2.1. C++ code:
//previous transformation matrix
Mat last_T;
//loop over all frames of the video, starting from the second
for (int i = 1; i < n_frames; i++)
{
// Vectors of feature points in the previous and current frames
vector<Point2f> prev_pts, curr_pts;
// Detect features in the previous frame
//arguments: previous gray frame, output corner vector, max corners, corner quality level, min distance between corners
goodFeaturesToTrack(prev_gray, prev_pts, 200, 0.01, 30);
// Read the next frame
bool success = cap.read(curr);
if (!success)
{
break;
}
// Convert the current frame to grayscale
cvtColor(curr, curr_gray, COLOR_BGR2GRAY);
// Calculate optical flow (i.e. track feature points)
//output status vector (uchar): 1 if the corresponding feature from the previous frame was found in the current frame, 0 otherwise
vector<uchar> status;
//output error vector
vector<float> err;
//Lucas-Kanade optical flow tracking
//arguments: previous gray image, current gray image, previous points, current points, status, error
calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, curr_pts, status, err);
// Keep only the points that were successfully tracked
//walk both point vectors in lockstep
auto prev_it = prev_pts.begin();
auto curr_it = curr_pts.begin();
for (size_t k = 0; k < status.size(); k++)
{
if (status[k])
{
prev_it++;
curr_it++;
}
//remove points that failed to track
else
{
prev_it = prev_pts.erase(prev_it);
curr_it = curr_pts.erase(curr_it);
}
}
// Find the transformation matrix
//false requests a constrained (partial) affine: rotation, translation, and uniform scale; true would estimate a full affine. T is the resulting 2x3 matrix
Mat T = estimateRigidTransform(prev_pts, curr_pts, false);
// In rare cases no transform is found.
// We'll just use the last known good transform.
//(if even the very first estimate fails, last_T is still empty and the code breaks, but that is very unlikely)
if (T.data == NULL)
{
last_T.copyTo(T);
}
T.copyTo(last_T);
// Extract translation
double dx = T.at<double>(0, 2);
double dy = T.at<double>(1, 2);
// Extract rotation angle
double da = atan2(T.at<double>(1, 0), T.at<double>(0, 0));
// Store the transform parameters
transforms.push_back(TransformParam(dx, dy, da));
// Move to the next frame (prepare for the next iteration)
curr_gray.copyTo(prev_gray);
cout << "Frame: " << i << "/" << n_frames << " - Tracked points : " << prev_pts.size() << endl;
}
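A portability note: estimateRigidTransform comes from the OpenCV 2.x/3.x API and was removed in OpenCV 4. If you are building against OpenCV 4, the closest replacement is estimateAffinePartial2D; a sketch under that assumption is below (it estimates the same rotation + translation + uniform scale model, uses RANSAC internally, and also returns an empty matrix on failure, so the same last_T fallback applies):

// OpenCV >= 3.2; the only option on OpenCV 4
Mat T = estimateAffinePartial2D(prev_pts, curr_pts);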
/**
 * @brief Accumulated trajectory (x, y, angle)
*
*/
struct Trajectory
{
Trajectory() {}
Trajectory(double _x, double _y, double _a)
{
x = _x;
y = _y;
a = _a;
}
double x;
double y;
// angle
double a;
};
We also define a function cumsum that takes a vector of TransformParam structs and returns the trajectory as the cumulative sum of dx, dy, and da (angle). C++ code:
/**
 * @brief Accumulate per-frame transforms into a trajectory
 *
 * @param transforms per-frame motion parameters
 * @return vector<Trajectory> accumulated trajectory
*/
vector<Trajectory> cumsum(vector<TransformParam> &transforms)
{
// trajectory at all frames
vector<Trajectory> trajectory;
// Accumulated frame-to-frame transform (running sums of x, y, and angle a)
double a = 0;
double x = 0;
double y = 0;
//accumulate
for (size_t i = 0; i < transforms.size(); i++)
{
x += transforms[i].dx;
y += transforms[i].dy;
a += transforms[i].da;
trajectory.push_back(Trajectory(x, y, a));
}
return trajectory;
}
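As a quick worked example with hypothetical numbers: per-frame motions dx = 1.0, 2.0, -1.0 accumulate to trajectory x values 1.0, 3.0, 2.0; each entry is the camera's total displacement since the first frame.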
A moving average filter replaces the value of a curve at each point with the average of the values in a small window around it. The figure below shows an example: the noisy curve on the left is smoothed with a moving average filter of size 5, producing the curve on the right.
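In symbols, for a window radius r (window size 2r + 1), the smoothed value at index k is the box-filter average, which is what the smooth function below computes:

smoothed[k] = (c[k-r] + ... + c[k] + ... + c[k+r]) / (2r + 1)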
In the C++ version, we define a function named smooth that computes the moving-average-smoothed trajectory. Near the start and end of the sequence the window simply shrinks to the samples that exist (tracked by count), whereas the Python version later pads the curve at its edges instead.
/**
 * @brief Smooth the motion trajectory with a moving average filter
 *
 * @param trajectory raw trajectory
 * @param radius window radius (window size is 2 * radius + 1)
 * @return vector<Trajectory> smoothed trajectory
*/
vector<Trajectory> smooth(vector<Trajectory> &trajectory, int radius)
{
//smoothed trajectory
vector<Trajectory> smoothed_trajectory;
//slide the averaging window over the trajectory
for (size_t i = 0; i < trajectory.size(); i++)
{
double sum_x = 0;
double sum_y = 0;
double sum_a = 0;
int count = 0;
for (int j = -radius; j <= radius; j++)
{
// i is size_t, so i + j would wrap around for negative j; use a signed index
int idx = int(i) + j;
if (idx >= 0 && idx < int(trajectory.size()))
{
sum_x += trajectory[idx].x;
sum_y += trajectory[idx].y;
sum_a += trajectory[idx].a;
count++;
}
}
double avg_a = sum_a / count;
double avg_x = sum_x / count;
double avg_y = sum_y / count;
smoothed_trajectory.push_back(Trajectory(avg_x, avg_y, avg_a));
}
return smoothed_trajectory;
}
//smoothed per-frame motion parameters
vector<TransformParam> transforms_smooth;
//walk over the original motion parameters
for (size_t i = 0; i < transforms.size(); i++)
{
// Calculate the difference between the smoothed and original trajectories
double diff_x = smoothed_trajectory[i].x - trajectory[i].x;
double diff_y = smoothed_trajectory[i].y - trajectory[i].y;
double diff_a = smoothed_trajectory[i].a - trajectory[i].a;
// Calculate the new (smoothed) per-frame transform
double dx = transforms[i].dx + diff_x;
double dy = transforms[i].dy + diff_y;
double da = transforms[i].da + diff_a;
transforms_smooth.push_back(TransformParam(dx, dy, da));
}
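As a sanity check with hypothetical numbers: if at frame i the accumulated x trajectory is 10.0 but its smoothed value is 8.5, then diff_x = 8.5 - 10.0 = -1.5, so that frame's dx is reduced by 1.5. Applying all of the adjusted transforms reproduces the smoothed trajectory exactly.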
Given the smoothed motion (dx, dy, da) of a frame, the corresponding transformation matrix is:

T = [ cos(da)  -sin(da)  dx ]
    [ sin(da)   cos(da)  dy ]
When we stabilize a video, we may see some black borders. This is expected: to stabilize a frame, its content may have to shift or shrink within the fixed frame size (the frame dimensions themselves never change), and the regions left without pixel data are filled with black. We can mitigate the problem by scaling the video slightly (e.g. 4%) about its center. The function fixBorder below shows the implementation. We use getRotationMatrix2D because it can scale and rotate an image without moving its center; all we need to do is call it with rotation 0 and scale 1.04 (enlarge the image by 4%, then crop a center region of the original size). C++ code:
/**
 * @brief Zoom the frame slightly about its center to hide stabilization borders
 *
 * @param frame_stabilized stabilized frame, modified in place
*/
void fixBorder(Mat &frame_stabilized)
{
//scale up by 4% about the center; warpAffine keeps the original size, cropping the border
Mat T = getRotationMatrix2D(Point2f(frame_stabilized.cols / 2, frame_stabilized.rows / 2), 0, 1.04);
//apply the affine warp
warpAffine(frame_stabilized, frame_stabilized, T, frame_stabilized.size());
}
// video_stabilization.cpp : This file contains the "main" function. Program execution begins and ends here.
//
#include "pch.h"
#include <opencv2/opencv.hpp>
#include <iostream>
#include <cassert>
#include <cmath>
#include <fstream>
using namespace std;
using namespace cv;
// In frames. The larger, the more stable the video, but the less reactive to sudden panning (moving-average window radius)
const int SMOOTHING_RADIUS = 50;
/**
 * @brief Per-frame motion parameters
*
*/
struct TransformParam
{
TransformParam() {}
//x translation, y translation, rotation angle
TransformParam(double _dx, double _dy, double _da)
{
dx = _dx;
dy = _dy;
da = _da;
}
double dx;
double dy;
// angle
double da;
void getTransform(Mat &T)
{
// Reconstruct the transformation matrix from the current parameter values
T.at<double>(0, 0) = cos(da);
T.at<double>(0, 1) = -sin(da);
T.at<double>(1, 0) = sin(da);
T.at<double>(1, 1) = cos(da);
T.at<double>(0, 2) = dx;
T.at<double>(1, 2) = dy;
}
};
/**
 * @brief Accumulated trajectory (x, y, angle)
*
*/
struct Trajectory
{
Trajectory() {}
Trajectory(double _x, double _y, double _a)
{
x = _x;
y = _y;
a = _a;
}
double x;
double y;
// angle
double a;
};
/**
 * @brief Accumulate per-frame transforms into a trajectory
 *
 * @param transforms per-frame motion parameters
 * @return vector<Trajectory> accumulated trajectory
*/
vector<Trajectory> cumsum(vector<TransformParam> &transforms)
{
// trajectory at all frames
vector<Trajectory> trajectory;
// Accumulated frame-to-frame transform (running sums of x, y, and angle a)
double a = 0;
double x = 0;
double y = 0;
//accumulate
for (size_t i = 0; i < transforms.size(); i++)
{
x += transforms[i].dx;
y += transforms[i].dy;
a += transforms[i].da;
trajectory.push_back(Trajectory(x, y, a));
}
return trajectory;
}
/**
 * @brief Smooth the motion trajectory with a moving average filter
 *
 * @param trajectory raw trajectory
 * @param radius window radius (window size is 2 * radius + 1)
 * @return vector<Trajectory> smoothed trajectory
*/
vector<Trajectory> smooth(vector<Trajectory> &trajectory, int radius)
{
//smoothed trajectory
vector<Trajectory> smoothed_trajectory;
//slide the averaging window over the trajectory
for (size_t i = 0; i < trajectory.size(); i++)
{
double sum_x = 0;
double sum_y = 0;
double sum_a = 0;
int count = 0;
for (int j = -radius; j <= radius; j++)
{
// i is size_t, so i + j would wrap around for negative j; use a signed index
int idx = int(i) + j;
if (idx >= 0 && idx < int(trajectory.size()))
{
sum_x += trajectory[idx].x;
sum_y += trajectory[idx].y;
sum_a += trajectory[idx].a;
count++;
}
}
double avg_a = sum_a / count;
double avg_x = sum_x / count;
double avg_y = sum_y / count;
smoothed_trajectory.push_back(Trajectory(avg_x, avg_y, avg_a));
}
return smoothed_trajectory;
}
/**
 * @brief Zoom the frame slightly about its center to hide stabilization borders
 *
 * @param frame_stabilized stabilized frame, modified in place
*/
void fixBorder(Mat &frame_stabilized)
{
//scale up by 4% about the center; warpAffine keeps the original size, cropping the border
Mat T = getRotationMatrix2D(Point2f(frame_stabilized.cols / 2, frame_stabilized.rows / 2), 0, 1.04);
//apply the affine warp
warpAffine(frame_stabilized, frame_stabilized, T, frame_stabilized.size());
}
int main(int argc, char **argv)
{
// Read input video
VideoCapture cap("./video/detect.mp4");
// Get frame count
int n_frames = int(cap.get(CAP_PROP_FRAME_COUNT));
// Our test video cannot be read reliably past frame 1300, so cap the frame count
n_frames = 1300;
// Get width and height of the video stream
int w = int(cap.get(CAP_PROP_FRAME_WIDTH));
int h = int(cap.get(CAP_PROP_FRAME_HEIGHT));
// Get frames per second (fps)
double fps = cap.get(CV_CAP_PROP_FPS);
// Set up output video (original and stabilized side by side, hence 2 * w)
VideoWriter out("video_out.avi", CV_FOURCC('M', 'J', 'P', 'G'), fps, Size(2 * w, h));
// Define variables for storing frames
//current frame (BGR) and its grayscale version
Mat curr, curr_gray;
//previous frame (BGR) and its grayscale version
Mat prev, prev_gray;
// Read the first frame
cap >> prev;
// Convert the frame to grayscale
cvtColor(prev, prev_gray, COLOR_BGR2GRAY);
// Pre-define the array that stores the per-frame transform parameters
vector<TransformParam> transforms;
//previous transformation matrix
Mat last_T;
//loop over all frames of the video, starting from the second
for (int i = 1; i < n_frames; i++)
{
// Vectors of feature points in the previous and current frames
vector<Point2f> prev_pts, curr_pts;
// Detect features in the previous frame
//arguments: previous gray frame, output corner vector, max corners, corner quality level, min distance between corners
goodFeaturesToTrack(prev_gray, prev_pts, 200, 0.01, 30);
// Read the next frame
bool success = cap.read(curr);
if (!success)
{
break;
}
// Convert the current frame to grayscale
cvtColor(curr, curr_gray, COLOR_BGR2GRAY);
// Calculate optical flow (i.e. track feature points)
//output status vector (uchar): 1 if the corresponding feature from the previous frame was found in the current frame, 0 otherwise
vector<uchar> status;
//output error vector
vector<float> err;
//Lucas-Kanade optical flow tracking
//arguments: previous gray image, current gray image, previous points, current points, status, error
calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, curr_pts, status, err);
// Keep only the points that were successfully tracked
//walk both point vectors in lockstep
auto prev_it = prev_pts.begin();
auto curr_it = curr_pts.begin();
for (size_t k = 0; k < status.size(); k++)
{
if (status[k])
{
prev_it++;
curr_it++;
}
//remove points that failed to track
else
{
prev_it = prev_pts.erase(prev_it);
curr_it = curr_pts.erase(curr_it);
}
}
// Find the transformation matrix
//false requests a constrained (partial) affine: rotation, translation, and uniform scale; true would estimate a full affine. T is the resulting 2x3 matrix
Mat T = estimateRigidTransform(prev_pts, curr_pts, false);
// In rare cases no transform is found.
// We'll just use the last known good transform.
//(if even the very first estimate fails, last_T is still empty and the code breaks, but that is very unlikely)
if (T.data == NULL)
{
last_T.copyTo(T);
}
T.copyTo(last_T);
// Extract translation
double dx = T.at<double>(0, 2);
double dy = T.at<double>(1, 2);
// Extract rotation angle
double da = atan2(T.at<double>(1, 0), T.at<double>(0, 0));
// Store the transform parameters
transforms.push_back(TransformParam(dx, dy, da));
// Move to the next frame (prepare for the next iteration)
curr_gray.copyTo(prev_gray);
cout << "Frame: " << i << "/" << n_frames << " - Tracked points : " << prev_pts.size() << endl;
}
// Compute trajectory using cumulative sum of transformations
vector<Trajectory> trajectory = cumsum(transforms);
// Smooth trajectory using a moving average filter
vector<Trajectory> smoothed_trajectory = smooth(trajectory, SMOOTHING_RADIUS);
//smoothed per-frame motion parameters
vector<TransformParam> transforms_smooth;
//walk over the original motion parameters
for (size_t i = 0; i < transforms.size(); i++)
{
// Calculate the difference between the smoothed and original trajectories
double diff_x = smoothed_trajectory[i].x - trajectory[i].x;
double diff_y = smoothed_trajectory[i].y - trajectory[i].y;
double diff_a = smoothed_trajectory[i].a - trajectory[i].a;
// Calculate the new (smoothed) per-frame transform
double dx = transforms[i].dx + diff_x;
double dy = transforms[i].dy + diff_y;
double da = transforms[i].da + diff_a;
transforms_smooth.push_back(TransformParam(dx, dy, da));
}
//seek back to the first frame
cap.set(CV_CAP_PROP_POS_FRAMES, 0);
//smoothed transformation matrix (preallocated 2x3 CV_64F for getTransform)
Mat T(2, 3, CV_64F);
Mat frame, frame_stabilized, frame_out;
//warp every frame to produce the stabilized result
//skip the first frame
cap.read(frame);
for (int i = 0; i < n_frames - 1; i++)
{
bool success = cap.read(frame);
if (!success)
{
break;
}
// Build the smoothed affine matrix from the translation and rotation angle
transforms_smooth[i].getTransform(T);
// Apply affine warping to the given frame
warpAffine(frame, frame_stabilized, T, frame.size());
// Scale the image slightly to remove black border artifacts
fixBorder(frame_stabilized);
// Draw the original and stabilized frames side by side
hconcat(frame, frame_stabilized, frame_out);
// If the image is too big, resize it.
if (frame_out.cols > 1920)
{
resize(frame_out, frame_out, Size(frame_out.cols / 2, frame_out.rows / 2));
}
//imshow("Before and After", frame_out);
out.write(frame_out);
cout << "out frame:" << i << endl;
//waitKey(10);
}
// Release video
cap.release();
out.release();
// Close windows
destroyAllWindows();
return 0;
}
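The full listing targets the OpenCV 2.x/3.x API (estimateRigidTransform, CV_CAP_PROP_FPS, CV_FOURCC), and pch.h is a Visual Studio precompiled header. To build outside Visual Studio, drop the pch.h include; a typical build line, assuming OpenCV 3 is visible to pkg-config (the module name varies by installation), is: g++ -std=c++11 video_stabilization.cpp -o video_stabilization $(pkg-config --cflags --libs opencv)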
Python code:
# Import numpy and OpenCV
import numpy as np
import cv2
def movingAverage(curve, radius):
window_size = 2 * radius + 1
# Define the filter
f = np.ones(window_size)/window_size
# Add padding to the boundaries
curve_pad = np.lib.pad(curve, (radius, radius), 'edge')
# Apply convolution
curve_smoothed = np.convolve(curve_pad, f, mode='same')
# Remove padding
curve_smoothed = curve_smoothed[radius:-radius]
# return smoothed curve
return curve_smoothed
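# A quick check with a hypothetical input:
#   movingAverage(np.array([0., 0., 3., 0., 0.]), radius=1)
# returns [0., 1., 1., 1., 0.]: every sample becomes the mean of a
# 3-wide window, and the edge padding keeps the output length equal
# to the input length.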
def smooth(trajectory):
smoothed_trajectory = np.copy(trajectory)
# Filter the x, y and angle curves
for i in range(3):
smoothed_trajectory[:, i] = movingAverage(
trajectory[:, i], radius=SMOOTHING_RADIUS)
return smoothed_trajectory
def fixBorder(frame):
s = frame.shape
# Scale the image 4% without moving the center
T = cv2.getRotationMatrix2D((s[1]/2, s[0]/2), 0, 1.04)
frame = cv2.warpAffine(frame, T, (s[1], s[0]))
return frame
# The larger the more stable the video, but less reactive to sudden panning
SMOOTHING_RADIUS = 50
# Read input video
cap = cv2.VideoCapture('video/detect.mp4')
# Get frame count
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# Our test video cannot be read reliably past frame 1300, so cap the frame count
n_frames = 1300
# Get width and height of video stream
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
# Get frames per second (fps)
fps = cap.get(cv2.CAP_PROP_FPS)
# Define the codec for output video
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
# Set up output video
out = cv2.VideoWriter('video_out.avi', fourcc, fps, (2 * w, h))
# Read first frame
_, prev = cap.read()
# Convert frame to grayscale
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Pre-define transformation-store array
transforms = np.zeros((n_frames-1, 3), np.float32)
for i in range(n_frames-2):
# Detect feature points in previous frame
prev_pts = cv2.goodFeaturesToTrack(prev_gray,
maxCorners=200,
qualityLevel=0.01,
minDistance=30,
blockSize=3)
# Read next frame
success, curr = cap.read()
if not success:
break
# Convert to grayscale
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
# Calculate optical flow (i.e. track feature points)
curr_pts, status, err = cv2.calcOpticalFlowPyrLK(
prev_gray, curr_gray, prev_pts, None)
# Sanity check
assert prev_pts.shape == curr_pts.shape
# Filter only valid points
idx = np.where(status == 1)[0]
prev_pts = prev_pts[idx]
curr_pts = curr_pts[idx]
# Find transformation matrix
# will only work with OpenCV-3 or less
m = cv2.estimateRigidTransform(prev_pts, curr_pts, fullAffine=False)
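# Note: estimateRigidTransform was removed in OpenCV 4. On OpenCV 4 the
# closest replacement (a sketch; it also returns an inlier mask) is:
#   m, _ = cv2.estimateAffinePartial2D(prev_pts, curr_pts)
# With either API, m can be None/empty when estimation fails; a robust
# version would reuse the previous frame's transform, as the C++ code
# above does.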
# Extract translation
dx = m[0, 2]
dy = m[1, 2]
# Extract rotation angle
da = np.arctan2(m[1, 0], m[0, 0])
# Store transformation
transforms[i] = [dx, dy, da]
# Move to next frame
prev_gray = curr_gray
print("Frame: " + str(i) + "/" + str(n_frames) +
" - Tracked points : " + str(len(prev_pts)))
# Compute trajectory using cumulative sum of transformations
trajectory = np.cumsum(transforms, axis=0)
# Create variable to store smoothed trajectory
smoothed_trajectory = smooth(trajectory)
# Calculate difference in smoothed_trajectory and trajectory
difference = smoothed_trajectory - trajectory
# Calculate newer transformation array
transforms_smooth = transforms + difference
# Reset stream to first frame
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
# Write n_frames-1 transformed frames
for i in range(n_frames-2):
# Read next frame
success, frame = cap.read()
if not success:
break
# Extract transformations from the new transformation array
dx = transforms_smooth[i, 0]
dy = transforms_smooth[i, 1]
da = transforms_smooth[i, 2]
# Reconstruct transformation matrix accordingly to new values
m = np.zeros((2, 3), np.float32)
m[0, 0] = np.cos(da)
m[0, 1] = -np.sin(da)
m[1, 0] = np.sin(da)
m[1, 1] = np.cos(da)
m[0, 2] = dx
m[1, 2] = dy
# Apply affine wrapping to the given frame
frame_stabilized = cv2.warpAffine(frame, m, (w, h))
# Fix border artifacts
frame_stabilized = fixBorder(frame_stabilized)
# Write the frame to the file
frame_out = cv2.hconcat([frame, frame_stabilized])
# If the image is too big, resize it.
if frame_out.shape[1] > 1920:
frame_out = cv2.resize(
frame_out, (frame_out.shape[1] // 2, frame_out.shape[0] // 2))
#cv2.imshow("Before and After", frame_out)
# cv2.waitKey(10)
out.write(frame_out)
# Release video
cap.release()
out.release()
# Close windows
cv2.destroyAllWindows()