
ios - I want to make a filter with more than two input textures using GPUImage, but I get a black output

coder 2023-09-28 (original question)

I want to make a new filter along the lines of GPUImage's GPUImageTwoInputFilter.

Here is my code. The base class, IFFourInputFilter, is modeled closely on GPUImageTwoInputFilter.

#import "IFFourInputFilter.h"

NSString *const kIFFourInputTextureVertexShaderString = SHADER_STRING
(
 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;
 attribute vec4 inputTextureCoordinate2;
 attribute vec4 inputTextureCoordinate3;
 attribute vec4 inputTextureCoordinate4;

 varying vec2 textureCoordinate;
 varying vec2 textureCoordinate2;
 varying vec2 textureCoordinate3;
 varying vec2 textureCoordinate4;

 void main()
 {
     gl_Position = position;
     textureCoordinate = inputTextureCoordinate.xy;
     textureCoordinate2 = inputTextureCoordinate2.xy;
     textureCoordinate3 = inputTextureCoordinate3.xy;
     textureCoordinate4 = inputTextureCoordinate4.xy;
 }
);


@implementation IFFourInputFilter

#pragma mark -
#pragma mark Initialization and teardown

- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
    if (!(self = [self initWithVertexShaderFromString:kIFFourInputTextureVertexShaderString fragmentShaderFromString:fragmentShaderString]))
    {
        return nil;
    }

    return self;
}

- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
{
    if (!(self = [super initWithVertexShaderFromString:vertexShaderString fragmentShaderFromString:fragmentShaderString]))
    {
        return nil;
    }

    inputRotation2 = kGPUImageNoRotation;
    inputRotation3 = kGPUImageNoRotation;
    inputRotation4 = kGPUImageNoRotation;

    hasSetTexture1 = NO;
    hasSetTexture2 = NO;
    hasSetTexture3 = NO;

    hasReceivedFrame1 = NO;
    hasReceivedFrame2 = NO;
    hasReceivedFrame3 = NO;
    hasReceivedFrame4 = NO;
    frameWasVideo1 = NO;
    frameWasVideo2 = NO;
    frameWasVideo3 = NO;
    frameWasVideo4 = NO;
    frameCheckDisabled1 = NO;
    frameCheckDisabled2 = NO;
    frameCheckDisabled3 = NO;
    frameCheckDisabled4 = NO;

    frameTime1 = kCMTimeInvalid;
    frameTime2 = kCMTimeInvalid;
    frameTime3 = kCMTimeInvalid;
    frameTime4 = kCMTimeInvalid;

    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageOpenGLESContext useImageProcessingContext];
        filterTextureCoordinateAttribute2 = [filterProgram attributeIndex:@"inputTextureCoordinate2"];

        filterInputTextureUniform2 = [filterProgram uniformIndex:@"inputImageTexture2"]; // This does assume a name of "inputImageTexture2" for second input texture in the fragment shader
        glEnableVertexAttribArray(filterTextureCoordinateAttribute2);

        filterTextureCoordinateAttribute3 = [filterProgram attributeIndex:@"inputTextureCoordinate3"];

        filterInputTextureUniform3 = [filterProgram uniformIndex:@"inputImageTexture3"]; // This does assume a name of "inputImageTexture3" for the third input texture in the fragment shader
        glEnableVertexAttribArray(filterTextureCoordinateAttribute3);

        filterTextureCoordinateAttribute4 = [filterProgram attributeIndex:@"inputTextureCoordinate4"];

        filterInputTextureUniform4 = [filterProgram uniformIndex:@"inputImageTexture4"]; // This does assume a name of "inputImageTexture4" for the fourth input texture in the fragment shader
        glEnableVertexAttribArray(filterTextureCoordinateAttribute4);
    });

    return self;
}

- (void)initializeAttributes;
{
    [super initializeAttributes];
    [filterProgram addAttribute:@"inputTextureCoordinate2"];
    [filterProgram addAttribute:@"inputTextureCoordinate3"];
    [filterProgram addAttribute:@"inputTextureCoordinate4"];
}

- (void)disableFrameCheck1;
{
    frameCheckDisabled1 = YES;
}

- (void)disableFrameCheck2;
{
    frameCheckDisabled2 = YES;
}

- (void)disableFrameCheck3;
{
    frameCheckDisabled3 = YES;
}

- (void)disableFrameCheck4;
{
    frameCheckDisabled4 = YES;
}

#pragma mark -
#pragma mark Rendering

- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates sourceTexture:(GLuint)sourceTexture;
{
    if (self.preventRendering)
    {
        return;
    }

    [GPUImageOpenGLESContext setActiveShaderProgram:filterProgram];
    [self setUniformsForProgramAtIndex:0];

    [self setFilterFBO];

    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT);

    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, sourceTexture);
    glUniform1i(filterInputTextureUniform, 2);

    glActiveTexture(GL_TEXTURE3);
    glBindTexture(GL_TEXTURE_2D, filterSourceTexture2);
    glUniform1i(filterInputTextureUniform2, 3);

    glActiveTexture(GL_TEXTURE4);
    glBindTexture(GL_TEXTURE_2D, filterSourceTexture3);
    glUniform1i(filterInputTextureUniform3, 4);

    glActiveTexture(GL_TEXTURE5);
    glBindTexture(GL_TEXTURE_2D, filterSourceTexture4);
    glUniform1i(filterInputTextureUniform4, 5);

    glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
    glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
    glVertexAttribPointer(filterTextureCoordinateAttribute2, 2, GL_FLOAT, 0, 0, [[self class] textureCoordinatesForRotation:inputRotation2]);
    glVertexAttribPointer(filterTextureCoordinateAttribute3, 2, GL_FLOAT, 0, 0, [[self class] textureCoordinatesForRotation:inputRotation3]);
    glVertexAttribPointer(filterTextureCoordinateAttribute4, 2, GL_FLOAT, 0, 0, [[self class] textureCoordinatesForRotation:inputRotation4]);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

- (void)releaseInputTexturesIfNeeded;
{
    if (shouldConserveMemoryForNextFrame)
    {
        [firstTextureDelegate textureNoLongerNeededForTarget:self];
        [textureDelegate2 textureNoLongerNeededForTarget:self];
        [textureDelegate3 textureNoLongerNeededForTarget:self];
        [textureDelegate4 textureNoLongerNeededForTarget:self];
        shouldConserveMemoryForNextFrame = NO;
    }
}

#pragma mark -
#pragma mark GPUImageInput

- (NSInteger)nextAvailableTextureIndex;
{
    if (!hasSetTexture1){
        return 0;
    }else if (!hasSetTexture2) {
        return 1;
    }else if (!hasSetTexture3) {
        return 2;
    }else{
        return 3;
    }
}

- (void)setInputTexture:(GLuint)newInputTexture atIndex:(NSInteger)textureIndex;
{
    switch (textureIndex) {
        case 0:
            filterSourceTexture = newInputTexture;
            hasSetTexture1 = YES;
            break;
        case 1:
            filterSourceTexture2 = newInputTexture;
            hasSetTexture2 = YES;
            break;
        case 2:
            filterSourceTexture3 = newInputTexture;
            hasSetTexture3 = YES;
            break;
        case 3:
            filterSourceTexture4 = newInputTexture;
            break;
        default:
            break;
    }
}

- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
{
    if (textureIndex == 0)
    {
        [super setInputSize:newSize atIndex:textureIndex];

        if (CGSizeEqualToSize(newSize, CGSizeZero))
        {
            hasSetTexture1 = NO;
        }
    }
}

- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
{
    switch (textureIndex) {
        case 0:
            inputRotation = newInputRotation;
            break;
        case 1:
            inputRotation2 = newInputRotation;
            break;
        case 2:
            inputRotation3 = newInputRotation;
            break;
        case 3:
            inputRotation4 = newInputRotation;
            break;
        default:
            break;
    }
}

- (CGSize)rotatedSize:(CGSize)sizeToRotate forIndex:(NSInteger)textureIndex;
{
    CGSize rotatedSize = sizeToRotate;

    GPUImageRotationMode rotationToCheck;
    switch (textureIndex) {
        case 0:
            rotationToCheck = inputRotation;
            break;
        case 1:
            rotationToCheck = inputRotation2;
            break;
        case 2:
            rotationToCheck = inputRotation3;
            break;
        case 3:
            rotationToCheck = inputRotation4;
            break;
        default:
            break;
    }

    if (GPUImageRotationSwapsWidthAndHeight(rotationToCheck))
    {
        rotatedSize.width = sizeToRotate.height;
        rotatedSize.height = sizeToRotate.width;
    }

    return rotatedSize;
}

- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    outputTextureRetainCount = [targets count];

    // You can set up infinite update loops, so this helps to short circuit them
    if (hasReceivedFrame1 && hasReceivedFrame2 && hasReceivedFrame3 && hasReceivedFrame4)
    {
        return;
    }

    BOOL updatedMovieFrameOppositeStillImage = NO;

    switch (textureIndex) {
        case 0:
            hasReceivedFrame1 = YES;
            frameTime1 = frameTime;
            if (frameCheckDisabled2)
            {
                hasReceivedFrame2 = YES;
            }
            if (frameCheckDisabled3)
            {
                hasReceivedFrame3 = YES;
            }
            if (frameCheckDisabled4)
            {
                hasReceivedFrame4 = YES;
            }

            if (!CMTIME_IS_INDEFINITE(frameTime))
            {
                if (CMTIME_IS_INDEFINITE(frameTime2) && CMTIME_IS_INDEFINITE(frameTime3) && CMTIME_IS_INDEFINITE(frameTime4))
                {
                    updatedMovieFrameOppositeStillImage = YES;
                }
            }
            break;
        case 1:
            hasReceivedFrame2 = YES;
            frameTime2 = frameTime;
            if (frameCheckDisabled1)
            {
                hasReceivedFrame1 = YES;
            }
            if (frameCheckDisabled3)
            {
                hasReceivedFrame3 = YES;
            }
            if (frameCheckDisabled4)
            {
                hasReceivedFrame4 = YES;
            }

            if (!CMTIME_IS_INDEFINITE(frameTime))
            {
                if (CMTIME_IS_INDEFINITE(frameTime1) && CMTIME_IS_INDEFINITE(frameTime3) && CMTIME_IS_INDEFINITE(frameTime4))
                {
                    updatedMovieFrameOppositeStillImage = YES;
                }
            }
            break;
        case 2:
            hasReceivedFrame3 = YES;
            frameTime3 = frameTime;
            if (frameCheckDisabled1)
            {
                hasReceivedFrame1 = YES;
            }
            if (frameCheckDisabled2)
            {
                hasReceivedFrame2 = YES;
            }
            if (frameCheckDisabled4)
            {
                hasReceivedFrame4 = YES;
            }

            if (!CMTIME_IS_INDEFINITE(frameTime))
            {
                if (CMTIME_IS_INDEFINITE(frameTime1) && CMTIME_IS_INDEFINITE(frameTime2) && CMTIME_IS_INDEFINITE(frameTime4))
                {
                    updatedMovieFrameOppositeStillImage = YES;
                }
            }
            break;
        case 3:
            hasReceivedFrame4 = YES;
            frameTime4 = frameTime;
            if (frameCheckDisabled1)
            {
                hasReceivedFrame1 = YES;
            }
            if (frameCheckDisabled3)
            {
                hasReceivedFrame3 = YES;
            }
            if (frameCheckDisabled2)
            {
                hasReceivedFrame2 = YES;
            }

            if (!CMTIME_IS_INDEFINITE(frameTime))
            {
                if (CMTIME_IS_INDEFINITE(frameTime1) && CMTIME_IS_INDEFINITE(frameTime3) && CMTIME_IS_INDEFINITE(frameTime2))
                {
                    updatedMovieFrameOppositeStillImage = YES;
                }
            }
            break;
        default:
            break;
    }

    // || (hasReceivedFirstFrame && secondFrameCheckDisabled) || (hasReceivedSecondFrame && firstFrameCheckDisabled)
    if ((hasReceivedFrame1 && hasReceivedFrame2 && hasReceivedFrame3 && hasReceivedFrame4) || updatedMovieFrameOppositeStillImage)
    {
        [super newFrameReadyAtTime:frameTime atIndex:0];
        hasReceivedFrame1 = NO;
        hasReceivedFrame2 = NO;
        hasReceivedFrame3 = NO;
        hasReceivedFrame4 = NO;
    }
}

- (void)setTextureDelegate:(id<GPUImageTextureDelegate>)newTextureDelegate atIndex:(NSInteger)textureIndex;
{
    switch (textureIndex) {
        case 0:
            firstTextureDelegate = newTextureDelegate;
            break;
        case 1:
            textureDelegate2 = newTextureDelegate;
            break;
        case 2:
            textureDelegate3 = newTextureDelegate;
            break;
        case 3:
            textureDelegate4 = newTextureDelegate;
            break;
        default:
            break;
    }
}

@end

A class named IFAmaroFilter extends IFFourInputFilter.

#import "IFAmaroFilter.h"

NSString *const kIFAmaroFilterFragmentShaderString = SHADER_STRING
(
 precision lowp float;

 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;
 uniform sampler2D inputImageTexture2; //blowout;
 uniform sampler2D inputImageTexture3; //overlay;
 uniform sampler2D inputImageTexture4; //map

 void main()
 {
     vec4 texel = texture2D(inputImageTexture, textureCoordinate);
     vec3 bbTexel = texture2D(inputImageTexture2, textureCoordinate).rgb;

     texel.r = texture2D(inputImageTexture3, vec2(bbTexel.r, texel.r)).r;
     texel.g = texture2D(inputImageTexture3, vec2(bbTexel.g, texel.g)).g;
     texel.b = texture2D(inputImageTexture3, vec2(bbTexel.b, texel.b)).b;

     vec4 mapped;
     mapped.r = texture2D(inputImageTexture4, vec2(texel.r, 0.16666)).r;
     mapped.g = texture2D(inputImageTexture4, vec2(texel.g, .5)).g;
     mapped.b = texture2D(inputImageTexture4, vec2(texel.b, .83333)).b;
     mapped.a = 1.0;

     gl_FragColor = texel;
 }
 );

@implementation IFAmaroFilter

- (id)init;
{
    if (!(self = [super initWithFragmentShaderFromString:kIFAmaroFilterFragmentShaderString]))
    {
        return nil;
    }

    return self;
}

@end

When I use the filter I get a black output. The code is as follows:

    filter = [[IFAmaroFilter alloc] init];
    GPUImagePicture *gp1 = [[GPUImagePicture alloc] initWithImage:[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"blackboard1024" ofType:@"png"]]];
    GPUImagePicture *gp2 = [[GPUImagePicture alloc] initWithImage:[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"overlayMap" ofType:@"png"]]];
    GPUImagePicture *gp3 = [[GPUImagePicture alloc] initWithImage:[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"amaroMap" ofType:@"png"]]];

    [stillCamera addTarget:filter atTextureLocation:0];
    [gp1 addTarget:filter atTextureLocation:1];
    [gp1 processImage];
    [gp2 addTarget:filter atTextureLocation:2];
    [gp2 processImage];
    [gp3 addTarget:filter atTextureLocation:3];
    [gp3 processImage];
    [filter addTarget:(GPUImageView *)self.view];

Accepted answer

I found that the GPUImagePicture instances were being autoreleased, so the filter never received their textures. If you run into the same problem, check the lifetime of your textures carefully and watch when they are released.
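A minimal sketch of the fix this answer implies: keep strong references to the GPUImagePicture sources so ARC does not deallocate them before the filter renders. The surrounding view controller and the ivar names are illustrative assumptions, not from the original post; the GPUImage calls mirror the questioner's own code.

```objectivec
// Assumption: ARC is enabled, and this setup runs inside the view
// controller that owns stillCamera. Storing the pictures in strong
// ivars (instead of local variables, as in the question) keeps them
// alive until the filter has consumed their textures.
@interface IFCameraViewController () {
    IFAmaroFilter *filter;
    GPUImagePicture *blowoutSource;   // feeds inputImageTexture2
    GPUImagePicture *overlaySource;   // feeds inputImageTexture3
    GPUImagePicture *mapSource;       // feeds inputImageTexture4
}
@end

@implementation IFCameraViewController

- (void)setupFilterChain
{
    filter = [[IFAmaroFilter alloc] init];

    blowoutSource = [[GPUImagePicture alloc] initWithImage:
        [UIImage imageNamed:@"blackboard1024.png"]];
    overlaySource = [[GPUImagePicture alloc] initWithImage:
        [UIImage imageNamed:@"overlayMap.png"]];
    mapSource = [[GPUImagePicture alloc] initWithImage:
        [UIImage imageNamed:@"amaroMap.png"]];

    [stillCamera addTarget:filter atTextureLocation:0];
    [blowoutSource addTarget:filter atTextureLocation:1];
    [blowoutSource processImage];
    [overlaySource addTarget:filter atTextureLocation:2];
    [overlaySource processImage];
    [mapSource addTarget:filter atTextureLocation:3];
    [mapSource processImage];
    [filter addTarget:(GPUImageView *)self.view];
}

@end
```

The change from the question's version is only ownership: the three pictures move from method-local variables to ivars, so they survive past the end of the setup method.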

For "ios - I want to make a filter with more than two input textures using GPUImage, but I get a black output", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14514949/
