
hadoop - Sqoop import to Hive hangs indefinitely at a certain point

coder 2024-01-06

I am trying to import a MySQL table into Hive using Sqoop import, but after executing the command the CLI goes quiet, nothing happens, and it hangs indefinitely. The command and the details of the problem are below.

[cloudera@quickstart bin]$ sqoop create-hive-table --connect jdbc:mysql://10.X.X.XX:XXXX/rkdb --username root -P --table employee --hive-table emps

Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/08/30 22:20:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.12.0
Enter password:
17/08/30 22:20:05 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
17/08/30 22:20:05 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
17/08/30 22:20:05 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/08/30 22:20:06 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM employee AS t LIMIT 1
17/08/30 22:20:06 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM employee AS t LIMIT 1
17/08/30 22:20:08 INFO hive.HiveImport: Loading uploaded data into Hive

Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-1.1.0-cdh5.12.0.jar!/hive-log4j.properties

Any clues would be greatly appreciated.

Edit: included --verbose in the command for detailed logging:

Note: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/08/31 01:34:13 DEBUG orm.CompilationManager: Could not rename /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee.java to /usr/lib/spark/examples/lib/./employee.java
17/08/31 01:34:13 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee.jar
17/08/31 01:34:13 DEBUG orm.CompilationManager: Scanning for .class files in directory: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac
17/08/31 01:34:13 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee$1.class -> employee$1.class
17/08/31 01:34:13 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee$2.class -> employee$2.class
17/08/31 01:34:13 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee$3.class -> employee$3.class
17/08/31 01:34:13 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee$4.class -> employee$4.class
17/08/31 01:34:13 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee$FieldSetterCommand.class -> employee$FieldSetterCommand.class
17/08/31 01:34:13 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee.class -> employee.class
17/08/31 01:34:13 DEBUG orm.CompilationManager: Finished writing jar file /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee.jar
17/08/31 01:34:13 WARN manager.MySQLManager: It looks like you are importing from mysql.
17/08/31 01:34:13 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
17/08/31 01:34:13 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
17/08/31 01:34:13 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
17/08/31 01:34:13 DEBUG manager.MySQLManager: Rewriting connect string to jdbc:mysql://10.0.2.15:3306/rkdb?zeroDateTimeBehavior=convertToNull
17/08/31 01:34:13 INFO mapreduce.ImportJobBase: Beginning import of employee
17/08/31 01:34:13 DEBUG util.ClassLoaderStack: Checking for existing class: employee
17/08/31 01:34:13 DEBUG util.ClassLoaderStack: Attempting to load jar through URL: jar:file:/tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee.jar!/
17/08/31 01:34:13 DEBUG util.ClassLoaderStack: Previous classloader is sun.misc.Launcher$AppClassLoader@7d487b8b
17/08/31 01:34:13 DEBUG util.ClassLoaderStack: Testing class in jar: employee
17/08/31 01:34:13 DEBUG util.ClassLoaderStack: Loaded jar into current JVM: jar:file:/tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee.jar!/
17/08/31 01:34:13 DEBUG util.ClassLoaderStack: Added classloader for jar /tmp/sqoop-cloudera/compile/5b58dd4e681df3737c3f8ce4f32013ac/employee.jar: java.net.FactoryURLClassLoader@2ea3741
17/08/31 01:34:16 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/08/31 01:34:16 DEBUG db.DBConfiguration: Securing password into job credentials store
17/08/31 01:34:16 DEBUG mapreduce.DataDrivenImportJob: Using table class: employee
17/08/31 01:34:16 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat: class com.cloudera.sqoop.mapreduce.db.DataDrivenDBInputFormat
17/08/31 01:34:19 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/sqoop-1.4.6-cdh5.12.0.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/share/java/mysql-connector-java-5.1.34-bin.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/sqoop-1.4.6-cdh5.12.0.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/sqoop-1.4.6-cdh5.12.0.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/avro.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/commons-lang3-3.4.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/jackson-mapper-asl-1.8.8.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/ant-contrib-1.0b3.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/parquet-hadoop.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/kite-hadoop-compatibility.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/commons-logging-1.1.3.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/slf4j-api-1.7.5.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/opencsv-2.3.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/ant-eclipse-1.0-jvm1.2.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/xz-1.0.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/commons-jexl-2.1.1.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/parquet-format.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/commons-compress-1.4.1.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/parquet-avro.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/commons-codec-1.4.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/parquet-encoding.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/parquet-jackson.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/kite-data-core.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/jackson-core-2.3.1.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/kite-data-mapreduce.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/parquet-column.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/avro-mapred-hadoop2.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/jackson-databind-2.3.1.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/paranamer-2.3.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/snappy-java-1.0.4.1.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/hsqldb-1.8.0.10.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/kite-data-hive.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/commons-io-1.4.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/jackson-annotations-2.3.1.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/jackson-core-asl-1.8.8.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/parquet-common.jar
17/08/31 01:34:19 DEBUG mapreduce.JobBase: Adding to job classpath: file:/usr/lib/sqoop/lib/fastutil-6.3.jar
17/08/31 01:34:20 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/10.0.2.15:8032
17/08/31 01:34:23 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1281)
    at java.lang.Thread.join(Thread.java:1355)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:952)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:690)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:879)
17/08/31 01:34:25 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1281)
    at java.lang.Thread.join(Thread.java:1355)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:952)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:690)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:879)
17/08/31 01:34:26 DEBUG db.DBConfiguration: Fetching password from job credentials store
17/08/31 01:34:26 INFO db.DBInputFormat: Using read commited transaction isolation
17/08/31 01:34:26 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `employee`
17/08/31 01:34:26 INFO db.IntegerSplitter: Split size: 125; Num splits: 4 from: 100 to: 600
17/08/31 01:34:26 DEBUG db.IntegerSplitter: Splits: [                         100 to                          600] into 4 parts
17/08/31 01:34:26 DEBUG db.IntegerSplitter:                          100
17/08/31 01:34:26 DEBUG db.IntegerSplitter:                          225
17/08/31 01:34:26 DEBUG db.IntegerSplitter:                          350
17/08/31 01:34:26 DEBUG db.IntegerSplitter:                          475
17/08/31 01:34:26 DEBUG db.IntegerSplitter:                          600
17/08/31 01:34:26 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '`id` >= 100' and upper bound '`id` < 225'
17/08/31 01:34:26 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '`id` >= 225' and upper bound '`id` < 350'
17/08/31 01:34:26 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '`id` >= 350' and upper bound '`id` < 475'
17/08/31 01:34:26 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '`id` >= 475' and upper bound '`id` <= 600'
17/08/31 01:34:26 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1281)
    at java.lang.Thread.join(Thread.java:1355)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:952)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:690)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:879)
17/08/31 01:34:26 INFO mapreduce.JobSubmitter: number of splits:4
17/08/31 01:34:27 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1504153900489_0005
17/08/31 01:34:29 INFO impl.YarnClientImpl: Submitted application application_1504153900489_0005
17/08/31 01:34:29 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1504153900489_0005/
17/08/31 01:34:29 INFO mapreduce.Job: Running job: job_1504153900489_0005
17/08/31 01:34:52 INFO mapreduce.Job: Job job_1504153900489_0005 running in uber mode : false
17/08/31 01:34:52 INFO mapreduce.Job:  map 0% reduce 0%
17/08/31 01:37:25 INFO mapreduce.Job:  map 50% reduce 0%
17/08/31 01:38:48 INFO mapreduce.Job:  map 100% reduce 0%
17/08/31 01:38:50 INFO mapreduce.Job: Job job_1504153900489_0005 completed successfully
17/08/31 01:38:51 INFO mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=609708
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=409
        HDFS: Number of bytes written=158
        HDFS: Number of read operations=16
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=8
    Job Counters 
        Launched map tasks=4
        Other local map tasks=4
        Total time spent by all maps in occupied slots (ms)=234310144
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=457637
        Total vcore-milliseconds taken by all map tasks=457637
        Total megabyte-milliseconds taken by all map tasks=234310144
    Map-Reduce Framework
        Map input records=6
        Map output records=6
        Input split bytes=409
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=8848
        CPU time spent (ms)=7840
        Physical memory (bytes) snapshot=445177856
        Virtual memory (bytes) snapshot=2910158848
        Total committed heap usage (bytes)=188219392
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=158
17/08/31 01:38:51 INFO mapreduce.ImportJobBase: Transferred 158 bytes in 271.6628 seconds (0.5816 bytes/sec)
17/08/31 01:38:51 INFO mapreduce.ImportJobBase: Retrieved 6 records.
17/08/31 01:38:51 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@7d487b8b
17/08/31 01:38:51 DEBUG hive.HiveImport: Hive.inputTable: employee
17/08/31 01:38:51 DEBUG hive.HiveImport: Hive.outputTable: rkdb_hive.employee11
17/08/31 01:38:51 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM `employee` AS t LIMIT 1
17/08/31 01:38:51 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
17/08/31 01:38:52 DEBUG manager.SqlManager: Using fetchSize for next query: -2147483648
17/08/31 01:38:52 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
17/08/31 01:38:52 DEBUG manager.SqlManager: Found column id of type [4, 11, 0]
17/08/31 01:38:52 DEBUG manager.SqlManager: Found column name of type [12, 20, 0]
17/08/31 01:38:52 DEBUG manager.SqlManager: Found column dept of type [12, 10, 0]
17/08/31 01:38:52 DEBUG manager.SqlManager: Found column salary of type [4, 10, 0]
17/08/31 01:38:52 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE `rkdb_hive.employee11` ( `id` INT, `name` STRING, `dept` STRING, `salary` INT) COMMENT 'Imported by sqoop on 2017/08/31 01:38:52' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054' LINES TERMINATED BY '\012' STORED AS TEXTFILE
17/08/31 01:38:52 DEBUG hive.TableDefWriter: Load statement: LOAD DATA INPATH 'hdfs://quickstart.cloudera:8020/user/cloudera/ImportSqoop12' INTO TABLE `rkdb_hive.employee11`
17/08/31 01:38:52 INFO hive.HiveImport: Loading uploaded data into Hive
17/08/31 01:38:52 DEBUG hive.HiveImport: Using in-process Hive instance.
17/08/31 01:38:52 DEBUG util.SubprocessSecurityManager: Installing subprocess security manager

Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-1.1.0-cdh5.12.0.jar!/hive-log4j.properties

Best answer

Please include --hive-import in your sqoop import command.
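To expand on that: sqoop create-hive-table only generates the Hive table definition, and a plain sqoop import only lands the rows in HDFS; the --hive-import flag is what chains the two together and loads the imported files into the Hive table. As a sketch, the corrected command might look like the following, reusing the masked connection details and table names from the question (adjust all of them to your environment):

sqoop import --connect jdbc:mysql://10.X.X.XX:XXXX/rkdb --username root -P --table employee --hive-import --hive-table emps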

Regarding "hadoop - Sqoop import to Hive hangs indefinitely at a certain point", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45973861/
