
python - Errors encountered with partial_fit in scikit-learn

coder · 2023-08-21

When I train with the partial_fit function in scikit-learn, I receive the following warning, yet the program does not terminate. How is that possible? The trained model still behaves correctly and produces correct output; how does that happen? Is this anything to worry about?

/usr/lib/python2.7/dist-packages/sklearn/naive_bayes.py:207: RuntimeWarning: divide by zero encountered in log
  self.class_log_prior_ = (np.log(self.class_count_)

I am using the following modified training function because I have to maintain a constant list of labels/classes, since partial_fit does not allow adding new classes/labels on subsequent runs; the class prior is the same in each batch of training data:

class MySklearnClassifier(SklearnClassifier):
    def train(self, labeled_featuresets,classes=None, partial=True):
        """
        Train (fit) the scikit-learn estimator.

        :param labeled_featuresets: A list of ``(featureset, label)``
            where each ``featureset`` is a dict mapping strings to either
            numbers, booleans or strings.
        """

        X, y = list(compat.izip(*labeled_featuresets))
        X = self._vectorizer.fit_transform(X)
        y = self._encoder.fit_transform(y)



        if partial:
            classes=self._encoder.fit_transform(list(set(classes)))
            self._clf.partial_fit(X, y, classes=list(set(classes)))
        else:
            self._clf.fit(X, y)

        return self

Also, on the second call to partial_fit it throws the following error with class count = 2000 and 3592 training samples, when calling model = self.train(featureset, classes=labels, partial=partial):

self._clf.partial_fit(X, y, classes=list(set(classes)))
  File "/usr/lib/python2.7/dist-packages/sklearn/naive_bayes.py", line 277, in partial_fit
    self._count(X, Y)
  File "/usr/lib/python2.7/dist-packages/sklearn/naive_bayes.py", line 443, in _count
    self.feature_count_ += safe_sparse_dot(Y.T, X)
ValueError: operands could not be broadcast together with shapes (2000,11430) (2000,10728) (2000,11430) 

Given the error thrown, where did I go wrong? Does it mean I am feeding in data of inconsistent dimensions? I tried the following, so that I now call:

        X = self._vectorizer.transform(X)
        y = self._encoder.transform(y)

on every partial_fit call. Earlier, I used fit_transform on every partial_fit call. Is this correct?

class MySklearnClassifier(SklearnClassifier):
    def train(self, labeled_featuresets, classes=None, partial=False):
        """
        Train (fit) the scikit-learn estimator.

        :param labeled_featuresets: A list of ``(featureset, label)``
            where each ``featureset`` is a dict mapping strings to either
            numbers, booleans or strings.
        """

        X, y = list(compat.izip(*labeled_featuresets))

        if partial:
            classes = self._encoder.fit_transform(np.unique(classes))
            X = self._vectorizer.transform(X)
            y = self._encoder.transform(y)
            self._clf.partial_fit(X, y, classes=list(set(classes)))
        else:
             X = self._vectorizer.fit_transform(X)
             y = self._encoder.fit_transform(y)
             self._clf.fit(X, y)

        return self._clf

After many attempts I got the following code working by special-casing the first call. However, I assumed the size of the pickled classifier file would grow after each iteration, yet I get a .pkl file of the same size for every batch, which should not be possible:

 class MySklearnClassifier(SklearnClassifier):

    def train(self, labeled_featuresets, classes=None, partial=False,firstcall=True):
        """
        Train (fit) the scikit-learn estimator.

        :param labeled_featuresets: A list of ``(featureset, label)``
            where each ``featureset`` is a dict mapping strings to either
            numbers, booleans or strings.
        """

        X, y = list(compat.izip(*labeled_featuresets))

        if partial:

           if firstcall:
                classes = self._encoder.fit_transform(np.unique(classes))
                X = self._vectorizer.fit_transform(X)
                y = self._encoder.fit_transform(y)
                self._clf.partial_fit(X, y, classes=classes)
           else:

                X = self._vectorizer.transform(X)
                y = self._encoder.fit_transform(y)
                self._clf.partial_fit(X, y)
        else:
             X = self._vectorizer.fit_transform(X)
             y = self._encoder.fit_transform(y)
             self._clf.fit(X, y)

        return self

The full code is as follows:

class postagger(ClassifierBasedTagger):
    """
    A classifier based postagger.
    """
    #MySklearnClassifier()
    def __init__(self, feature_detector=None, train=None,estimator=None,

                 classifierinstance=None, backoff=None,
                 cutoff_prob=None, verbose=True):

        if backoff is None:
            self._taggers = [self]
        else:
            self._taggers = [self] + backoff._taggers
        if estimator:
            classifier = MySklearnClassifier(estimator=estimator)
            #MySklearnClassifier.__init__(self, classifier)
        elif classifierinstance:
            classifier=classifierinstance






        if feature_detector is not None:
            self._feature_detector = feature_detector
            # The feature detector function, used to generate a featureset
            # for each token: feature_detector(tokens, index, history) -> featureset

        self._cutoff_prob = cutoff_prob
        """Cutoff probability for tagging -- if the probability of the
           most likely tag is less than this, then use backoff."""

        self._classifier = classifier
        """The classifier used to choose a tag for each token."""

        # if train and picklename:
        #     self._train(classifier_builder, picklename,tagged_corpus=train, ONLYERRORS=False,verbose=True,onlyfeatures=True ,LOADCONSTRUCTED=None)

    def legacy_getfeatures(self, tagged_corpus=None, ONLYERRORS=False, existingfeaturesetfile=None, verbose=True,
                           labels=artlabels):

        featureset = []
        labels=artlabels
        if not existingfeaturesetfile and tagged_corpus:
            if ONLYERRORS:

                classifier_corpus = open(tagged_corpus + '-ONLYERRORS.richfeature', 'w')
            else:
                classifier_corpus = open(tagged_corpus + '.richfeature', 'w')

            if verbose:
                print('Constructing featureset  for training corpus for classifier.')
            nlp = English()
            #df=pandas.DataFrame()
            store = HDFStore('featurestore.h5')



            for sentence in sPickle.s_load(open(tagged_corpus,'r')):
                untagged_words, tags, senindex = zip(*sentence)
                doc = nlp(u' '.join(untagged_words))
                # untagged_sentence, tags , rest = unpack_three(*zip(*sentence))
                for index in range(len(sentence)):
                    if ONLYERRORS:
                        if tags[index] == '<!SAME!>' and random.random() < 0.05:
                            featureset = self.new_feature_detector(doc, index)
                            sPickle.s_dump_elt((featureset, tags[index]), classifier_corpus)
                            featureset['label']=tags[index]
                            featureset['senindex']=str(senindex[0])
                            featureset['wordindex']=index
                            df=pandas.DataFrame([featureset])
                            store.append('df',df,index=False,min_itemsize = 150)
                            # classifier_corpus.append((featureset, tags[index]))
                        elif tags[index] in labels:
                            featureset = self.new_feature_detector(doc, index)
                            sPickle.s_dump_elt((featureset, tags[index]), classifier_corpus)
                            featureset['label']=tags[index]
                            featureset['senindex']=str(senindex[0])
                            featureset['wordindex']=index
                            df=pandas.DataFrame([featureset])
                            store.append('df',df,index=False,min_itemsize = 150)


                        # classifier_corpus.append((featureset, tags[index]))
        # else:
        #     for element in sPickle.s_load(open(existingfeaturesetfile, 'w')):
        #         featureset.append( element)

        return tagged_corpus + '.richfeature'

    def _train(self, featuresetdata, classifier_builder=MultinomialNB(), partial=False, batchsize=500):
        """
        Build a new classifier, based on the given training data
        *tagged_corpus*.

        """



        #labels = set(cPickle.load(open(arguments['-k'], 'r')))
        if partial==False:
           print('Training classifier FULLMODE')
           featureset = []
           for element in sPickle.s_load(open(featuresetdata, 'r')):
               featureset.append(element)

           model = self._classifier.train(featureset, classes=artlabels, partial=False,firstcall=True)
           print('Training complete, dumping')
           try:
            joblib.dump(model,  str(featuresetdata) + '-FULLTRAIN ' + slugify(str(classifier_builder))[:10] +'.mpkl')
            print "joblib dumped"
           except:
               print "joblib error"
           cPickle.dump(model, open(str(featuresetdata) + '-FULLTRAIN ' + slugify(str(classifier_builder))[:10] +'.cmpkl', 'w'))
           print('dumped')
           return
        #joblib.dump(self._classifier,str(datetime.datetime.now().hour)+'-naivebayes.pickle',compress=0)

        print('Training classifier each batch of {} training points'.format(batchsize))

        for i, batchelement in enumerate(batch(sPickle.s_load(open(featuresetdata, 'r')), batchsize)):
            featureset = []
            for element in batchelement:
                featureset.append(element)



            # model =  super(postagger, self).train (featureset, partial)
            # pdb.set_trace()
            # featureset = [item for sublist in featureset for item in sublist]
            trainsize = len(featureset)
            print("submitting {} training points for training\neg last one:".format(trainsize))
            for d, l in featureset:
                if len(d) != 113:
                    print d
                    assert False

            print featureset[-1]
            # pdb.set_trace()
            try:
                if i==0:
                    model = self._classifier.train(featureset, classes=artlabels, partial=True,firstcall=True)
                else:
                    model = self._classifier.train(featureset, classes=artlabels, partial=True,firstcall=False)

            except:
                type, value, tb = sys.exc_info()
                traceback.print_exc()
                pdb.post_mortem(tb)

            print('Training for batch {} complete, dumping'.format(i))
            cPickle.dump(model, open(
                str(featuresetdata) + '-' + slugify(str(classifier_builder))[
                                            :10] + 'UPDATED batch-{} of {} points.mpkl'.format(
                    i, trainsize), 'w'))
            print('dumped')
        #joblib.dump(self._classifier,str(datetime.datetime.now().hour)+'-naivebayes.pickle',compress=0)

    def untag(self,tagged_sentence):
        """
        Given a tagged sentence, return an untagged version of that
        sentence.  I.e., return a list containing the first element
        of each tuple in *tagged_sentence*.

            >>> from nltk.tag.util import untag
            >>> untag([('John', 'NNP'), ('saw', 'VBD'), ('Mary', 'NNP')])
            ['John', 'saw', 'Mary']

        """

        return [w[0] for w in tagged_sentence]

    def evaluate(self, gold):
        """
        Score the accuracy of the tagger against the gold standard.
        Strip the tags from the gold standard text, retag it using
        the tagger, then compute the accuracy score.

        :type gold: list(list(tuple(str, str)))
        :param gold: The list of tagged sentences to score the tagger on.
        :rtype: float
        """
        gold_tokens=[]
        full_gold_tokens=[]

        tagged_sents = self.tag_sents(self.untag(sent) for sent in gold)
        for sentence in gold:#flatten the list

            untagged_sentences, goldtags,type_feature,startpos_feature,sentence_feature,senindex_feature = zip(*sentence)


            gold_tokens.extend(zip(untagged_sentences,goldtags))
            full_gold_tokens.extend(zip( untagged_sentences, goldtags,type_feature,startpos_feature,sentence_feature,senindex_feature))





        test_tokens = sum(tagged_sents, []) #flatten the list
        getmismatch(gold_tokens,test_tokens,full_gold_tokens)
        return accuracy(gold_tokens, test_tokens)

    #
    def new_feature_detector(self, tokens, index):
        return getfeatures(tokens, index)


    def tag_sents(self, sentences):
        """
        Apply ``self.tag()`` to each element of *sentences*.  I.e.:

            return [self.tag(sent) for sent in sentences]
        """
        return [self.tag(sent) for sent in sentences]

    def tag(self, tokens):
        # docs inherited from TaggerI
        tags = []
        for i in range(len(tokens)):
            tags.append(self.tag_one(tokens, i))
        return list(zip(tokens, tags))

    def tag_one(self, tokens, index):
        """
        Determine an appropriate tag for the specified token, and
        return that tag.  If this tagger is unable to determine a tag
        for the specified token, then its backoff tagger is consulted.

        :rtype: str
        :type tokens: list
        :param tokens: The list of words that are being tagged.
        :type index: int
        :param index: The index of the word whose tag should be
            returned.
        :type history: list(str)
        :param history: A list of the tags for all words before *index*.
        """
        tag = None
        for tagger in self._taggers:
            tag = tagger.choose_tag(tokens, index)
            if tag is not None:  break
        return tag

    def choose_tag(self, tokens, index):
        # Use our feature detector to get the featureset.
        featureset = self.new_feature_detector(tokens, index)

        # Use the classifier to pick a tag.  If a cutoff probability
        # was specified, then check that the tag's probability is
        # higher than that cutoff first; otherwise, return None.

        if self._cutoff_prob is None:
            return self._classifier.prob_classify_many([featureset])
            #return self._classifier.classify_many([featureset])


        pdist = self._classifier.prob_classify_many([featureset])
        tag = pdist.max()
        return tag if pdist.prob(tag) >= self._cutoff_prob else None

Best Answer

1. RuntimeWarning

You get this warning because np.log is being called on 0:

In [6]: np.log(0)
/home/anaconda/envs/python34/lib/python3.4/site-packages/ipykernel/__main__.py:1: RuntimeWarning: divide by zero encountered in log
  if __name__ == '__main__':
Out[6]: -inf

That is because in one of your calls, some classes are not represented at all (their count is 0), so np.log gets called on 0. You do not need to worry about it.
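To make this concrete, here is a minimal sketch with synthetic data: after one partial_fit batch in which class 2 never appears, its count is 0 and its log prior is -inf (this is the np.log(0) that triggered the RuntimeWarning; newer scikit-learn versions may silence it), yet the classifier still predicts correctly for the classes it has seen.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

clf = MultinomialNB()
X = np.array([[1, 0], [0, 1]])
y = np.array([0, 1])

# Class 2 is declared up front but absent from this batch, so its
# count stays 0 and np.log(0) = -inf ends up in class_log_prior_.
clf.partial_fit(X, y, classes=[0, 1, 2])

print(clf.class_count_)                      # [1. 1. 0.]
print(np.isneginf(clf.class_log_prior_[2]))  # True
print(clf.predict(X))                        # [0 1], still correct for seen classes
```

A class with a -inf log prior simply can never win the argmax, so predictions for the observed classes are unaffected.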

2. Class priors

I am using the following modified training function as I have to maintain a constant list of labels\classes as the partial_fit does not allow adding new classes\labels on subsequent runs , the class prior is same in each batch of training data

  • You are right that if you use partial_fit, you need to pass the list of labels/classes from the very beginning.
  • I am not sure what you mean by the class prior being the same in each batch of training data. That could mean several different things, so it would be great if you could clarify it here.
    In the meantime, the default behaviour of classifiers such as MultinomialNB is that their priors are fitted to the data (basically, they compute frequencies). When you use partial_fit, they perform this computation incrementally, so that you get the same result as you would from a single fit call.
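The incremental-priors claim above can be checked directly on synthetic data: two partial_fit calls accumulate the same class counts, and therefore the same fitted priors and feature probabilities, as one fit on all the data.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)
X = rng.randint(0, 5, size=(100, 10))   # non-negative counts, as MultinomialNB expects
y = rng.randint(0, 3, size=100)

# Fit once on everything.
full = MultinomialNB().fit(X, y)

# Fit incrementally in two batches; classes is only needed on the first call.
inc = MultinomialNB()
inc.partial_fit(X[:50], y[:50], classes=[0, 1, 2])
inc.partial_fit(X[50:], y[50:])

print(np.allclose(full.class_log_prior_, inc.class_log_prior_))    # True
print(np.allclose(full.feature_log_prob_, inc.feature_log_prob_))  # True
```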

3. Your error

Also on the second call to partial_fit it throws following error for class count=2000 , and training samples are 3592 on calling model = self.train(featureset, classes=labels,partial=partial)

Here we need more details. I am confused: X should have shape (n_samples, n_features), but in the traceback it appears to be (2000, 11430). That would mean X has 2000 samples.

The error does mean that you are feeding in data of inconsistent dimensions. I suggest printing X.shape and y.shape after vectorization for each partial_fit call.

Also, you should not call fit or fit_transform on the vectorizer when transforming X for each partial_fit call: you should fit it once and only transform X afterwards. This ensures that you get consistent dimensions for the transformed X.
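A sketch of that pattern, with made-up feature dicts standing in for the asker's featuresets: the DictVectorizer vocabulary is fixed on the first batch, so every X passed to partial_fit has the same number of columns.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical batches of (featureset, label) pairs.
batches = [
    [({"suffix": "ing", "len": 7}, "VBG"), ({"suffix": "ed", "len": 5}, "VBD")],
    [({"suffix": "ly", "len": 6}, "RB"), ({"suffix": "ing", "len": 4}, "VBG")],
]
all_labels = ["VBG", "VBD", "RB"]   # full label universe, known up front

vec = DictVectorizer()
clf = MultinomialNB()

for i, batch in enumerate(batches):
    feats, labels = zip(*batch)
    if i == 0:
        X = vec.fit_transform(feats)   # fit the vocabulary exactly once
    else:
        X = vec.transform(feats)       # later batches reuse the same columns
    clf.partial_fit(X, labels, classes=all_labels if i == 0 else None)
    print(X.shape)                     # column count is identical across batches
```

The trade-off: features first seen in a later batch (here "suffix=ly") are silently dropped by transform. If that matters, either pre-fit the vectorizer on all data or use a stateless FeatureHasher instead.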

4. Your "earlier" solution

This is the code you told us you are using:

class MySklearnClassifier(SklearnClassifier):
    def train(self, labeled_featuresets, classes=None, partial=False):
        """
        Train (fit) the scikit-learn estimator.

        :param labeled_featuresets: A list of ``(featureset, label)``
            where each ``featureset`` is a dict mapping strings to either
            numbers, booleans or strings.
        """

        X, y = list(compat.izip(*labeled_featuresets))

        if partial:
            classes = self._encoder.fit_transform(np.unique(classes))
            X = self._vectorizer.transform(X)
            y = self._encoder.transform(y)
            self._clf.partial_fit(X, y, classes=list(set(classes)))
        else:
             X = self._vectorizer.fit_transform(X)
             y = self._encoder.fit_transform(y)
             self._clf.fit(X, y)

        return self._clf

As far as I can tell there is nothing terribly wrong with this, but we really need more context on how you are using it here.
A nitpick: I feel it would be clearer to make the classes variable a class attribute, since it needs to be identical for every partial_fit call.
You would also be doing something wrong here if you passed different values for the classes parameter across calls.
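One way to implement that nitpick, sketched as a standalone helper rather than the asker's actual NLTK subclass: encode the full label list once in the constructor and reuse it, so no caller can accidentally pass a different classes value later, and fit the vectorizer only on the first batch.

```python
import numpy as np

class PartialFitClassifier(object):
    """Hypothetical wrapper around a scikit-learn estimator that supports
    partial_fit, with the label universe fixed at construction time."""

    def __init__(self, clf, vectorizer, encoder, classes):
        self._clf = clf
        self._vectorizer = vectorizer
        self._encoder = encoder
        # Fix the label universe up front; partial_fit requires it to be
        # identical on every call.
        self._classes = self._encoder.fit_transform(np.unique(classes))
        self._first_call = True

    def train_batch(self, featuresets, labels):
        if self._first_call:
            X = self._vectorizer.fit_transform(featuresets)  # fit vocabulary once
            self._clf.partial_fit(X, self._encoder.transform(labels),
                                  classes=self._classes)
            self._first_call = False
        else:
            X = self._vectorizer.transform(featuresets)      # reuse the vocabulary
            self._clf.partial_fit(X, self._encoder.transform(labels))
        return self
```

For example, constructing it as PartialFitClassifier(MultinomialNB(), DictVectorizer(), LabelEncoder(), all_labels) and calling train_batch once per batch keeps both the columns of X and the classes consistent across calls.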

More information that would help us help you:

  • Printouts of X.shape and y.shape.
  • Context: how are you using the code you showed us?
  • What are you using for _vectorizer and _encoder? What classifier are you ultimately using?

Regarding "python - Errors encountered with partial_fit in scikit-learn", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32697093/
