Notes on training diffusion models with Colossal-AI
References
Using the colossalai and pytorch-lightning packages
# Case 1: HybridAdam
from pytorch_lightning import Trainer, LightningModule
from colossalai.nn.optimizer import HybridAdam

class MyDiffuser(LightningModule):
    ...

    def configure_sharded_model(self) -> None:
        # create your model here
        self.model = construct_diffuser_model(...)
        ...

    def configure_optimizers(self):
        # use the specified optimizer
        optimizer = HybridAdam(self.model.parameters(), self.lr)
        ...

model = MyDiffuser()
trainer = Trainer(accelerator="gpu", devices=1, precision=16, strategy="colossalai")
trainer.fit(model)
# Case 2: GPU memory optimization with ColossalAIStrategy
from pytorch_lightning import Trainer
from pytorch_lightning.strategies import ColossalAIStrategy

my_strategy = ColossalAIStrategy(use_chunk=True, enable_distributed_storage=True, placement_policy="auto")
trainer = Trainer(accelerator="gpu", devices=4, precision=16, strategy=my_strategy)
trainer.fit(model)  # model defined as in case 1
python main.py --logdir /your_log_dir -t -b config/train_colossalai.yaml

model:
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    your_sub_module_config:
      target: your.model.import.path
      params:
        from_pretrained: 'your_file_path/unet/diffusion_pytorch_model.bin'
        ...

lightning:
  trainer:
    strategy:
      target: pytorch_lightning.strategies.ColossalAIStrategy
      params:
        ...
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("your_ColoDiffusion_checkpoint_path").to("cuda")
image = pipe('your prompt', num_inference_steps=50)["sample"][0]
image.save('file path')
We provide the script train_colossalai.sh to run the training task with colossalai; train_ddp.sh runs the same task with DDP for comparison.
Training can be launched via train_colossalai.sh:
python main.py --logdir /tmp/ --train --base configs/train_colossalai.yaml --ckpt 512-base-ema.ckpt
You can change --logdir to decide where the log information and the last checkpoint are saved.
You will find your ckpt in logdir/checkpoints or logdir/diff_tb/version_0/checkpoints
You will find your training config yaml in logdir/configs
You can add --ckpt to load a pretrained model, for example 512-base-ema.ckpt
You can change --base to specify the path of the config yaml
Training config
Some parameters that can be modified in train_colossalai.yaml:
devices: number of devices used for training, default 8
max_epochs: max training epochs, default 2
precision: the precision used in training, default 16 (fp16); you must use fp16 to apply colossalai
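Putting the adjustable keys together, here is a minimal sketch of what the trainer section of train_colossalai.yaml might look like; the exact nesting under lightning.trainer follows the config excerpt earlier in these notes, and key placement should be treated as an assumption, not the authoritative config:

```yaml
lightning:
  trainer:
    devices: 8        # number of GPUs used for training
    max_epochs: 2
    precision: 16     # fp16 is required for the colossalai strategy
```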
Using Colossal-AI requires pytorch-lightning >= 1.8.1.
Colossal-AI has been integrated into PyTorch Lightning as its official large-model solution.
Step 2: install lightning
Install a Lightning version later than 2022.01.04; installing lightning from source is suggested.
Note that pytorch-lightning versions that are too old have no strategy argument, let alone ColossalAIStrategy;
in my tests both 1.4.1 and 1.6.1 errored or warned, and switching to 1.8.1 finally worked:
pip install pytorch-lightning==1.8.1
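To fail fast on an incompatible environment, a small version check can run before launching training. This pure-Python sketch (the helper name is my own) compares dotted version strings numerically; a real script would feed it pytorch_lightning.__version__:

```python
def meets_min_version(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '1.8.1' >= '1.6.1'."""
    to_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return to_tuple(installed) >= to_tuple(minimum)

# pytorch-lightning below 1.8.1 lacks ColossalAIStrategy
for version in ("1.4.1", "1.6.1", "1.8.1"):
    print(version, meets_min_version(version, "1.8.1"))
```

Note that naive string comparison would get this wrong (e.g. "1.13" < "1.8" lexicographically), which is why the sketch converts to integer tuples first.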
An RTX 3090 requires CUDA >= 11.6,
otherwise the following error appears:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
See the following article:
Fixing: NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with …
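The warning above boils down to a simple check: a PyTorch build ships kernels for a fixed list of compute capabilities, and the GPU's capability must be covered. A hedged sketch of that check, hard-coding the list from the warning so it stays self-contained (with torch installed one would query torch.cuda.get_device_capability(0) instead):

```python
# Compute capabilities baked into the failing PyTorch build,
# taken verbatim from the warning message above.
SUPPORTED_SM = {(3, 7), (5, 0), (6, 0), (6, 1), (7, 0), (7, 5)}

def build_supports(capability):
    """True if the installed build ships kernels for a GPU with this
    compute capability; the RTX 3090 is sm_86, i.e. (8, 6)."""
    return capability in SUPPORTED_SM

print(build_supports((8, 6)))   # RTX 3090: not covered by this build
print(build_supports((7, 5)))   # Turing cards are covered
```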
xformers requires pytorch >= 1.13.1
/home/user/anaconda3/envs/colossalai/lib/python3.7/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
This is likely a torchvision problem; reinstall PyTorch. The following versions were tested and work:
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge...
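The underlying issue is that torch and torchvision must come from matching releases. A sketch of a compatibility lookup limited to the pairs used in these notes (the table is my own, not an official list; consult the torchvision release notes for the full matrix):

```python
# torch release -> matching torchvision release (pairs used in this note)
TORCH_TO_TORCHVISION = {
    "1.12.1": "0.13.1",
    "1.13.1": "0.14.1",
}

def compatible(torch_v: str, torchvision_v: str) -> bool:
    """True if the two installed versions are a known-matching pair."""
    return TORCH_TO_TORCHVISION.get(torch_v) == torchvision_v

print(compatible("1.12.1", "0.13.1"))
print(compatible("1.13.1", "0.13.1"))
```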
=========================================================================================
No pre-built kernel is found, build and load the fused_optim kernel during runtime now
=========================================================================================
Detected CUDA files, patching ldflags
Emitting ninja build file /home/fangfei/.cache/colossalai/torch_extensions/torch1.12_cu11.6/build.ninja...
Building extension module fused_optim...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
...
[6/7] /home/fangfei/.conda/envs/tmm/bin/nvcc -DTORCH_EXTENSION_NAME=fused_optim -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/colossalai/kernel/cuda_native/csrc/kernels/include -I/home/fangfei/.conda/envs/tmm/include -isystem /home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/torch/include -isystem /home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/torch/include/TH -isystem /home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/torch/include/THC -isystem /home/fangfei/.conda/envs/tmm/include -isystem /home/fangfei/.conda/envs/tmm/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 --use_fast_math -lineinfo -gencode arch=compute_60,code=sm_60 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -std=c++14 -c /home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu -o multi_tensor_l2norm_kernel.cuda.o
/home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu: In function ‘std::tuple<at::Tensor, at::Tensor> multi_tensor_l2norm_cuda(int, at::Tensor, std::vector<std::vector<at::Tensor> >, c10::optional<bool>)’:
/home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu:289:89: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
(the same Tensor.data<T>() deprecation warning repeats dozens of times across multi_tensor_l2norm_kernel.cu and multi_tensor_apply.cuh; these are compiler warnings only, not errors)
[7/7] c++ colossal_C_frontend.o multi_tensor_sgd_kernel.cuda.o multi_tensor_scale_kernel.cuda.o multi_tensor_adam.cuda.o multi_tensor_l2norm_kernel.cuda.o multi_tensor_lamb.cuda.o -shared -L/home/fangfei/.conda/envs/tmm/lib/python3.8/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda_cu -ltorch_cuda_cpp -ltorch -ltorch_python -L/home/fangfei/.conda/envs/tmm/lib64 -lcudart -o fused_optim.so
Loading extension module fused_optim...
Time to load fused_optim op: 67.05005216598511 seconds
Segmentation fault (core dumped)
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
...
Although this does not affect the program, some searching suggests it comes from xformers.
Reference: fixing xformers installation errors in the stable diffusion webUI
...
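Since the triton warning is harmless but noisy, optional dependencies can be checked up front before launching training. A small sketch using only the standard library (the helper name is my own):

```python
import importlib.util

def has_module(name: str) -> bool:
    """True if `name` can be imported, without actually importing it."""
    return importlib.util.find_spec(name) is not None

# optional acceleration packages mentioned in the logs above
for optional in ("triton", "xformers"):
    if not has_module(optional):
        print(f"optional package '{optional}' missing; some optimizations disabled")
```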
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
DiffusionWrapper has 865.91 M params.
=========================================================================================
No pre-built kernel is found, build and load the cpu_adam kernel during runtime now
=========================================================================================
Emitting ninja build file /home/fangfei/.cache/colossalai/torch_extensions/torch1.13_cu11.7/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.413104295730591 seconds
=========================================================================================
No pre-built kernel is found, build and load the fused_optim kernel during runtime now
=========================================================================================
Detected CUDA files, patching ldflags
Emitting ninja build file /home/fangfei/.cache/colossalai/torch_extensions/torch1.13_cu11.7/build.ninja...
Building extension module fused_optim...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_optim...
Time to load fused_optim op: 1.8385069370269775 seconds
Segmentation fault (core dumped)
Running stable diffusion previously never hit this problem, so it must be either the code Colossal-AI adds or a package conflict.
An issue has been filed on the colossalai GitHub; waiting for a reply…
When running programs we often hit "Segmentation fault (core dumped)". It is usually caused by improper memory operations, such as reading/writing through null or dangling pointers, out-of-bounds array access, or corrupting constants. For a large program it is hard to pinpoint where the error occurs, so we use the core file to track it down.
[Troubleshooting] Finding the real cause of a "Segmentation fault (core dumped)" error
How to generate and debug the core file produced by a crashed Linux program
From these references, a core dumped problem can be inspected through its core file:
1. ulimit -c unlimited  # do not limit the size of core files
Afterwards, a core file is written to the default path whenever the program dumps core.
2. If no core file appears, redirect the output path, e.g. to the current directory:
sudo sysctl -w kernel.core_pattern=core.%p
3. Inspect the core file:
gdb sysbench core.37795
or, after starting gdb:
core-file core.7416
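Step 1 can also be done from inside the training script itself, which helps when the launcher environment resets ulimit. A sketch using the standard-library resource module (Unix only), raising the core-file size limit as far as the hard limit allows:

```python
import resource

# Raise the soft core-file size limit to the hard maximum
# (equivalent to `ulimit -c`, but scoped to this process).
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
print("core file limit (soft, hard):", resource.getrlimit(resource.RLIMIT_CORE))
```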
...
[New LWP 39080]
Core was generated by `python main.py --logdir tmp --train --base configs/train_colossalai_cifar10.yaml'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fba24575540 in ?? ()
[Current thread is 1 (LWP 37795)]
The key line is "Program terminated with signal SIGSEGV".
SIGSEGV means the program attempted to read or write memory outside the region allocated to it, or to write to a read-only region (dereferencing a null pointer, out-of-bounds access, using memory that was already freed…).
The frustrating part is that this covers so many possible causes that it is hard to narrow down.
Next attempt: debugging the Python process with gdb.
GDB debugging guide (enough to get started)
Debugging a Python process with gdb
Debugging Python programs with gdb
Still unresolved…
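Besides gdb, Python itself can print the Python-level traceback when the process receives SIGSEGV, via the standard-library faulthandler module. Enabling it at the top of main.py at least shows which Python line triggered the crash, even before digging into the core file:

```python
import faulthandler

# On SIGSEGV/SIGFPE/SIGABRT/SIGBUS, dump the Python traceback
# of all threads to stderr before the process dies.
faulthandler.enable()
print(faulthandler.is_enabled())
```

The same effect is available without code changes via `python -X faulthandler main.py ...` or by setting the PYTHONFAULTHANDLER environment variable.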