Error Running Full Analysis: Process fastp (1_R1) terminated with an error exit status (255) #41

Comments
Try running it with the read path inside quotes, i.e. change <path_to_reads>/*.fastq.gz to '<path_to_reads>/*.fastq.gz'
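The difference quoting makes can be seen with a small shell sketch (file names below are invented for illustration): an unquoted glob is expanded by the calling shell before Nextflow ever sees it, so --reads receives a list of individual files instead of the pattern Nextflow needs in order to pair the reads itself.

```shell
# Demo only: create two dummy read files (hypothetical names).
mkdir -p /tmp/reads_demo && cd /tmp/reads_demo
touch 1_R1.fastq.gz 1_R2.fastq.gz

# Unquoted: the shell expands the glob into separate arguments,
# so the program receives the file names, not the pattern.
set -- ./*.fastq.gz
echo "unquoted: $# arguments: $*"

# Quoted: the literal pattern survives as a single argument for
# Nextflow to expand and pair itself.
set -- './*_R{1,2}.fastq.gz'
echo "quoted: $# argument: $*"
```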
Thank you for your response! Running it again, I now receive the following error:

Error executing process > 'busco4_tri (1)'
Caused by:
Command executed:

echo -e "\n-- Starting BUSCO --\n"
busco -i 1.Trinity.fa -o 2.Trinity.bus4 -l ./DBs/busco_db/diptera_odb10/ -m tran -c 8 --offline
echo -e "\n-- DONE with BUSCO --\n"
cp 1.Trinity.bus4/short_summary..1.Trinity.bus4.txt .

Command exit status:
Command output:
-- Starting BUSCO --
INFO: ***** Start a BUSCO v4.1.4 analysis, current time: 04/03/2022 17:39:55 *****
Command error:
Work dir:
Tip: you can replicate the issue by changing to the process work dir and entering the command

However, the ./DBs/busco_db/diptera_odb10/ directory does exist, and Nextflow should have access to it. I tried to rerun the precheck_TransPi.sh script with different paths to BUSCO, but none of them worked. Could you please let me know what is causing the issue and how I could possibly resolve it? Thank you!
Check in your nextflow.config whether the DB path is provided correctly. In my case the mount point is linked to the TransPi root path, hence it worked well.
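For instance, something along these lines in nextflow.config (the parameter name and path below are illustrative assumptions, not TransPi's exact keys; check the config that precheck_TransPi.sh generated for you):

```groovy
// Hypothetical sketch of the relevant nextflow.config entry.
// The parameter name is an assumption; use whichever key your
// generated config actually defines for the BUSCO database.
params {
    // Absolute path that is visible from where the processes run
    // (i.e. inside the container/conda environment mount point):
    buscoDb = "/data/gn2311/transpi/TransPi/DBs/busco_db/diptera_odb10"
}
```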
I used the absolute path and the run was able to continue. I then ran into an issue with the RNAQUAST environment, but I was able to create the environment before resuming the run, as suggested in the other GitHub issue. I continued the run, but just received the following error message. It looks like there may be an issue with the device saying there is no space, potentially because only one core is being used, if I understand correctly. Could you please let me know if you have any suggestions as to how I can resolve this issue.

Error executing process > 'busco4_tri (1)'
Caused by:
Command executed:

echo -e "\n-- Starting BUSCO --\n"
busco -i 1.Trinity.fa -o 1.Trinity.bus4 -l /data/gn2311/transpi/TransPi/DBs/busco_db/diptera_odb10/ -m tran -c 8 --offline
echo -e "\n-- DONE with BUSCO --\n"
cp 1.Trinity.bus4/short_summary..1.Trinity.bus4.txt .

Command exit status:
Command output:
-- Starting BUSCO --
INFO: ***** Start a BUSCO v4.1.4 analysis, current time: 04/07/2022 08:52:49 *****
Command wrapper:
Work dir:

A fatal error has been detected by the Java Runtime Environment:
SIGBUS (0x7) at pc=0x00007f48eb134d45, pid=15518, tid=0x00007f4771ede700
JRE version: OpenJDK Runtime Environment (8.0_212-b03) (build 1.8.0_212-8u212-b03-0ubuntu1.18.04.1-b03)
Java VM: OpenJDK 64-Bit Server VM (25.212-b03 mixed mode linux-amd64 compressed oops)
Problematic frame:
C [libc.so.6+0x18ed45]
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
An error report file with more information is saved as:
/data/gn2311/transpi/TransPi/hs_err_pid15518.log
Segmentation fault (core dumped)
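A SIGBUS from libc during a Java run is a classic symptom of a memory-mapped file on a filesystem that has run out of space, which matches the "no space" suspicion above. Before resuming, it may be worth checking free space on the filesystems backing the Nextflow work directory and /tmp (the /data path below is taken from this thread's logs):

```shell
# Check free space where Nextflow writes its intermediate files;
# a full device here can surface as SIGBUS/segfaults in child tools.
df -h /tmp
df -h /data/gn2311/transpi/TransPi/work 2>/dev/null || df -h .
```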
The previous busco4 error was resolved after resuming the run again. I have since received an error in the diamond blastx step. I have tried to resume it multiple times, but continue to get the same error. Would you have any idea how to resolve the issue? Thank you again for your help.

Error executing process > 'swiss_diamond_trinotate (2)'
Caused by:
Command executed:

dbPATH=/data/gn2311/transpi/TransPi/DBs/sqlite_db/
v=$( diamond --version 2>&1 | tail -n 1 | cut -f 3 -d " " )

Command exit status:
-- Starting Diamond --
File /data/gn2311/transpi/TransPi/DBs/sqlite_db//uniprot_sprot.pep not found. Run the precheck to fix this issue
Work dir:
Tip: when you have fixed the problem you can continue the execution adding the option
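The error message itself points at the fix: the SwissProt files under DBs/sqlite_db/ are missing or incomplete. A quick check along these lines (the path is taken from the log above; the precheck invocation refers to the script mentioned earlier in this thread) shows whether the file is in place before resuming:

```shell
# Verify the file the pipeline says is missing actually exists.
DB=/data/gn2311/transpi/TransPi/DBs/sqlite_db
if [ -e "$DB/uniprot_sprot.pep" ]; then
    echo "uniprot_sprot.pep present; safe to resume"
else
    # Regenerate the databases, then resume the run.
    echo "uniprot_sprot.pep missing; rerun precheck_TransPi.sh first"
fi
```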
Hi,
I have an issue when trying to run the Full Analysis in TransPi. I have 8 files containing paired-end data, named 1_R1.fastq.gz, 1_R2.fastq.gz, etc., from which I would like to produce a de novo transcriptome.
I cloned the repository and ran the configuration with bash precheck_TransPi.sh, choosing the conda installation (option 1). Everything was downloaded and installed without any errors.
I tried to use the full analysis by running
"nextflow run TransPi.nf -- all --reads .<path_to_reads>/*.fastq.gz --k 25,35,55,75,85 --maxReadLen 150 -profile conda"
I receive the following error message:
Something went wrong. Check error message below and/or log files.
Error executing process > 'fastp (1_R1)'
Caused by:
Process
fastp (1_R1)
terminated with an error exit status (255)

Command executed:
fastp -i 1_R1.fastq.gz -I null -o left-1_R1.filter.fq -O right-1_R1.filter.fq --detect_adapter_for_pe --average_qual 5 --overrepresentation_analysis --html 1_R1.fastp.html --json 1_R1.fastp.json --thread 8 --report_title 1_R1
v=$( fastp --version 2>&1 | awk '{print $2}' )
echo "fastp: $v" >fastp.version.txt
Command exit status:
255
Command output:
(empty)
Command error:
ERROR: Failed to open file: null
Work dir:
/data/gn2311/transpi/TransPi/work/ae/ec5145900bb3af7866a1cb7e83f07f
Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named
.command.sh
I also checked the .command.sh file, which contains the following:
#!/bin/bash -ue
fastp -i 1_R1.fastq.gz -I null -o left-1_R1.filter.fq -O right-1_R1.filter.fq --detect_adapter_for_pe --average_qual 5 --overrepresentation_analysis --html 1_R1.fastp.html --json 1_R1.fastp.json --thread 8 --report_title 1_R1
v=$( fastp --version 2>&1 | awk '{print $2}' )
echo "fastp: $v" >fastp.version.txt
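For what it's worth, the -I null in that script is the tell: fastp's paired-input flag received the literal string "null" because no mate file was resolved for 1_R1.fastq.gz, and fastp then fails with "Failed to open file: null". A minimal sketch of that failure mode (assuming reads are paired by an _R1/_R2 naming pattern; file names invented for the demo):

```shell
# Demo: only an R1 file exists, so mate lookup falls back to "null",
# reproducing the "-I null" seen in .command.sh.
mkdir -p /tmp/pair_demo && cd /tmp/pair_demo
rm -f ./*.fastq.gz
touch 1_R1.fastq.gz          # note: no 1_R2.fastq.gz in this demo

for r1 in *_R1.fastq.gz; do
    r2=${r1/_R1/_R2}         # expected mate name
    [ -e "$r2" ] || r2=null  # mate missing -> literal "null"
    echo "fastp -i $r1 -I $r2"
done
# -> fastp -i 1_R1.fastq.gz -I null
```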
I am not sure what is causing the error; could you please let me know how to fix this issue?
Best,
Andy