A shell script that converts a CSV file to a JSON file
- First, open Git Bash in the folder where you have kept the .sh files.
- Copy the CSV file (input.csv) into the folder (csv-to-json-converter) that contains csvtojson.sh.
- Run "chmod a+x csvtojson.sh" once to make csvtojson.sh executable.
- Run the script from the Git Bash terminal with "bash ./csvtojson.sh csv/input.csv > json/output.json" (see the example session below).
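A typical session looks like this (the csv/ and json/ folders are assumed to already exist in the repository; adjust the paths to match your layout):

    cd csv-to-json-converter
    chmod a+x csvtojson.sh
    bash ./csvtojson.sh csv/input.csv > json/output.json
    head json/output.json    # quick sanity check of the generated JSON
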
- First, I downloaded the dataset from International Greenhouse Gas Emissions.
- Next, to convert it to a JSON file:
- csvtojson.sh: reads the CSV file line by line, takes the table headers as the keys of the objects in the JSON array 'data', and uses the data of the table as the value of each set of keys (a simplified sketch of this loop follows the example below).
Example: Suppose line 2 of the CSV file (line 1 of the CSV file contains the table headers) holds:
Australia,
2014,
393126.946994288,
carbon_dioxide_co2_emissions_without_land_use_land_use_change_and_forestry_lulucf_in_kilotonne_co2_equivalent
So after conversion, the JSON will look roughly like this:
{
  "data":
  [
    {
      "country_or_area":"Australia",
      "year":"2014",
      "value":"393126.946994288",
      "category":"carbon_dioxide_co2_emissions_without_land_use_land_use_change_and_forestry_lulucf_in_kilotonne_co2_equivalent"
    },
    ...
  ]
}
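The conversion loop itself can be sketched in a few lines of Bash. This is only a simplified illustration of the approach described above, not the full csvtojson.sh, and it assumes a plain CSV with no quoted fields or embedded commas:

    #!/bin/bash
    # Sketch: convert a simple CSV (no quoted fields, no embedded commas)
    # into a JSON document with a single array called "data".
    input=$1

    # Split the header line into an array of key names.
    read -r first_line < "$input"
    IFS=',' read -ra keys <<< "$first_line"

    printf '{\n"data":\n[\n'
    first_record=1
    # Emit one JSON object per remaining CSV record.
    tail -n +2 "$input" | while IFS=',' read -ra values; do
        [ "$first_record" -eq 1 ] || printf ',\n'
        first_record=0
        printf '{\n'
        for i in "${!keys[@]}"; do
            printf '"%s":"%s"' "${keys[$i]}" "${values[$i]}"
            [ "$i" -lt $((${#keys[@]} - 1)) ] && printf ','
            printf '\n'
        done
        printf '}'
    done
    printf '\n]\n}\n'

Quoting and escaping of special characters inside the values are deliberately ignored in this sketch. The pieces of the actual script are explained below.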
input=$1
Stores the first command-line argument (the path of the input CSV file) in input.
[ -z $1 ]
Checks whether the first argument ($1) is empty, i.e. whether an input file was supplied at all.
[ ! -e $input ]
Checks whether the file named by input actually exists.
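Put together, these guard clauses typically look something like the following (the exact error messages here are illustrative, not quoted from the script):

    input=$1

    # Abort if no argument was supplied.
    if [ -z "$input" ]; then
        echo "Usage: bash ./csvtojson.sh <input.csv>" >&2
        exit 1
    fi

    # Abort if the named file does not exist.
    if [ ! -e "$input" ]; then
        echo "Error: '$input' not found" >&2
        exit 1
    fi
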
read first_line < $input
Reads the first line of the input file and stores it in first_line.
attributes=`echo $first_line | awk -F, '{print NF}'`
The backquotes turn echo $first_line | awk -F, '{print NF}' into a command substitution: the pipeline is executed and its output is assigned to attributes, where:
- awk is a tool for pattern scanning and processing
- awk -F fs: fs is the input field separator (a comma here)
- NF is the count of input fields in the current record.
So this command returns the number of fields (columns) in the header line of the input file.
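As a quick standalone check, running the same pipeline on the header implied by the keys shown earlier reports four fields:

    echo "country_or_area,year,value,category" | awk -F, '{print NF}'
    # prints: 4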