bash - Unique set of entries in the first column of all CSV files under a directory -



I have a list of comma-separated files under a directory. There are no headers, and unfortunately the rows are not all the same length.

I want to find the unique entries in the first column across all the files.

What's the quickest way of doing this in shell?

awk -F"," '{print $1}' *.txt | uniq 

This only seems to give the unique entries of each file separately; I want them across all the files.
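Note that uniq only removes adjacent duplicate lines, so a sort is needed before duplicates across files will collapse. A minimal non-awk sketch, assuming the files all match *.txt:

cut -d, -f1 *.txt | sort -u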

The shortest, still using awk (this prints the whole row):

awk -F, '!a[$1]++' *.txt 

To print just the first field:

awk -F, '!a[$1]++ {print $1}' *.txt 
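For reference, !a[$1]++ counts each first field in the array a, and the pattern is true only the first time a value is seen, so only the first occurrence across all files is kept. A quick check with two hypothetical sample files (a.txt and b.txt are made up for illustration):

$ printf 'apple,1,2\nbanana,3\n' > a.txt
$ printf 'apple,9\ncherry,4,5,6\n' > b.txt
$ awk -F, '!a[$1]++ {print $1}' a.txt b.txt
apple
banana
cherry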
