The path of my original file contains Chinese characters, while the paths of the intermediate files do not.
First, it failed during the pegasus-transfer step.
The error message was: "UnicodeEncodeError: 'ascii' codec can't encode characters in position 47-49: ordinal not in range(128)".
So in the file /usr/bin/pegasus-transfer, I changed every occurrence of "get_src_path()" to "get_src_path().encode('utf-8')". That worked, but the workflow then failed in the pegasus-monitord job.
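To illustrate why that patch helps (the path below is a made-up example, not the real one from my workflow): under Python 2, passing a unicode path to a byte-oriented API triggers an implicit encode with the ASCII codec, which fails on Chinese characters, while an explicit .encode('utf-8') succeeds. The same two behaviors can be reproduced directly:

```python
# Illustrative path containing Chinese characters
src_path = u"/data/输入文件/sample.txt"

# Explicit UTF-8 encoding succeeds -- this is what the
# .encode('utf-8') patch does
utf8_bytes = src_path.encode('utf-8')
assert utf8_bytes.decode('utf-8') == src_path

# Encoding with the ASCII codec fails, which is what Python 2
# does implicitly and what produces the UnicodeEncodeError
try:
    src_path.encode('ascii')
except UnicodeEncodeError as e:
    print("fails as in the traceback:", e)
```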
The problem is again a UnicodeEncodeError: "UnicodeEncodeError: 'ascii' codec can't encode characters in position 890-898: ordinal not in range(128)".
Below is the complete traceback.
Traceback (most recent call last):
  File "/usr/bin/pegasus-monitord", line 1259, in <module>
    process_output = process_dagman_out(workflow_entry.wf, workflow_entry.ml_buffer[0:ml_pos])
  File "/usr/bin/pegasus-monitord", line 702, in process_dagman_out
    add(wf, my_jobid, "%s_SCRIPT_FAILURE" % (my_script), status=my_exit_code)
  File "/usr/bin/pegasus-monitord", line 551, in add
    wf.update_job_state(jobid, sched_id, my_job_submit_seq, event, status, my_time, reason)
  File "/usr/lib/python2.7/dist-packages/Pegasus/monitoring/workflow.py", line 2351, in update_job_state
    real_app_exitcode = self.parse_job_output(my_job, job_state)
  File "/usr/lib/python2.7/dist-packages/Pegasus/monitoring/workflow.py", line 1938, in parse_job_output
    my_invocation_found = my_job.extract_job_info( my_output)
  File "/usr/lib/python2.7/dist-packages/Pegasus/monitoring/job.py", line 469, in extract_job_info
    task_output = self.split_task_output( my_record["stdout"])
  File "/usr/lib/python2.7/dist-packages/Pegasus/monitoring/job.py", line 695, in split_task_output
    task_data.write( task_output )
UnicodeEncodeError: 'ascii' codec can't encode characters in position 890-898: ordinal not in range(128)
So, how should I change the files involved so that Chinese characters can be used in file paths?
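For reference, I suppose the same encode-before-write pattern from my pegasus-transfer patch could also be applied at the failing write in split_task_output, roughly as sketched below. This is only my guess, not the proper fix: task_output and the temp-file handling here are stand-ins for the real monitord code, which I have not inspected beyond the traceback.

```python
import os
import tempfile

# Stand-in for the task stdout captured by pegasus-monitord,
# containing a path with Chinese characters
task_output = u"stdout mentioning /data/输入文件/sample.txt"

# Encoding to UTF-8 before writing to a byte stream avoids the
# implicit ASCII encode that raises UnicodeEncodeError in Python 2
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as task_data:
    task_data.write(task_output.encode('utf-8'))

# Reading the bytes back and decoding recovers the original text
with open(path, 'rb') as f:
    assert f.read().decode('utf-8') == task_output
os.remove(path)
```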