It can't be done: the PostScript language does not support arbitrary opacity, only fully opaque and fully transparent. See this Wikipedia reference. Ghostscript, however, does support arbitrary opacity, as an extension to the PostScript language (extra operators such as .setopacityalpha); see here for details. This is how pstricks made an apparently-transparent EPS file.
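As a hedged illustration of the Ghostscript-only route, here is a minimal sketch of an EPS file using that extension (the filename and drawing are invented for illustration). The "where" guard makes a standard interpreter skip the Ghostscript-specific operator instead of failing on it:

```shell
# Write a tiny EPS that uses .setopacityalpha only if the interpreter
# defines it (i.e. only under Ghostscript). Content is a made-up demo.
cat > transparent-demo.eps <<'EOF'
%!PS-Adobe-3.0 EPSF-3.0
%%BoundingBox: 0 0 100 100
1 0 0 setrgbcolor
0 0 100 100 rectfill
% Guard the Ghostscript extension so other interpreters ignore it:
/.setopacityalpha where { pop 0.5 .setopacityalpha } if
0 0 1 setrgbcolor
25 25 50 50 rectfill
showpage
EOF
```

On Ghostscript the blue square is drawn at 50% opacity; on a strictly standard interpreter it is simply opaque.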
In all your dozens of questions about figure conversion, I don't recall that you have ever explained why you want to produce EPS versions of your figures (which, by the way, makes those questions much harder to answer). If your only reason is to use the figures in a latex -> dvips -> ps2pdf workflow, where you can guarantee that the conversion to PDF will use Ghostscript, then the .setopacityalpha method is appropriate (although if this is your goal, why not simply use PDF images with pdftex in the first place? Or at least add -Ppdf to your dvips invocation to make PDF-optimized EPS?). However, the reason people usually want EPS figures is that they are giving the figures to a publisher who does not accept PDF, in which case the publisher will almost certainly not accept Ghostscript-specific extensions either, and the .setopacityalpha method will fail as well.
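If you do control the whole toolchain, that dvips route can be sketched as a small script (filenames are placeholders; -Ppdf loads dvips's config.pdf, which embeds Type 1 fonts and sets options suited to later PDF conversion):

```shell
# Sketch of the latex -> dvips -> ps2pdf workflow. Requires a TeX
# installation and Ghostscript; "paper" is a hypothetical job name.
build_via_dvips() {
  latex "$1.tex" &&
  dvips -Ppdf "$1.dvi" -o "$1.ps" &&
  ps2pdf "$1.ps" "$1.pdf"
}
# Usage: build_via_dvips paper
```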
method will fail also. If you happen to know that the publisher uses Adobe Distiller, then there is another way to produce transparent-extended EPS, via the pdfmark
extension, described here. You can ask pstricks to use the pdfmark
method for opacity by replacing pstricks.con
by distiller.cfg
(distributed with pstricks). (You will have to ask your publisher to set /AllowTransparency true
in their joboptions file).
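For the Distiller case, the pdfmark route looks roughly like the following sketch. The guard line is the standard idiom that makes plain PostScript interpreters skip pdfmark; the /SetTransparency keys follow Adobe's pdfmark Reference, and the drawing itself is invented for illustration. Distiller only honors it when /AllowTransparency is true in the joboptions:

```shell
# Write a small PostScript file that asks Distiller for 50% opacity
# via the SetTransparency pdfmark (a Distiller extension, not standard
# PostScript). File name and graphics are hypothetical.
cat > transparency-pdfmark.ps <<'EOF'
%!PS
% Standard guard: define pdfmark as a no-op where it doesn't exist.
/pdfmark where { pop } { userdict /pdfmark /cleartomark load put } ifelse
% Request 50% constant fill (/ca) and stroke (/CA) opacity:
[ /ca 0.5 /CA 0.5 /BM /Normal /SetTransparency pdfmark
0 0 1 setrgbcolor
100 100 200 150 rectfill
showpage
EOF
```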
If you need a strictly standards-conforming EPS file that will work with all PostScript engines, then you need either to avoid transparency altogether or to use something like the ps2pdf-followed-by-pdftops method given in your batch file, which will rasterize the transparent parts of the image. (I think the next version of pdftops will at least allow you to specify a rasterization resolution.) Because of the rasterization, the EPS will in general be larger, of course.
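A sketch of that rasterizing round trip; the exact flags are the common Ghostscript/poppler ones, but treat them as assumptions and check against your installed versions:

```shell
# EPS -> PDF (Ghostscript), then PDF -> EPS again; pdftops rasterizes
# whatever it cannot express in level-2 PostScript, transparency
# included. Filenames are placeholders.
flatten_eps() {
  base="${1%.eps}"
  ps2pdf -dEPSCrop "$1" "$base.pdf" &&
  pdftops -eps "$base.pdf" "$base-flat.eps"
}
# Usage: flatten_eps figure.eps   # -> figure-flat.eps
```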
If I'm understanding the situation correctly, you're receiving a document with EPS figures from a customer, and you need to produce bulletproof color and grayscale PDFs that you will then pass on to a third party to RIP. You may not have control over the software used for the RIP, so you want to do whatever will minimize the probability of problems.
You're using epstopdf to convert the figures to PDF, and epstopdf is simply a wrapper for gs. That means that your current workflow is actually gs+gs+pdflatex. Passing the same figure through gs twice in a row seems like a relatively safe thing to do. If the PDF code generated by gs is buggy, then the bug is presumably present after the first pass, and it's not so likely that any new bug will creep in on the second pass.
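For reference, the call hiding behind epstopdf is roughly the following; the exact options differ between epstopdf versions, so treat these as assumptions rather than the definitive command line:

```shell
# Approximation of the Ghostscript invocation epstopdf wraps:
# pdfwrite device, with the page cropped to the EPS bounding box.
eps2pdf_direct() {
  gs -q -dNOPAUSE -dBATCH -dSAFER \
     -sDEVICE=pdfwrite -dEPSCrop \
     -sOutputFile="${1%.eps}.pdf" "$1"
}
# Usage: eps2pdf_direct figure.eps   # -> figure.pdf
```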
The gs+pdflatex+gs workflow sounds like a bad idea to me. I've had problems in the past where filtering pdflatex output through gs produced buggy pdf output, and the result was rejected by a RIP. The RIP was apparently correct to reject it, because the gs output was actually syntactically invalid. The bug was reported to the gs team, who responded by saying that they didn't care and wouldn't fix it: http://bugs.ghostscript.com/show_bug.cgi?id=693322
I would suggest that you do some preflight checking on the figures. It's safest if they use outlines rather than fonts. I've had frequent RIP problems with PDF figures that used transparency, so my current practice is that for any figure that uses transparency, I render it to a bitmap using pdftoppm and have pdflatex grab the bitmap. At 300 dpi they look quite good. Here are a couple of scripts I use to automate that process.
Render an SVG figure to PDF or PNG as appropriate:
http://www.lightandmatter.com/cgi-bin/gitweb.cgi?p=.git;a=blob;f=scripts/render_one_figure.pl
Do a preflight check on a figure:
http://www.lightandmatter.com/cgi-bin/gitweb.cgi?p=.git;a=blob;f=scripts/preflight_one_fig.pl
This is slightly different from what you want to do, since your source format is EPS rather than SVG, but I think you'll find that many of the preflight checks and conversion methods also apply to you. Since we're both on Linux, snippets you take from my code should work for you as well.
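The pdftoppm step described above can be sketched like this (the flags and output naming are the usual poppler-utils conventions, but verify them against your installed version):

```shell
# Render page 1 of a transparency-using PDF figure to a 300 dpi PNG,
# which pdflatex can then include instead of the vector original.
# Input filename is a placeholder.
rasterize_fig() {
  pdftoppm -png -r 300 -f 1 -l 1 "$1" "${1%.pdf}"
  # pdftoppm appends the page number, producing e.g. figure-1.png
}
# Usage: rasterize_fig figure.pdf
```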
In general, I don't think pdflatex does any processing at all on PDF figures; it simply copies them into its output. That means that any syntax errors in the PDFs will also be present as syntax errors in the output of pdflatex. It also means that any fonts embedded in the PDFs will be embedded in pdflatex's output. For example, if one of the figures contains a font whose license forbids redistribution, then that illegal font appears in your pdflatex output as well. It's a good idea to run the Linux utility pdffonts on both the figures and the final output so you can see what's going on. For all these reasons, the safest approach of all would be to render every single figure as a bitmap.
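A minimal sketch of that pdffonts check (pdffonts is part of poppler-utils; filenames are hypothetical):

```shell
# List the fonts in each given PDF (name, type, and whether embedded),
# so licensing or embedding surprises show up before the RIP does.
check_fonts() {
  for f in "$@"; do
    echo "== $f"
    pdffonts "$f"
  done
}
# Usage: check_fonts figures/*.pdf paper.pdf
```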
The pros of one tend to be cons of the other, so I'll just list features. What comes to mind straight away:
- dvips: \special features (pstricks, psfrag)
- dvipdfmx (formerly dvipdfm)
- xdvipdfmx: the fork that allows XeTeX to produce PDF output; while Mac OS X has xdv2pdf, on Windows/Linux there's no alternative.