castano / nvidia-texture-tools
Texture processing tools with support for Direct3D 10 and 11 formats.
Home Page: https://github.com/castano/nvidia-texture-tools/wiki
License: Other
What steps will reproduce the problem?
Load any non-compressed (DDPF_RGB) DDS image that has a zero alpha mask
(for example with the nvdecompress utility)
What is the expected output? What do you see instead?
The load hangs indefinitely.
What version of the product are you using? On what operating system?
Latest svn (revision 94).
Please provide any additional information below.
A patch which fixes the problem is attached. Note that I was not entirely
sure how to fix the convert function, but it seems to work.
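For context, a common cause of this kind of hang is deriving a shift count from a channel bit mask by looping until the low bit is set, which never terminates when the mask is zero. A minimal sketch of a guarded version (hypothetical helper names; the actual fix is in the attached patch):

```cpp
#include <cstdint>

// Compute the right-shift needed to align a channel mask's lowest set bit.
// Returns 0 for a zero mask instead of looping forever.
static int maskShift(uint32_t mask)
{
    if (mask == 0) return 0;   // zero alpha mask: nothing to extract
    int shift = 0;
    while ((mask & 1) == 0)
    {
        mask >>= 1;
        shift++;
    }
    return shift;
}
```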
Original issue reported on code.google.com by [email protected]
on 7 Oct 2007 at 6:03
Attachments:
What steps will reproduce the problem?
1. Open any DDS file with, for instance, DXT5 compression
2. Run nvdecompress on the file
What is the expected output? What do you see instead?
I see a 24-bit TGA file without an alpha channel. I expect to see a
32-bit TGA file with an alpha channel instead.
What version of the product are you using? On what operating system?
Latest svn (revision 489) on Fedora 8.
Please provide any additional information below.
Attached are a DDS file, the TGA output before and after the patch, and the patch fixing the issue.
Original issue reported on code.google.com by [email protected]
on 23 Mar 2008 at 10:50
Attachments:
There's no DDS-to-TGA converter in the toolset (as there used to be in nvdxt). I patched one up.
Original issue reported on code.google.com by [email protected]
on 19 Jun 2007 at 2:05
Attachments:
This includes:
- Add support for input images in fp16 and fp32 formats.
- Add options for floating point clamping and quantization (simple tone
mapping).
- Add support for floating point output, RGBE, and other encodings.
Original issue reported on code.google.com by [email protected]
on 26 Jan 2008 at 8:57
The function ImageEXR::loadFloatEXR assumes that the channels in the EXR
file are in the order R, G, B, A. This does not seem to be generally the
case. I have a file with the channels in the order A, B, G, R.
My fix is below. It pulls out the name of each channel and uses that to
figure out the channel index.
namespace
{
    int channelIndexFromName(const char * name)
    {
        char c = tolower(name[0]);
        switch (c)
        {
            case 'r': return 0;
            case 'g': return 1;
            case 'b': return 2;
            default:  return 3;
        }
    }
}

FloatImage * nv::ImageIO::loadFloatEXR(const char * fileName, Stream & s)
{
    nvCheck(s.isLoading());
    nvCheck(!s.isError());

    ExrStream stream(fileName, s);
    Imf::InputFile inputFile(stream);

    Imath::Box2i box = inputFile.header().dataWindow();
    int width = box.max.x - box.min.x + 1;
    int height = box.max.y - box.min.y + 1;

    const Imf::ChannelList & channels = inputFile.header().channels();

    // Count channels.
    uint channelCount = 0;
    for (Imf::ChannelList::ConstIterator it = channels.begin(); it != channels.end(); ++it)
    {
        channelCount++;
    }

    // Allocate FloatImage.
    AutoPtr<FloatImage> fimage(new FloatImage());
    fimage->allocate(channelCount, width, height);

    // Describe the image's layout with a framebuffer.
    Imf::FrameBuffer frameBuffer;
    for (Imf::ChannelList::ConstIterator it = channels.begin(); it != channels.end(); ++it)
    {
        int channelIndex = channelIndexFromName(it.name());
        frameBuffer.insert(it.name(), Imf::Slice(Imf::FLOAT,
            (char *)fimage->channel(channelIndex), sizeof(float), sizeof(float) * width));
    }

    // Read the pixels.
    inputFile.setFrameBuffer(frameBuffer);
    inputFile.readPixels(box.min.y, box.max.y);

    return fimage.release();
}
Original issue reported on code.google.com by [email protected]
on 19 May 2008 at 7:14
Add support for normal map encoding using DXT1. The quality is poor, but on
some hardware DXT1 has great advantages over DXT5.
Original issue reported on code.google.com by [email protected]
on 5 Dec 2007 at 9:02
Scaling support similar to the older texture tool would be nice to have.
This is useful for multi-platform development where you want to scale
textures to different sizes depending on how much memory you may have.
Original issue reported on code.google.com by [email protected]
on 26 Sep 2007 at 4:46
On G80 hardware mipmap generation takes longer than compression. An accelerated
implementation of the mipmap convolution filter would result in a great speedup.
Original issue reported on code.google.com by [email protected]
on 17 Apr 2007 at 9:03
including 3D texture compression formats (DXT and VTC).
Original issue reported on code.google.com by [email protected]
on 26 Jan 2008 at 8:58
This will prevent dependencies on specific versions of the CUDA runtime.
Since NVTT is designed to be included in third party applications, these
applications might be using a conflicting version of the runtime. To
resolve this issue, libraries and plugins should use the driver API instead.
Original issue reported on code.google.com by [email protected]
on 14 Dec 2007 at 9:03
What steps will reproduce the problem?
1. Building using "cmake . && make" on Fedora 7 x86_64
What is the expected output? What do you see instead?
It should build normally, but instead:
[per@localhost nvidia-texture-tools]$ make
Scanning dependencies of target nvcore
[ 2%] Building CXX object src/nvcore/CMakeFiles/nvcore.dir/Memory.o
/home/per/devinstall/nvidia-texture-tools/src/nvcore/Memory.cpp:1: error:
CPU you selected does not support x86-64 instruction set
/home/per/devinstall/nvidia-texture-tools/src/nvcore/Memory.cpp:1: error:
CPU you selected does not support x86-64 instruction set
make[2]: *** [src/nvcore/CMakeFiles/nvcore.dir/Memory.o] Error 1
make[1]: *** [src/nvcore/CMakeFiles/nvcore.dir/all] Error 2
make: *** [all] Error 2
What version of the product are you using? On what operating system?
Both version 0.9.4 and svn trunk revision 76 fail as above.
Please provide any additional information below.
Patch that should fix the problem is attached. It detects g++ and sets the
use of SSE3 in a way that should not break with x86_64 processors.
Original issue reported on code.google.com by [email protected]
on 18 Aug 2007 at 10:25
Attachments:
What steps will reproduce the problem?
1. svn revision 30 does not compile out of the box on OS X.
What is the expected output? What do you see instead?
Error in Debug.cpp, line 170.
What version of the product are you using? On what operating system?
svn revision 30, OS X 10.4.9 on Intel.
Please provide any additional information below.
A couple of places in the code use the NV_OS_OSX macro, but it looks like that should be NV_OS_DARWIN.
Attached a file that makes codebase compile.
Original issue reported on code.google.com by [email protected]
on 28 May 2007 at 5:50
Attachments:
Add support for the Xbox 360 gamma curve. See:
Alex Vlachos, "Post Processing in The Orange Box," Game Developer's
Conference, February 2008
http://www.valvesoftware.com/publications.html
The Xbox 360 approximates the gamma curve with a piecewise linear function. We
currently assume that the destination gamma space is exact.
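As an illustration only (not the Xbox 360's actual hardware-defined breakpoints), encoding through a piecewise linear approximation amounts to interpolating between a few fixed control points instead of evaluating pow() exactly:

```cpp
#include <cmath>

// Evaluate a piecewise linear approximation of a gamma curve
// y = x^(1/2.2) using a small set of control points. The breakpoints
// here are illustrative; the Xbox 360 uses its own fixed segments.
static float piecewiseGamma(float x)
{
    static const int N = 5;
    static const float px[N] = { 0.0f, 0.0625f, 0.25f, 0.5f, 1.0f };
    static const float py[N] = {
        0.0f,
        std::pow(0.0625f, 1.0f / 2.2f),
        std::pow(0.25f,   1.0f / 2.2f),
        std::pow(0.5f,    1.0f / 2.2f),
        1.0f
    };
    for (int i = 1; i < N; i++)
    {
        if (x <= px[i])
        {
            // Linear interpolation within the segment.
            float t = (x - px[i-1]) / (px[i] - px[i-1]);
            return py[i-1] + t * (py[i] - py[i-1]);
        }
    }
    return 1.0f;
}
```

Supporting this on the output side would mean quantizing against the piecewise curve rather than the exact power function.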
Original issue reported on code.google.com by [email protected]
on 21 Mar 2008 at 8:31
I've integrated the texture tool into our tool pipeline, which uses msbuild
to execute tasks based on dependency checking. I'd like it if there was an
option so that the error messages followed the msbuild error format:
http://blogs.msdn.com/msbuild/archive/2006/11/03/msbuild-visual-studio-aware-error-messages-and-message-formats.aspx
Original issue reported on code.google.com by [email protected]
on 12 Oct 2007 at 1:18
The Texture Tools library does not have documentation.
This weekend I started writing a tutorial in the wiki:
http://code.google.com/p/nvidia-texture-tools/wiki/ApiDocumentation
I plan to finish it for the 1.0 release.
Original issue reported on code.google.com by [email protected]
on 28 Nov 2007 at 3:23
For some images, the 64-bit version of the tools doesn't produce the same
output as the 32-bit version, which is very unsettling. It doesn't happen
for the Lena picture, for example, but it does for many other pictures
(I've attached a full testcase).
Unzip the testcase on a regular 64-bit machine and run:
32\nvcompress32.exe -bc1 causeway.dds 32_causeway.dds
64\nvcompress64.exe -bc1 causeway.dds 64_causeway.dds
That should reproduce the error.
The tools were compiled against revision 563 in the source repository.
This happens somewhat frequently for me: out of 141 files, 7 exhibited
these differences.
Original issue reported on code.google.com by [email protected]
on 24 May 2008 at 12:54
Attachments:
Mip Map Filtering options from the old texture tool would be nice to have
back in the new tool.
Original issue reported on code.google.com by [email protected]
on 26 Sep 2007 at 4:48
The image loading libraries used by NVTT use the prebuilt libraries
provided by the gnuwin32 project. These libraries are only available for
win32, so on win64 the loaders for jpeg and png are not available.
A simple solution would be to use stb_image as a fallback for systems where
the standard libraries are not available:
https://mollyrocket.com/forums/viewtopic.php?t=315
Original issue reported on code.google.com by [email protected]
on 11 Feb 2008 at 7:14
What steps will reproduce the problem?
1. Use __SSE1__ define (remove __SSE2__) when compiling the library
2. Execute compression on an AMD Athlon CPU
3. Output files are invalid
What is the expected output? What do you see instead?
Instead of normal DDS-compressed images, the output is either filled
completely with one color, or it almost looks like the desired output but
with a different color tone and strange lines on it.
What version of the product are you using? On what operating system?
Version: Nvidia texture tools 2 Alpha
OS: Windows XP SP2 32bit
Original issue reported on code.google.com by [email protected]
on 14 Dec 2007 at 2:41
Could we have an option to turn off the %<percentage complete> output in
myOutputHandler::writeData()?
Original issue reported on code.google.com by [email protected]
on 12 Oct 2007 at 1:21
What steps will reproduce the problem?
1. Convert from a TGA with an alpha channel to a DXT-compressed format
(DXT1a, DXT3, DXT5 have alpha)
2. CompressionOptions::setQuantization( true, false, false );
What is the expected output? What do you see instead?
Instead of seeing a similar image to the input image, the resulting image
is completely opaque, with alpha values of 255 right across the board.
What version of the product are you using? On what operating system?
Tested with NVidia Texture Tools 2.0.3 on Windows XP (32-bit).
Please provide any additional information below.
It seems this line is discarding any alpha information in the source image:
Color32 pixel16 = toColor32( toColor16(pixel32) );
Line 117 of nvimage/Quantize.cpp
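A sketch of the kind of fix the report implies (the Color32 field layout and helper names here are assumptions for illustration, not the actual nvimage code): quantize the RGB channels through 16-bit precision, but carry the original alpha through instead of dropping it.

```cpp
#include <cstdint>

struct Color32 { uint8_t b, g, r, a; };   // field layout assumed for illustration

// Quantize 8-bit channels to R5G6B5 precision with bit expansion,
// but keep the source alpha instead of discarding it.
static Color32 quantizeRGB565KeepAlpha(Color32 c)
{
    uint8_t r5 = c.r >> 3;
    uint8_t g6 = c.g >> 2;
    uint8_t b5 = c.b >> 3;

    Color32 out;
    out.r = (r5 << 3) | (r5 >> 2);   // expand 5 -> 8 bits by replication
    out.g = (g6 << 2) | (g6 >> 4);   // expand 6 -> 8 bits by replication
    out.b = (b5 << 3) | (b5 >> 2);
    out.a = c.a;                     // preserve alpha; the reported bug dropped it
    return out;
}
```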
Original issue reported on code.google.com by [email protected]
on 11 Jun 2008 at 8:59
Using latest r447.
Running a debug build of nvcompress.exe with the options "-fast -bc3n"
triggers the "nvDebugCheck(c0.a <= c1.a);" assert at
nv::compressBlock_BoundsRange (line 1171 in fastcompressdxt.cpp).
When running without asserts, it seems to compress the texture fine. Faulty assert?
Using Visual Studio 2005, Windows XP 64-bit, latest NVIDIA drivers, and an 8800 GT.
Original issue reported on code.google.com by [email protected]
on 25 Feb 2008 at 3:06
Round instead of truncating. When rounding, take bit expansion into account.
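The idea, sketched with a hypothetical helper (not the NVTT code): when quantizing 8 bits down to n bits whose value will later be expanded back by bit replication, pick the n-bit code whose expansion is nearest to the input, rather than simply truncating.

```cpp
#include <cstdint>
#include <cstdlib>

// Expand an n-bit value back to 8 bits by bit replication (assumes bits >= 4).
static uint8_t expand(uint8_t v, int bits)
{
    return (uint8_t)((v << (8 - bits)) | (v >> (2 * bits - 8)));
}

// Quantize an 8-bit value to n bits, rounding with respect to the
// bit-expanded result instead of truncating.
static uint8_t quantizeRound(uint8_t v, int bits)
{
    uint8_t t = v >> (8 - bits);                       // truncated candidate
    uint8_t best = t;
    int bestErr = abs((int)expand(t, bits) - (int)v);
    uint8_t maxCode = (uint8_t)((1 << bits) - 1);
    if (t < maxCode)
    {
        // Check whether the next code up expands closer to the input.
        int err = abs((int)expand(t + 1, bits) - (int)v);
        if (err < bestErr) best = t + 1;
    }
    return best;
}
```

For example, quantizing 7 to 5 bits truncates to code 0 (which expands to 0, error 7), while code 1 expands to 8 (error 1), so rounding picks 1.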
Original issue reported on code.google.com by [email protected]
on 20 Mar 2008 at 1:38
Add support for YCoCg-DXT5 texture compression as described in:
http://developer.nvidia.com/object/real-time-ycocg-dxt-compression.html
Original issue reported on code.google.com by [email protected]
on 1 Nov 2007 at 5:47
Currently the NVTT library has to be compiled for a specific instruction
set. The SSE2 code path is 40% faster than the SSE code path, and the
non-SSE path is 3-4 times slower still. It would be nice to automatically
select the best code path dynamically according to the available CPU
capabilities.
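A sketch of what runtime dispatch could look like on GCC/Clang x86, using the compiler's __builtin_cpu_supports (the path names and function pointers here are illustrative, not NVTT's API; in practice each path would live in a translation unit compiled with the matching instruction-set flags):

```cpp
// Select the best available code path at startup.
typedef const char * (*CompressFunc)();

static const char * compressSSE2()   { return "sse2"; }
static const char * compressSSE()    { return "sse"; }
static const char * compressScalar() { return "scalar"; }

static CompressFunc selectCompressor()
{
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    // Query CPU capabilities at runtime instead of baking them in at build time.
    if (__builtin_cpu_supports("sse2")) return compressSSE2;
    if (__builtin_cpu_supports("sse"))  return compressSSE;
#endif
    return compressScalar;
}
```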
Original issue reported on code.google.com by [email protected]
on 12 Dec 2007 at 10:27
Any chance you could put together an updated package of win32 binaries +
source code?
I'm trying to just grab the source code via svn, but am running into
issues... hrrm.
-sam
Original issue reported on code.google.com by [email protected]
on 27 Nov 2007 at 2:43
DXT1a is now supported, but the CUDA compressor is missing.
Original issue reported on code.google.com by [email protected]
on 4 Feb 2008 at 7:56
When compressing dxt1n with CUDA hardware, the packed normal maps have 4x4
errors (messed-up blocks).
I tested both debug and release builds and ran:
nvcompress -bc1n golfball_nm.tga
The texture is 256x256 and packs fine without such errors when using
-fast or -nocuda.
The errors seem to be semi-random, appearing in places similar to other,
error-free places in the texture, so I would guess the input data is not at fault.
Running latest r447.
Using Windows XP 64-bit, Visual Studio 2005, latest NVIDIA drivers, and an 8800 GT.
Original issue reported on code.google.com by [email protected]
on 25 Feb 2008 at 3:34
It's nice that the texture tools have been released under the MIT license,
but it looks like US citizens can't legally use them. The patent on block
compression (S3TC) won't expire until 2017, and the MIT license doesn't grant
me (or other users) any use of the S3TC patent. Simon Brown isn't located in
the USA, so the patent doesn't apply to him. I've wanted to use Simon
Brown's library many times, but alas, it would be illegal to compile it in
the USA without separately licensing the S3TC patent...
US S3TC Patent
http://patft.uspto.gov/netacgi/nph-Parser?u=%2Fnetahtml%2Fsrchnum.htm&Sect1=PTO1
&Sect2=HITOFF&p=1&r=1&l=50&f=G&d=PALL&s1=5956431.PN.&OS=PN/5956431&RS=PN/5956431
I know NVidia has a license for the S3TC patent, but unless NVidia is
sub-licensing the S3TC patent along with the code, it appears this can't be
used by US citizens.
Original issue reported on code.google.com by [email protected]
on 15 Jun 2007 at 1:26
Cubemap support from the older texture tool:
-cubeMap : create cube map.
    Cube faces specified as individual files with the -list option:
    positive x, negative x, positive y, negative y, positive z, negative z.
    Use the -output option to specify the filename.
    Or cube faces specified in one file: use -file to specify the input filename.
Original issue reported on code.google.com by [email protected]
on 26 Sep 2007 at 5:00
As suggested at
http://developer.nvidia.com/object/newsletter_tip_archive.html#21, you can
get nicer results if you sharpen some mipmap levels after they are
generated (this functionality was available in the old Texture Tools).
Artists on my game want to use the "Sharpen soft" filter for some mipmap
levels (which is available in the Photoshop DDS plugin).
Original issue reported on code.google.com by [email protected]
on 21 Dec 2007 at 11:42
Support for DXT1a has been requested by many developers. It should be easy to
add by using the
weighted compressor. Need CUDA implementation of weighted cluster fit
compressor.
Original issue reported on code.google.com by [email protected]
on 17 Apr 2007 at 9:00
Support for the A8L8 format.
Regarding how to treat the RGBA input data, it could be either by using a
set convention (like R->A, G->L) or allowing it to be set as an option.
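With the fixed R->A, G->L convention, packing is nearly a one-liner. A sketch (assuming D3D's A8L8 layout, which stores luminance in the low byte and alpha in the high byte):

```cpp
#include <cstdint>

// Pack an RGBA pixel into A8L8 using the fixed convention R->A, G->L.
// A8L8 layout: luminance in the low byte, alpha in the high byte.
static uint16_t packA8L8(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    (void)b; (void)a;                 // unused under this convention
    uint8_t alpha = r;                // R -> A
    uint8_t lum   = g;                // G -> L
    return (uint16_t)((alpha << 8) | lum);
}
```

Making the channel mapping an option would just parameterize which source channels feed `alpha` and `lum`.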
Original issue reported on code.google.com by [email protected]
on 15 May 2007 at 11:18
The alpha channel of the image can be interpreted in different ways. Some
images use it as an independent channel, others use it for transparency,
while others assume that colors are premultiplied by the alpha channel.
The compressor should behave in different ways according to that. Currently
fast compressors behave as if the alpha was independent, but the others
behave as if it was used for transparency.
There's an API available to specify the alpha mode:
/// Alpha mode.
enum AlphaMode
{
    AlphaMode_None,
    AlphaMode_Transparency,
    AlphaMode_Premultiplied,
};

void InputOptions::setAlphaMode(AlphaMode alphaMode);
but it's not being used yet.
Original issue reported on code.google.com by [email protected]
on 4 Feb 2008 at 11:23
What steps will reproduce the problem?
When compressing a single-colored texture (32 by 32 pixels, all colored
0x40404080) to DXT5 with FastCompressDXT, c0 and c1 get the same value in
nv::compressBlock_BoundsRange, which triggers
nvDebugCheck(block->color.col0.u > block->color.col1.u);
on first entry to this function.
What version of the product are you using? On what operating system?
using latest 2.0.1 (revision 445) with visual studio 2005, windows xp 64.
Please provide any additional information below.
I changed the assert comparison to '>=', and it seemed to work fine. I didn't
check for other issues with the block palette being a single color; there
might be DXT1a issues?
Original issue reported on code.google.com by [email protected]
on 15 Feb 2008 at 4:44
What steps will reproduce the problem?
1. Open any 3DC compressed normal map.
2. Decompress with nvdecompress.
3. Open it in any image editor.
What is the expected output? What do you see instead?
I expect that the image includes a blue channel for the Z-component. But
the blue channel is missing. Comparing the decompressed picture with dds
previewers such as WTV and ddsview also shows that the decompressed image
is wrong.
What version of the product are you using? On what operating system?
Latest svn (489) on Fedora 8.
Please provide any additional information below.
Attached are the patch, a test dds file, and two screenshots of the dds
file (one from WTV and one from ddsview). Note that in the patch, I
additionally had to change the X and Y mapping: I had to map X to the green
channel and Y to the red channel for the decompressed texture to be the
same as the previews. Obviously something else is wrong (the 3DC specs
clearly state that the X component should be in the red channel and Y in
the green... ???). In any case, with this patch the decompression seems to
give the same results as the other third-party programs.
Original issue reported on code.google.com by [email protected]
on 26 Mar 2008 at 11:42
Attachments:
What steps will reproduce the problem?
1. Create a 256x256 tga with RGB 29,29,31.
2. Compress using both the nVidia Photoshop plugin, as well as the
command-line tool nvcompress. DXT5 (bc3), nomips.
What is the expected output? What do you see instead?
Output from Photoshop plugin is a monochromatic image of 30, 29, 32.
Output from command-line tool is 33, 28, 33. I expect to see equivalent or
better results from the command-line tool, but in this case the Photoshop
plugin delivers superior results.
What version of the product are you using? On what operating system?
2.0.1 on Windows XP.
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 29 Apr 2008 at 11:46
This lib compress only DXT1 format and not DXT5.
Do you plan to support this format soon (With cuda)?
Original issue reported on code.google.com by [email protected]
on 22 Jun 2007 at 3:09
I found that attempting to convert an image to A4R4G4B4 format was the
easiest way to reproduce this issue. Instead of seeing a full image in the
output, the second half of the image is zeroed (right side). This was
tested with NVidia Texture Tools release 2.0.3 on Windows XP (32 bit).
If my suspicions are correct, it seems that the padding is clobbering data
because it makes an assumption about the byte-width of the image.
Please find attached what I believe to be a reasonable fix for this issue.
Calls made:
CompressionOptions::setFormat( nvtt::Format_RGBA );
CompressionOptions::setPixelFormat( 16, 0x0f00, 0x00f0, 0x000f, 0xf000 );
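The suspected assumption amounts to computing the row pitch from a hard-coded byte width instead of the actual bits per pixel. A sketch of a general row-pitch computation (a hypothetical helper, not the attached patch):

```cpp
#include <cstdint>

// Bytes per row for a given width and bits-per-pixel, rounding the row
// up to whole bytes. A 16-bpp format like A4R4G4B4 needs 2 bytes per
// pixel, not the 4 a hard-coded 32-bpp assumption would give.
static uint32_t rowPitch(uint32_t width, uint32_t bitsPerPixel)
{
    return (width * bitsPerPixel + 7) / 8;
}
```

Padding the scanline with `rowPitch(width, 16)` rather than `width * 4` would avoid clobbering the second half of each row.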
Original issue reported on code.google.com by [email protected]
on 11 Jun 2008 at 4:06
Attachments:
Non-power-of-two mipmap generation requires the use of a polyphase filter
for high quality results. Currently NVTT implements a polyphase box filter
as explained here:
http://developer.nvidia.com/object/np2_mipmapping.html
However, higher quality filters are not implemented. An efficient
implementation would cache the kernel for each phase and reuse it for
each row (or column).
I'm not sure what's the best way of doing that in CUDA, though.
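For reference, the polyphase box filter for the odd case N = 2M+1 -> M blends three input texels per output texel with per-phase weights that sum to N. A minimal 1D sketch (the weight scheme follows the linked article; edge handling and layout are my assumptions):

```cpp
#include <vector>

// 1D polyphase box downsample for the odd case: N = 2*M + 1 -> M texels.
// Output i blends inputs 2i, 2i+1, 2i+2 with weights (M-i, M, i+1)/N,
// which sum to N, so constant signals are preserved exactly.
static std::vector<float> downsampleOdd(const std::vector<float> & in)
{
    const int N = (int)in.size();   // assumed odd, N = 2M+1
    const int M = N / 2;
    std::vector<float> out(M);
    for (int i = 0; i < M; i++)
    {
        out[i] = ((M - i) * in[2*i] + M * in[2*i + 1] + (i + 1) * in[2*i + 2]) / N;
    }
    return out;
}
```

A higher-quality polyphase filter would replace the three box weights with a wider kernel per phase; caching those per-phase kernels is the reuse opportunity mentioned above.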
Original issue reported on code.google.com by [email protected]
on 8 May 2007 at 10:40
The texture tools are written in C++. That makes it difficult to use them
from C or any other language that relies on C for its bindings, like
blitzmax, lua, mono, etc.
Original issue reported on code.google.com by [email protected]
on 10 Dec 2007 at 6:11
NVTT crashes when it runs out of memory while generating mipmaps of an
8192 x 8192 texture on 32-bit OSes. It would be nice to detect the problem
and fail with a descriptive error code.
A better alternative would be to actually support unlimited image sizes by
processing the image in tiles.
Original issue reported on code.google.com by [email protected]
on 11 Feb 2008 at 8:51
The following snippet of code produces an unhandled exception when run:
nvtt::CompressionOptions compressionOptions;
compressionOptions.setFormat( nvtt::Format_DXT5 );
compressionOptions.setQuality( nvtt::Quality_Fastest );
Although the following works:
nvtt::CompressionOptions compressionOptions;
compressionOptions.setFormat( nvtt::Format_DXT5 );
compressionOptions.setQuality( nvtt::Quality_Production );
This was tested on Windows XP using nvidia-texture-tools-2.0.2. The NVidia
Texture Tools are compiled in Debug without CUDA support.
Original issue reported on code.google.com by [email protected]
on 6 Jun 2008 at 10:22
I had some trouble compiling the nvidia-texture-tools on Mac OS X 10.5
(Leopard):
1. type not found
nvidia-texture-tools 2/src/nvcore/DefsGnucDarwin.h:54: error: ‘uint8_t’
does not name a type
nvidia-texture-tools 2/src/nvcore/DefsGnucDarwin.h:57: error: ‘uint16_t’
does not name a type
nvidia-texture-tools 2/src/nvcore/DefsGnucDarwin.h:60: error: ‘uint32_t’
does not name a type
nvidia-texture-tools 2/src/nvcore/DefsGnucDarwin.h:63: error: ‘uint64_t’
does not name a type
nvidia-texture-tools 2/src/nvcore/DefsGnucDarwin.h:67: error: ‘uint32’ does
not name a type
fixed: added #include <stdint.h> inside the DefsGnucDarwin.h file
2. Backtrace & abi
nvidia-texture-tools 2/src/nvcore/Debug.cpp: In function
‘void<unnamed>::nvPrintStackTrace(void**, int, int)’:
nvidia-texture-tools 2/src/nvcore/Debug.cpp:132: error: ‘backtrace_symbols’
was not declared
in this scope
fixed: missing execinfo and cxxabi headers
#if NV_OS_DARWIN
# include <unistd.h> // getpid
# include <sys/types.h>
# include <sys/sysctl.h> // sysctl
# include <ucontext.h>
# include <execinfo.h>
# include <cxxabi.h>
#endif
3. context
nvidia-texture-tools 2/src/nvcore/Debug.cpp: In function
‘void*<unnamed>::callerAddress(void*)’:
nvidia-texture-tools 2/src/nvcore/Debug.cpp:176: error: ‘struct
__darwin_mcontext32’ has no
member named ‘ss’
These errors are related to Unix'03 compliance; as of Leopard, many nonstandard
symbols have
been moved out of the user namespace.
fixed: workaround using the flag __DARWIN_UNIX03
static void * callerAddress(void * secret)
{
# if NV_OS_DARWIN && NV_CPU_PPC && !__DARWIN_UNIX03
    ucontext_t * ucp = (ucontext_t *)secret;
    return (void *)ucp->uc_mcontext->ss.srr0;
# elif NV_OS_DARWIN && NV_CPU_X86 && !__DARWIN_UNIX03
    ucontext_t * ucp = (ucontext_t *)secret;
    return (void *)ucp->uc_mcontext->ss.eip;
# elif NV_CPU_X86_64
    // #define REG_RIP REG_INDEX(rip) // seems to be 16
    ucontext_t * ucp = (ucontext_t *)secret;
    return (void *)ucp->uc_mcontext.gregs[REG_RIP];
# elif NV_CPU_X86 && !__DARWIN_UNIX03
    ucontext_t * ucp = (ucontext_t *)secret;
    return (void *)ucp->uc_mcontext.gregs[14 /*REG_EIP*/];
# elif NV_CPU_PPC && !__DARWIN_UNIX03
    ucontext_t * ucp = (ucontext_t *)secret;
    return (void *)ucp->uc_mcontext.regs->nip;
# else
    return NULL;
# endif
}
Original issue reported on code.google.com by [email protected]
on 19 May 2008 at 6:17
I'd like to see continued support for the Photoshop DDS load/save plugin.
I'm currently using version 8.23.1101.1715 in WinXP Pro sp2 with Photoshop
CS3.
Many game development artists are using this tool. There is really no other
replacement for what it does.
Current problems:
1. Cubemap mips are loading incorrectly. See this post:
http://developer.nvidia.com/forums/index.php?showtopic=1092
2. Volume texture is not saving properly. See this post:
http://developer.nvidia.com/forums/index.php?showtopic=734
Preferred handling of cubemaps would be a pow-2 square 2D image, with six
Layers, one for each of the six sides. The plugin could assume the Layers
to be arranged in order from top to bottom, +X to -Z. If "Use existing
mips" is turned on, the plugin could assume each layer to contain its own
mips, arranged left-to-right from largest mip to smallest mip (smallest
being 1 pixel).
Volume maps could be a similar setup using Layers. Top layer = top slice. #
of layers = # of slices. This could support non-square dimensions as well,
though I'm not sure how the plugin could deduce the mip layout for "Use
existing mips" if the image is non-square.
Thanks for considering this!
Eric
Original issue reported on code.google.com by [email protected]
on 11 Feb 2008 at 7:15
The CUDA DXT1 compressor produces artifacts when compiled with the CUDA 2.0
toolkit.
There's a simple workaround, but it reduces performance by 5-10%. Ideally
this should be solved before the final release of CUDA 2.0.
Original issue reported on code.google.com by [email protected]
on 21 May 2008 at 6:56
What steps will reproduce the problem?
1. convert the attached dds file to a tga using NVTT
What is the expected output? What do you see instead?
A nice picture without artifact. I see a vertical black line in the center
of the picture. This artifact is not present with other DDS readers.
What version of the product are you using? On what operating system?
Latest svn checkout (revision 18).
Please provide any additional information below.
Note that test.dds is flipped along the x axis; it just happens to be this
file.
Original issue reported on code.google.com by [email protected]
on 19 Jun 2007 at 2:24
Attachments:
I am new to this field of image processing and GPUs, so please bear with me.
Where would you use a hardware-accelerated DXT compressor? Is it to store
compressed textures in a buffer? What if I want to read bitmaps from the
GPU (glReadPixels()) and process them, perhaps by sending them to a
different machine: could I use this compressor to compress those bitmaps?
Then I wouldn't need to move them from the GPU to the CPU to compress in
software, so it would be faster.
Also, do you have more information on the nvimagediff tool?
Thanks.
Original issue reported on code.google.com by [email protected]
on 29 Aug 2007 at 2:28
NVTT should be reentrant.
Original issue reported on code.google.com by [email protected]
on 6 Mar 2008 at 5:17
The CPU compressors output compressed data by calling the output callback
as soon as a compressed block is available. This is good to minimize memory
allocations inside the library, but in managed applications it causes many
managed/unmanaged transitions. It would be good if it was possible to
specify a buffer size so that the callback is only invoked when the buffer
is full.
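A sketch of what buffering in front of the callback could look like (generic C++, not the NVTT OutputHandler API; the buffer size and callback signature are assumptions):

```cpp
#include <cstring>
#include <vector>

// Accumulate compressed output and invoke the user callback only when the
// buffer fills, reducing the number of managed/unmanaged transitions.
class BufferedWriter
{
public:
    typedef void (*Callback)(const void * data, int size, void * user);

    BufferedWriter(int bufferSize, Callback cb, void * user)
        : m_buffer(bufferSize), m_used(0), m_cb(cb), m_user(user) {}

    // Copy incoming blocks into the buffer, flushing whenever it fills.
    void writeData(const void * data, int size)
    {
        const char * p = (const char *)data;
        while (size > 0)
        {
            int space = (int)m_buffer.size() - m_used;
            int n = size < space ? size : space;
            memcpy(&m_buffer[m_used], p, n);
            m_used += n; p += n; size -= n;
            if (m_used == (int)m_buffer.size()) flush();
        }
    }

    // Deliver any remaining data to the callback.
    void flush()
    {
        if (m_used > 0) { m_cb(&m_buffer[0], m_used, m_user); m_used = 0; }
    }

private:
    std::vector<char> m_buffer;
    int m_used;
    Callback m_cb;
    void * m_user;
};
```

The library could expose just the buffer size as an option and keep the existing per-block callback semantics when the size is zero.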
Original issue reported on code.google.com by [email protected]
on 11 Apr 2008 at 9:57