Existing one-shot face reenactment methods either present obvious artifacts in large pose transformations, or cannot well preserve the identity information in the source images, or fail to meet the requirements of real-time applications due to the intensive amount of computation involved. In this paper, we introduce \(\text{}^{\rho}\), which can achieve real-time performance for face images of \(1440\times 1440\) resolution with a desktop GPU and \(256\times 256\) resolution with a mobile CPU.